In this article, we are going to start learning system programming by writing a copy program like cp, using low-level IO functions rather than the standard IO functions. First, let's see what system programming is and what low-level IO functions in C are.
System Programming:
System programming aims at producing software that is used by other software instead of directly used by users. In system programming, mostly low-level languages like C are used. System programmers are supposed to have an in-depth understanding of kernel and hardware.
Low-level IO functions in C:
When you generally write your program in C, you end up using functions like fopen, fclose, etc. These functions are wrappers over the low-level IO functions open, close, read and write. In this article, we are going to use these low-level functions to write our copy program.
Let's first see the difference between the two. If you are using a Linux system, you can see their usage with the commands below.
For the open function: man 2 open. You will see something like the below prototypes:

int open(const char *pathname, int flags);
int open(const char *pathname, int flags, mode_t mode);

You can see here that open takes a pathname and flags, where the flags define which mode to open the file in. Similarly, there are man pages for read, close and write. You can also search for their details online.
Now let’s write code to accomplish our task.
For the fopen function: man fopen will show you the syntax to use it. But remember, fopen is a wrapper over open, not a low-level call.
#include <fcntl.h>
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>

#define BUFFERSIZE 10000

int main(){
    int filein, fileout;
    char buffer[BUFFERSIZE];
    int count;

    /* open the source file read-only */
    if ((filein = open("source_file", O_RDONLY)) < 0){
        perror("source_file");
        exit(1);
    }
    /* create/open the destination file write-only */
    if ((fileout = open("destination_file", O_WRONLY | O_CREAT, 0644)) < 0){
        perror("destination_file");
        exit(2);
    }
    while ((count = read(filein, buffer, BUFFERSIZE)) > 0)
        write(fileout, buffer, count);
    close(filein);
    close(fileout);
    return 0;
}
Now you have to compile this code using GCC. Use the command below.
gcc -o copy copy.c
Now you can execute it using the command below.
./copy
Code walkthrough
Let's take a look at the code now. First of all, we included the headers that are required. Next, we defined the buffer size (BUFFERSIZE) in which we will read the data; we will read up to 10000 bytes in each iteration. Then our main program starts.
Next, we defined two variables, filein and fileout, to keep the file descriptors for the input and output files, and a buffer variable to hold the data. Then we try to open the source file with O_RDONLY, which means read-only, and get its file descriptor. We check that the value of the file descriptor is not a negative integer; a negative value means the open call failed.
Next, we do the same for the output file's descriptor with O_WRONLY and O_CREAT, which mean write-only and create-if-not-exists. Once these two descriptors are there, we can start reading the file into the buffer and writing it to the destination file. read returns in count how many bytes were read, and we write exactly count bytes from the buffer. The whole loop breaks once read returns 0, which is when the file has been read completely.
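For readers who want to experiment without compiling, the same syscall sequence can be sketched with Python's os module, whose os.open/os.read/os.write/os.close functions are thin wrappers over the same low-level calls (the file names below are placeholders created just for the demo):

```python
import os

# create a small source file to copy (setup for the demo only)
with open("source_file", "wb") as f:
    f.write(b"hello low-level IO\n" * 3)

BUFFERSIZE = 10000

# os.open/os.read/os.write/os.close map directly onto the
# open/read/write/close system calls used in the C program
filein = os.open("source_file", os.O_RDONLY)
fileout = os.open("destination_file", os.O_WRONLY | os.O_CREAT, 0o644)

while True:
    chunk = os.read(filein, BUFFERSIZE)   # returns b"" at end of file
    if not chunk:
        break
    os.write(fileout, chunk)

os.close(filein)
os.close(fileout)
```

Like the C version, this does not pass O_TRUNC, so an existing, longer destination file would keep its trailing bytes.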
Now you can inspect the content of your destination file and compare it with the source.
This was a small article on how to write a basic Copy program in C using low-level function calls.
If you like the article please share.
https://www.learnsteps.com/system-programming-basic-copy-program-in-c-using-low-level-io/
high-level function:

def remote_exec(source):
    """return channel object for communicating with the asynchronously
    executing 'source' code, which will have a corresponding 'channel'
    object in its executing namespace."""
With remote_exec you send source code to the other side and get both a local and a remote Channel object, which you can use to have the local and remote sides communicate data in a structured way.

API for sending and receiving anonymous values:

channel.send(item): sends the given item to the other side of the channel, possibly blocking if the sender queue is full. Note that items need to be marshallable (all basic Python types are).

channel.receive(): receives an item that was sent from the other side, possibly blocking if there is none. Note that exceptions from the other side will be reraised as gateway.RemoteError exceptions containing a textual representation of the remote traceback.

channel.waitclose(timeout=None): waits until this channel is closed. Note that a closed channel may still hold items that will be received or sent.

channel.close(): closes this channel on both the local and the remote side. A remote side blocking on receive() on this channel will be woken up and see an EOFError exception.
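As a rough stand-in for these semantics (this is an illustration, not the execnet implementation), a channel pair can be modeled with two queues; each endpoint sends on one queue and receives on the other:

```python
import queue

class Channel:
    """Toy model of an execnet-style channel endpoint (illustration only)."""
    def __init__(self, send_q, recv_q):
        self._send_q = send_q
        self._recv_q = recv_q

    def send(self, item):
        # may block if the sender queue is full (bounded queue)
        self._send_q.put(item)

    def receive(self):
        # may block if nothing has been sent yet
        return self._recv_q.get()

def make_channel_pair(maxsize=0):
    # two one-directional queues wired up in opposite directions
    a_to_b, b_to_a = queue.Queue(maxsize), queue.Queue(maxsize)
    return Channel(a_to_b, b_to_a), Channel(b_to_a, a_to_b)

local, remote = make_channel_pair()
local.send(42)            # the "local" side sends a value...
value = remote.receive()  # ...and the "remote" side receives it
```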
http://codespeak.net/py/dist/execnet.html
IFeed::Xml Method
Returns the XML for the feed.
Syntax

HRESULT Xml(
    [in] LONG count,
    [in] FEEDS_XML_SORT_PROPERTY sortProperty,
    [in] FEEDS_XML_SORT_ORDER sortOrder,
    [in] FEEDS_XML_FILTER_FLAGS filterFlags,
    [in] FEEDS_XML_INCLUDE_FLAGS includeFlags,
    [out, retval] BSTR *xml
);

Parameters
- count
- [in] LONG value that specifies the maximum number of items to return. Pass FEEDS_XML_COUNT_MAX to return all the items in the feed.
- sortProperty
- [in] A FEEDS_XML_SORT_PROPERTY value that defines the sort column.
- sortOrder
- [in] A FEEDS_XML_SORT_ORDER value that defines the sort direction.
- filterFlags
- [in] A FEEDS_XML_FILTER_FLAGS value that specifies whether to include items based on read status.
- includeFlags
- [in] A FEEDS_XML_INCLUDE_FLAGS value that specifies whether to include namespace extensions to Really Simple Syndication (RSS) 2.0 in the XML.
- xml
- [out, retval] Pointer to a BSTR that receives the XML source for the feed.
Return Value
Returns S_OK if successful, or an error value otherwise.
Remarks
If you specify FXSP_NONE in sortProperty, you must also specify FXSO_NONE in sortOrder and vice versa. If these values are not used together, this method will return E_INVALIDARG.
The caller is responsible for freeing the memory of the returned BSTR with SysFreeString.
See Also
IFeedItem::Xml
http://msdn.microsoft.com/en-us/library/ms686396(v=vs.85).aspx
This article aims to explain what an XPath expression is and why XPath can be extremely useful to a C# programmer.
When I first started .NET programming, I was immediately exposed to the use of XML. It was everywhere, and I had hardly any exposure to it previously. After understanding why XML documents were used so widely, I took the decision to incorporate XML files into my next application. So off I went, added the System.Xml namespace to my project, and created an XmlDocument object. I called the LoadXml() method and passed in a valid XML string. "Great!" I thought, but soon discovered I had no idea how I would get the data I wanted out of the XmlDocument object.
Throughout this guide, I will refer to the following XML file:
<?xml version="1.0" encoding="utf-8" ?>
<books>
<book>
<title>A beginners guide to XPath</title>
<author>Gary Francis</author>
<description>A book that explains XPath for beginners</description>
<data type="Price">12.00</data>
<data type="ISBN">1234567890</data>
</book>
<book>
<title>Advanced C# Programming</title>
<author>A. Uther</author>
<description>Advanced applied C# techniques.</description>
<data type="Price">47.00</data>
</book>
<book>
<title>Understanding C# for beginners</title>
<author>Any body</author>
<description>How to get started with C# and .NET</description>
<data type="Price">12.00</data>
<data type="Comment">This was a great book... It helped Loads.</data>
<data type="Comment">Excellent material if you new to C#.</data>
</book>
</books>
The above XML file contains information about books. As you can imagine, this file could be a lot more complex, but for the sake of simplicity, we will leave it like this.
Before we can do anything useful with this data, we need to create an XmlDocument object and load the data into it. From within Visual Studio, create a new C# Windows Forms application. On the form, drop a button, and double click the button to bring up its event handler. The following code loads the XML data from a file:
XmlDocument document = null;
XmlNodeList nodeList = null;
XmlNode node = null;
// Try and load xml data into an Xml document object and throw an
// error message if this fails
try
{
document = new XmlDocument();
document.Load("Data.xml");
}
catch (Exception ex)
{
MessageBox.Show("Error loading 'Data.xml'. Exception: " + ex.Message);
}
In order for the above code to compile, you will need to reference the System.Xml namespace at the top of your code file. This example also assumes that you have a file called "Data.xml" in the output directory of your project. The easiest way to do this is to add a new XML file into your project, insert the above XML data into the file, and name the file Data.xml. Finally, you should set the "Copy To Output Directory" property of the XML file to "Copy Always".
Now that we have successfully loaded the data, we need to reference some data. To do this, we will use an XPath expression. The data we are going to try and retrieve initially is a NodeList of all book elements within the XML file. The XPath expression to achieve this is:
/books/book
Don't worry too much that you don't know what this means as all will be revealed shortly. So we add the following code to the method:
// Try and retrieve all book nodes
nodeList = document.SelectNodes("/books/book");
This will populate our XmlNodeList with all of the book elements. Within each of these elements, we know there is going to be a <title> element, so we can use another XPath query to access it. The following code will achieve this for each of the book elements we just retrieved:
foreach (XmlNode book in nodeList)
{
// Show a message with the book title
MessageBox.Show(book.SelectSingleNode("title").InnerText);
}
That was simple, right? So, right now, I bet you can imagine all sorts of ways you could use XPath in your applications, and you would be right. There is still the slight problem of the XPath syntax.
As this is just a beginner's tutorial, I will go over only the basic XPath expressions and what they mean. As time goes on, you might want to try more complex XPath queries such as reading XML data in reverse (going from a child node to a parent node). This is outside the scope of this document, but there are plenty of other references on the internet that should be able to help.
The first important thing you should remember when using XPath is the context you are in when you evaluate the expression. For example, if we used the relative expression books whilst we were trying to select the title of a book, we would in fact have been looking for a books node within the book node we were already in. For this to have worked, our XML document would have had to look something like:
<books>
<book>
<books>
<book></book>
</books>
</book>
</books>
From the above example, I hope you can understand the importance of context as this will become apparent when you try to put XPath to use in your applications.
So the first thing we might need to know how to do using XPath is selecting some nodes. Here are some examples of how you can do this:

nodename (e.g. books)
This would select all nodes with the name "books".

/ (e.g. books/book)
This would select all of the book nodes contained within the books element.

// (e.g. books//book)
This would also select all of the book nodes contained within the books element, but if there were book elements outside of the books element, they would also be selected.

@ (e.g. books/book/data[@type='Price'])
This would select all data elements of a book that have a type attribute with the value 'Price'.
It is also important to note that you can use indexes when you want to select a particular node. For example, /books[1] will return the first "books" element. Notice that indexes are not zero based.
Note: This is the W3C specification. In some browsers, this is implemented incorrectly and the browser treats the indexes as zero based. This should only be a concern if you are copying code that was originally written for certain browsers. I believe IE5 and IE6 fall foul of this, but to be truthful I have never investigated, so this may or may not be the case.
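The same expressions can be tried outside C# as well; for instance, Python's xml.etree.ElementTree supports a limited XPath subset, which is handy for quick experiments (the XML below is a shortened version of the books document from above):

```python
import xml.etree.ElementTree as ET

xml = """
<books>
  <book>
    <title>A beginners guide to XPath</title>
    <data type="Price">12.00</data>
  </book>
  <book>
    <title>Advanced C# Programming</title>
    <data type="Price">47.00</data>
  </book>
</books>
"""

root = ET.fromstring(xml)  # root is the <books> element

# relative path: all book children of the current (books) context
books = root.findall("book")

# attribute predicate: every data element with type='Price', anywhere below
prices = [d.text for d in root.findall(".//data[@type='Price']")]

# positional predicate: XPath indexes are 1-based, not 0-based
first_title = root.find("book[1]/title").text
```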
I would seriously recommend that you download the source files for this project. It will allow you to play around with different XML data scenarios and XPath expressions easily and will help you to learn. I always say that it is better to learn from your mistakes than it is to not bother trying.
Well, that is about it for my first contribution to CodeProject. If you find this useful, or you would like me to write a follow up article which goes into some more detail on some of the more complex XPath expressions that might be needed, drop me an e-mail. My address can be found on my CodeProject profile.
https://www.codeproject.com/Articles/18960/A-beginner-s-guide-to-XPath?PageFlow=FixedWidth
Hello!
How do I install Cordova plugins in my Ionic Capacitor app? I have seen this document. I want to install a call-number plugin in my app, so I have run this command:

npm install --save @ionic-native/call-number

When I test the app on a device it gives me an error saying "plugin not installed". When I run npx cap sync it shows "Found 0 Capacitor plugins for android:". How do I make Cordova plugins work in my Capacitor app? Somebody please help me.
Hello!
You are only installing the Ionic Native wrapper for the plugin, you also need to install the plugin itself:
npm install call-number --save
Now I have run the following commands:

npm install --save call-number
npx cap sync

Now it has detected the plugin and it is showing:

Found 1 Cordova plugin for android: CallNumber (1.0.1)
But I tried to import the plugin in my app.module file like this:

import { CallNumber } from 'call-number';

It is showing me an error like this:

[ts] Cannot find module 'call-number'.

Do I need to import it in the app.module file or not?
You should be importing it from @ionic-native:

import { CallNumber } from '@ionic-native/call-number';
Thank you… Earlier I installed only call-number or @ionic-native/call-number, never both. Now I have installed both and it is working. Should I install both the Ionic Native wrapper and the plugin itself for other Cordova plugins too?
If you want to use the Ionic Native wrapper you need to install both; you can also install just the plugin on its own, though. Generally, if the plugin is available in Ionic Native, you should use it.
Alright thank you so much
I tried to follow the same pattern to install speech recognition by installing the native wrapper like this:

npm install --save @ionic-native/speech-recognition

When I run npx cap sync it just shows:

Found 1 Cordova plugin for android: CallNumber (1.0.1)

I cannot see speech recognition here, so I need to install the speech recognition plugin itself as well. But from where should I install the plugin?
I found the plugin… My issue is solved.
https://forum.ionicframework.com/t/how-to-use-cordova-plugins-in-capacitor-app/130611
C++ References
In C++, when we declare a variable as a reference, it becomes an alias of an existing variable. We can declare a variable as a reference using "&" in the declaration.
Example :
#include <iostream>
using namespace std;

int main()
{
    int A = 10;

    // refer is a reference to A.
    int& refer = A;

    // Value of A is now changed to 20
    refer = 20;
    cout << "A = " << A << endl;

    // Value of A is now changed to 30
    A = 30;
    cout << "refer = " << refer << endl;

    return 0;
}
Output:
A = 20
refer = 30
In the above example, A is a variable of type int, and refer is a reference to A. So when we change the value of A, the value of the reference variable also changes, and vice versa.
Difference Between Pointer and Reference
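As a minimal sketch of the difference named in the heading above: a reference must be bound when it is declared and can never be reseated to another variable, while a pointer holds an address, can be null, and can be re-pointed at will.

```cpp
#include <cassert>

// A reference is an alias: assigning through it changes the referent,
// and the reference itself can never be made to refer to another object.
int demo_reference() {
    int a = 1, b = 2;
    int& r = a;   // must be initialised; r is now another name for a
    r = b;        // copies b's VALUE into a; r still refers to a
    return a;     // a is now 2
}

// A pointer is a separate object holding an address; it can be reseated.
int demo_pointer() {
    int a = 1, b = 2;
    int* p = &a;  // p holds the address of a
    p = &b;       // reseating: p now points at b (a is unchanged)
    return *p;    // the value pointed to, i.e. b
}
```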
Program
A C++ program using a reference to print the value of student_age:
#include <iostream>
using namespace std;

int main()
{
    int student_age = 10;
    int &age = student_age; // reference variable

    cout << " value of student_age :" << student_age << endl;
    cout << " value of age :" << age << endl;

    age = age + 10;

    cout << "\nAFTER ADDING 10 INTO REFERENCE VARIABLE \n";
    cout << " value of student_age :" << student_age << endl;
    cout << " value of age :" << age << endl;

    return 0;
}
Output:
value of student_age :10
value of age :10

AFTER ADDING 10 INTO REFERENCE VARIABLE
value of student_age :20
value of age :20
https://letsfindcourse.com/tutorials/cplusplus-tutorials/cpp-references
I've started with this program:
def calcFinal():
while True:
try:
asg1Mark = int(raw_input("Enter Asg1 mark: "))
asg2Mark = int(raw_input("Enter Asg2 mark: "))
examMark = int(raw_input("Enter Exam mark: "))
final = asg1Mark + asg2Mark + examMark
return final
except ValueError:
print 'Please enter only numbers'
def mark_check(finalMark):
print "Final mark is %d, grade is" % finalMark,
if finalMark < 50:
print "Sorry, You have Fail"
elif finalMark <= 65:
print "You Passed. Well done"
elif finalMark <= 75:
print "You have earned a Credit :-) "
elif finalMark <= 85:
print "Distinction !!!"
elif finalMark >= 85:
print "Congrats you have earned an High Distinction"
else:
print "Invaild selction has been entered. Please try again"
def main():
finalMark = calcFinal()
mark_check(finalMark)
if __name__ == "__main__":
main()
A student MUST pass the final exam in order to pass the unit. Furthermore, if a student passes the exam, but fails the unit by not more than 5% of the total assessment, they are offered an AA – a chance to pass the unit by resubmitting an assignment and attaining a nominated standard.
Similarly, if a student fails the exam, but their total marks are within 5% of a passing grade, they are offered an AE – a chance to resit the exam and attain a passing grade.
What would be the best way to approach this question?
Would it be to create another loop?
Any help would be great.
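One way to structure the AA/AE rules described above is a pure classification function that mark_check could branch on before printing the grade bands. The sketch below assumes marks out of 100, a pass mark of 50, and an exam marked out of exam_max with a 50% exam pass requirement; all of these weightings are assumptions, since the thread does not give them:

```python
def classify(asg1, asg2, exam, exam_max=50, pass_mark=50):
    """Classify a result under the assumed weightings described above."""
    total = asg1 + asg2 + exam
    exam_passed = exam >= exam_max / 2.0   # assumption: must score 50% of the exam

    if exam_passed and total >= pass_mark:
        return "PASS"
    # passed the exam but failed the unit by no more than 5 marks -> AA
    if exam_passed and total >= pass_mark - 5:
        return "AA"
    # failed the exam but total within 5 marks of a pass -> AE
    if not exam_passed and total >= pass_mark - 5:
        return "AE"
    return "FAIL"
```

With this in place, no extra loop is needed: calcFinal collects the marks, and a single classification call decides which message to print.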
https://www.daniweb.com/programming/software-development/threads/275507/final-grades-program
File formats¶
Summary of file formats¶
Parameter files¶
- mdp
- run parameters, input for gmx grompp and gmx convert-tpr
- m2p
- input for gmx xpm2ps
Structure files¶
Topology files¶
Trajectory files¶
- tng
- Any kind of data (compressed, portable, any precision)
- trr
- x, v and f (binary, full precision, portable)
- xtc
- x only (compressed, portable, any precision)
- gro
- x and v (ascii, any precision)
- g96
- x only (ascii, fixed high precision)
- pdb
- x only (ascii, reduced precision)
- Formats for full-precision data:
- tng or trr
- Generic trajectory formats:
- tng, xtc, trr, gro, g96, or pdb
Energy files¶
Other files¶
- dat
- generic, preferred for input
- edi
- essential dynamics constraints input for gmx mdrun
- eps
- Encapsulated Postscript
- log
- log file
- map
- colormap input for gmx do_dssp
- mtx
- binary matrix data
- out
- generic, preferred for output
- tex
- LaTeX input
- xpm
- ascii matrix data, use gmx xpm2ps to convert to eps
- xvg
- xvgr input
File format details¶
cpt¶
The cpt file extension stands for portable checkpoint file; gmx mdrun writes these at regular intervals so that a stopped or crashed run can be continued.
dat¶
Files with the dat file extension contain generic input or output. As it is not possible to categorise all data file formats, GROMACS has a generic file format called dat of which no format is given.
dlg¶
The dlg file format is used as input for the gmx view trajectory viewer. These files are not meant to be altered by the end user.
Sample¶
grid 39 18 { group "Bond Options" 1 1 16 9 { radiobuttons { " Thin Bonds" " Fat Bonds" " Very Fat Bonds" " Spheres" } "bonds" "Ok" " F" "help bonds" } group "Other Options" 18 1 20 13 { checkbox " Show Hydrogens" "" "" "FALSE" "help opts" checkbox " Draw plus for atoms" "" "" "TRUE" "help opts" checkbox " Show Box" "" "" "TRUE" "help opts" checkbox " Remove PBC" "" "" "FALSE" "help opts" checkbox " Depth Cueing" "" "" "TRUE" "help opts" edittext "Skip frames: " "" "" "0" "help opts" } simple 1 15 37 2 { defbutton "Ok" "Ok" "Ok" "Ok" "help bonds" } }
edi¶
Files with the edi file extension contain information for gmx mdrun to run Molecular Dynamics with Essential Dynamics constraints.
edr¶
The edr file extension stands for portable energy file. The energies are stored using the xdr protocol.
See also gmx energy.
ene¶
The ene file extension stands for binary energy file. It holds the energies as generated during your gmx mdrun.
The file can be transformed to a portable energy file (portable across hardware platforms), the edr file using the program gmx eneconv.
See also gmx energy.
eps¶
g96¶
A g96 file is organized in the following blocks:

- Header block:
  - TITLE (mandatory)
- Frame blocks:
  - TIMESTEP (optional)
  - POSITION/POSITIONRED (mandatory)
  - VELOCITY/VELOCITYRED (optional)
  - BOX (optional)
See the GROMOS-96 manual for a complete description of the blocks.
Note that all GROMACS programs can read compressed or g-zipped files.
gro¶
Files with the gro file extension contain a molecular structure in Gromos87 format. Lines contain the following information (top to bottom):

- title string (free format string, optional time in ps after ‘t=’)
- number of atoms (free format integer)
- one line for each atom (fixed format, see below)
- box vectors (free format, space separated reals), values: v1(x) v2(y) v3(z) v1(y) v1(z) v2(x) v2(z) v3(x) v3(y), the last 6 values may be omitted (they will be set to zero). GROMACS only supports boxes with v1(y)=v1(z)=v2(z)=0.
This format is fixed, i.e. all columns are in a fixed position. Optionally (for now only with trjconv) you can write gro files with any number of decimal places; the format will then be n+5 positions with n decimal places (n+1 for velocities) instead of 8 positions with 3 decimal places (4 for velocities). Upon reading, the precision will be inferred from the distance between the decimal points (which will be n+5). Columns contain the following information (from left to right):
- residue number (5 positions, integer)
- residue name (5 characters)
- atom name (5 characters)
- atom number (5 positions, integer)
- position (in nm, x y z in 3 columns, each 8 positions with 3 decimal places)
- velocity (in nm/ps (or km/s), x y z in 3 columns, each 8 positions with 4 decimal places)
Note that separate molecules or ions (e.g. water or Cl-) are regarded as residues. If you want to write such a file in your own program without using the GROMACS libraries you can use the following formats:
- C format
"%5d%-5s%5s%5d%8.3f%8.3f%8.3f%8.4f%8.4f%8.4f"
- Fortran format
(i5,2a5,i5,3f8.3,3f8.4)
- Pascal format
- This is left as an exercise for the user
Note that this is the format for writing, as in the above example fields may be written without spaces, and therefore can not be read with the same format statement in C.
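To illustrate the fixed-format atom line, here is a small Python sketch that writes one atom line with the C format string above and reads the fields back by column position rather than by splitting on spaces (the atom values are made up):

```python
def write_gro_atom(resnr, resname, atomname, atomnr, x, y, z, vx, vy, vz):
    # mirrors the C format "%5d%-5s%5s%5d%8.3f%8.3f%8.3f%8.4f%8.4f%8.4f"
    return ("%5d%-5s%5s%5d%8.3f%8.3f%8.3f%8.4f%8.4f%8.4f"
            % (resnr, resname, atomname, atomnr, x, y, z, vx, vy, vz))

def read_gro_atom(line):
    # fields live at fixed columns, so slice rather than split on whitespace
    return {
        "resnr": int(line[0:5]),
        "resname": line[5:10].strip(),
        "atomname": line[10:15].strip(),
        "atomnr": int(line[15:20]),
        "x": float(line[20:28]),
        "y": float(line[28:36]),
        "z": float(line[36:44]),
    }

line = write_gro_atom(1, "SOL", "OW", 1,
                      0.126, 1.624, 1.679, 0.1227, -0.0580, 0.0434)
atom = read_gro_atom(line)
```

Slicing by column is exactly why the note above says such lines can not always be read back with a whitespace-based format statement.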
hdb¶
The hdb file extension stands for hydrogen database. Such a file is needed by gmx pdb2gmx when building hydrogen atoms that were either originally missing, or that were removed with -ignh.
itp¶
The itp file extension stands for include topology. These files are included in topology files (with the top extension).
log¶
Logfiles are generated by some GROMACS programs and are usually in human-readable format. Use more logfile to view them.
m2p¶
The m2p file format contains input options for the gmx xpm2ps program. All of these options are very easy to comprehend when you look at the PostScript(tm) output from gmx xpm2ps.
; Command line options of xpm2ps override the parameters in this file black&white = no ; Obsolete titlefont = Times-Roman ; A PostScript Font titlefontsize = 20 ; Font size (pt) legend = yes ; Show the legend legendfont = Times-Roman ; A PostScript Font legendlabel = ; Used when there is none in the .xpm legend2label = ; Used when merging two xpm's legendfontsize = 14 ; Font size (pt) xbox = 2.0 ; x-size of a matrix element ybox = 2.0 ; y-size of a matrix element matrixspacing = 20.0 ; Space between 2 matrices xoffset = 0.0 ; Between matrix and bounding box yoffset = 0.0 ; Between matrix and bounding box x-major = 20 ; Major ticks on x axis every .. frames x-minor = 5 ; Id. Minor ticks x-firstmajor = 0 ; First frame for major tick x-majorat0 = no ; Major tick at first frame x-majorticklen = 8.0 ; x-majorticklength x-minorticklen = 4.0 ; x-minorticklength x-label = ; Used when there is none in the .xpm x-fontsize = 16 ; Font size (pt) x-font = Times-Roman ; A PostScript Font x-tickfontsize = 10 ; Font size (pt) x-tickfont = Helvetica ; A PostScript Font y-major = 20 y-minor = 5 y-firstmajor = 0 y-majorat0 = no y-majorticklen = 8.0 y-minorticklen = 4.0 y-label = y-fontsize = 16 y-font = Times-Roman y-tickfontsize = 10 y-tickfont = Helvetica
map¶
This file maps matrix data to RGB values which is used by the gmx do_dssp program.
The format of this file is as follows: the first line contains the number of elements in the colormap. Then, for each line: the first character is a code for the secondary structure type, then comes a string for use in the legend of the plot, and then the R (red), G (green) and B (blue) values.
In this case the colors are (in order of appearance): white, red, black, cyan, yellow, blue, magenta, orange.
8 ~ Coil 1.0 1.0 1.0 E B-Sheet 1.0 0.0 0.0 B B-Bridge 0.0 0.0 0.0 S Bend 0.0 0.8 0.8 T Turn 1.0 1.0 0.0 H A-Helix 0.0 0.0 1.0 G 3-Helix 1.0 0.0 1.0 I 5-Helix 1.0 0.6 0.0
mdp¶
See the user guide for a detailed description of the options.
Below is a sample mdp file. The ordering of the items is not important, but if you enter the same thing twice, the last is used (gmx grompp gives you a note when overriding values). Dashes and underscores on the left hand side are ignored.
The values of the options are values for a 1 nanosecond MD run of a protein in a box of water.
Note: The parameters chosen (e.g., short-range cutoffs) depend on the force field being used.
integrator = md dt = 0.002 nsteps = 500000 nstlog = 5000 nstenergy = 5000 nstxout-compressed = 5000
With this input gmx grompp will produce a commented file with the default name
mdout.mdp. That file will contain the above options, as well as all other
options not explicitly set, showing their default values.
mtx¶
Files with the mtx file extension contain a matrix. The file format is identical to the trr format. Currently this file format is only used for hessian matrices, which are produced with gmx mdrun and read by gmx nmeig.
ndx¶
The GROMACS index file (usually called index.ndx) contains some user definable sets of atoms. The file can be read by most analysis programs, by the graphics program (gmx view) and by the preprocessor (gmx grompp). Most of these programs create default index groups when no index file is supplied, so you only need to make an index file when you need special groups.
First the group name is written between square brackets. The following atom numbers may be spread out over as many lines as you like. The atom numbering starts at 1.
An example file is here:
[ Oxygen ] 1 4 7 [ Hydrogen ] 2 3 5 6 8 9
There are two groups and nine atoms in total. The first group, Oxygen, has 3 elements. The second group, Hydrogen, has 6 elements.
An index file generation tool is available: gmx make_ndx.
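Since the format is just bracketed group names followed by whitespace-separated atom numbers, a minimal parser is easy to sketch (an illustration only, not a GROMACS tool; the ';'-comment handling is an assumption based on other GROMACS file formats):

```python
def parse_ndx(text):
    """Parse index-file text into {group_name: [atom_numbers]} (1-based)."""
    groups = {}
    current = None
    for line in text.splitlines():
        line = line.split(";")[0].strip()   # assume ';' starts a comment
        if not line:
            continue
        if line.startswith("[") and line.endswith("]"):
            # "[ Oxygen ]" -> new group named "Oxygen"
            current = line[1:-1].strip()
            groups[current] = []
        elif current is not None:
            # atom numbers may be spread over any number of lines
            groups[current].extend(int(tok) for tok in line.split())
    return groups

ndx = """[ Oxygen ]
1 4 7
[ Hydrogen ]
2 3 5 6 8 9
"""
groups = parse_ndx(ndx)
```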
out¶
Files with the out file extension contain generic output. As it is not possible to categorise all data file formats, GROMACS has a generic file format called out of which no format is given.
pdb¶
Files with the pdb extension are molecular structure files in the protein databank file format. The protein databank file format describes the positions of atoms in a molecular structure. Coordinates are read from the ATOM and HETATM records, until the file ends or an ENDMDL record is encountered. GROMACS programs can read and write a simulation box in the CRYST1 entry. The pdb format can also be used as a trajectory format: several structures, separated by ENDMDL, can be read from or written to one file.
rtp¶
The rtp file extension stands for residue topology. Such a file is needed by gmx pdb2gmx to make a GROMACS topology for a protein contained in a pdb file. The file contains the default interaction type for the 4 bonded interactions and residue entries, which consist of atoms and optionally bonds, angles dihedrals and impropers. Parameters can be added to bonds, angles, dihedrals and impropers, these parameters override the standard parameters in the itp files. This should only be used in special cases. Instead of parameters a string can be added for each bonded interaction, the string is copied to the top file, this is used for the GROMOS96 forcefield.
gmx pdb2gmx automatically generates exclusions between atoms separated by 3 bonds (except pairs of hydrogens). When more interactions need to be excluded, or some pair interactions should not be generated, an [exclusions] field can be added, followed by pairs of atom names on separate lines.
tex¶
We use LaTeX for document processing. Although the input is not so user friendly, it has some advantages over word processors.
- LaTeX knows a lot about formatting, probably much more than you.
- The input is clear, you always know what you are doing
- It makes anything from letters to a thesis
- Much more…
tng¶
Files with the .tng file extension can contain all kinds of data related to the trajectory of a simulation. For example, it might contain coordinates, velocities, forces and/or energies. Various mdp file options control which of these are written by mdrun, whether data is written with compression, and how lossy that compression can be.
This file is in portable binary format and can be read with gmx dump.
gmx dump -f traj.tng
or if you’re not such a fast reader:
gmx dump -f traj.tng | less
You can also get a quick look in the contents of the file (number of frames etc.) using:
gmx check -f traj.tng
top¶
The top file extension stands for topology. It is an ascii file which is read by gmx grompp which processes it and creates a binary topology (tpr file).
A sample file is included below:
; ; Example topology file ; [ defaults ] ; nbfunc comb-rule gen-pairs fudgeLJ fudgeQQ 1 1 no 1.0 1.0 ; The force field files to be included #include "rt41c5.itp" [ moleculetype ] ; name nrexcl Urea 3 [ atoms ] ; nr type resnr residu atom cgnr charge 1 C 1 UREA C1 1 0.683 2 O 1 UREA O2 1 -0.683 3 NT 1 UREA N3 2 -0.622 4 H 1 UREA H4 2 0.346 5 H 1 UREA H5 2 0.276 6 NT 1 UREA N6 3 -0.622 7 H 1 UREA H7 3 0.346 8 H 1 UREA H8 3 0.276 [ bonds ] ; ai aj funct c0 c1 3 4 1 1.000000e-01 3.744680e+05 3 5 1 1.000000e-01 3.744680e+05 6 7 1 1.000000e-01 3.744680e+05 6 8 1 1.000000e-01 3.744680e+05 1 2 1 1.230000e-01 5.020800e+05 1 3 1 1.330000e-01 3.765600e+05 1 6 1 1.330000e-01 3.765600e+05 [ pairs ] ; ai aj funct c0 c1 2 4 1 0.000000e+00 0.000000e+00 2 5 1 0.000000e+00 0.000000e+00 2 7 1 0.000000e+00 0.000000e+00 2 8 1 0.000000e+00 0.000000e+00 3 7 1 0.000000e+00 0.000000e+00 3 8 1 0.000000e+00 0.000000e+00 4 6 1 0.000000e+00 0.000000e+00 5 6 1 0.000000e+00 0.000000e+00 [ angles ] ; ai aj ak funct c0 c1 1 3 4 1 1.200000e+02 2.928800e+02 1 3 5 1 1.200000e+02 2.928800e+02 4 3 5 1 1.200000e+02 3.347200e+02 1 6 7 1 1.200000e+02 2.928800e+02 1 6 8 1 1.200000e+02 2.928800e+02 7 6 8 1 1.200000e+02 3.347200e+02 2 1 3 1 1.215000e+02 5.020800e+02 2 1 6 1 1.215000e+02 5.020800e+02 3 1 6 1 1.170000e+02 5.020800e+02 [ dihedrals ] ; ai aj ak al funct c0 c1 c2 2 1 3 4 1 1.800000e+02 3.347200e+01 2.000000e+00 6 1 3 4 1 1.800000e+02 3.347200e+01 2.000000e+00 2 1 3 5 1 1.800000e+02 3.347200e+01 2.000000e+00 6 1 3 5 1 1.800000e+02 3.347200e+01 2.000000e+00 2 1 6 7 1 1.800000e+02 3.347200e+01 2.000000e+00 3 1 6 7 1 1.800000e+02 3.347200e+01 2.000000e+00 2 1 6 8 1 1.800000e+02 3.347200e+01 2.000000e+00 3 1 6 8 1 1.800000e+02 3.347200e+01 2.000000e+00 [ dihedrals ] ; ai aj ak al funct c0 c1 3 4 5 1 2 0.000000e+00 1.673600e+02 6 7 8 1 2 0.000000e+00 1.673600e+02 1 3 6 2 2 0.000000e+00 1.673600e+02 ; Include SPC water topology #include "spc.itp" [ system ] Urea in Water [ molecules ] Urea 1 SOL 1000
tpr
The tpr file extension stands for portable binary run input file. This file contains the starting structure of your simulation, the molecular topology and all the simulation parameters. Because this file is in binary format it cannot be read with a normal editor. To read a portable binary run input file type:
gmx dump -s topol.tpr
or if you’re not such a fast reader:
gmx dump -s topol.tpr | less
You can also compare two tpr files using:
gmx check -s1 top1 -s2 top2 | less
trr
Files with the trr file extension contain the trajectory of a simulation. In this file all the coordinates, velocities, forces and energies are printed as you told GROMACS in your mdp file. This file is in portable binary format and can be read with gmx dump:
gmx dump -f traj.trr
or if you’re not such a fast reader:
gmx dump -f traj.trr | less
You can also get a quick look in the contents of the file (number of frames etc.) using:
gmx check -f traj.trr
xpm
The GROMACS xpm file format is compatible with the XPixMap format and is used for storing matrix data. Thus GROMACS xpm files can be viewed directly with programs like XV. Alternatively, they can be imported into GIMP and scaled to 300 DPI, using strong antialiasing for font and graphics. The first matrix data line in an xpm file corresponds to the last matrix row. In addition to the XPixMap format, GROMACS xpm files may contain extra fields. The information in these fields is used when converting an xpm file to EPS with gmx xpm2ps. The optional extra fields are:
- Before the gv_xpm declaration: title, legend, x-label, y-label and type, all followed by a string. The legend field determines the legend title. The type field must be followed by "continuous" or "discrete"; this determines which type of legend will be drawn in an EPS file. The default type is continuous.
- The xpm colormap entries may be followed by a string, which is a label for that color.
- Between the colormap and the matrix data, the fields x-axis and/or y-axis may be present, followed by the tick-marks for that axis.
The example GROMACS xpm file below contains all the extra fields. The C-comment delimiters and the colon in the extra fields are optional.
/* XPM */
/* This matrix is generated by g_rms. */
/* title:   "Backbone RMSD matrix" */
/* legend:  "RMSD (nm)" */
/* x-label: "Time (ps)" */
/* y-label: "Time (ps)" */
/* type:    "Continuous" */
static char * gv_xpm[] = {
"13 13   6 1",
"A  c #FFFFFF " /* "0" */,
"B  c #CCCCCC " /* "0.0399" */,
"C  c #999999 " /* "0.0798" */,
"D  c #666666 " /* "0.12" */,
"E  c #333333 " /* "0.16" */,
"F  c #000000 " /* "0.2" */,
/* x-axis: 0 40 80 120 160 200 240 280 320 360 400 440 480 */
/* y-axis: 0 40 80 120 160 200 240 280 320 360 400 440 480 */
"FEDDDDCCCCCBA",
"FEDDDCCCCBBAB",
"FEDDDCCCCBABC",
"FDDDDCCCCABBC",
"EDDCCCCBACCCC",
"EDCCCCBABCCCC",
"EDCCCBABCCCCC",
"EDCCBABCCCCCD",
"EDCCABCCCDDDD",
"ECCACCCCCDDDD",
"ECACCCCCDDDDD",
"DACCDDDDDDEEE",
"ADEEEEEEEFFFF"
};
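To make the matrix encoding concrete, here is a small Python sketch (not part of GROMACS; the colormap values are taken from the comments in the example above) that maps the color characters back to numbers and undoes the row reversal described earlier:

```python
def decode_xpm(rows, colormap):
    """Turn xpm character rows into numeric rows.

    The first data line of an xpm file corresponds to the LAST matrix
    row, so the line order is reversed to recover the matrix.
    """
    return [[colormap[ch] for ch in row] for row in reversed(rows)]

# Colormap from the example above: character -> value (RMSD in nm).
colormap = {"A": 0.0, "B": 0.0399, "C": 0.0798,
            "D": 0.12, "E": 0.16, "F": 0.2}

# First and last data lines of the example matrix.
matrix = decode_xpm(["FEDDDDCCCCCBA", "ADEEEEEEEFFFF"], colormap)
print(matrix[0][0])  # the bottom matrix row now comes first
```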
xtc
The xtc format is a portable format for trajectories. It uses the xdr routines for writing and reading data which was created for the Unix NFS system. The trajectories are written using a reduced precision algorithm which works in the following way: the coordinates (in nm) are multiplied by a scale factor, typically 1000, so that you have coordinates in pm. These are rounded to integer values. Then several other tricks are performed, for instance making use of the fact that atoms close in sequence are usually close in space too (e.g. a water molecule). To this end, the xdr library is extended with a special routine to write 3-D float coordinates.
All the data is stored using calls to xdr routines.
- int magic
- A magic number, for the current file version its value is 1995.
- int natoms
- The number of atoms in the trajectory.
- int step
- The simulation step.
- float time
- The simulation time.
- float box[3][3]
- The computational box which is stored as a set of three basis vectors, to allow for triclinic PBC. For a rectangular box the box edges are stored on the diagonal of the matrix.
- 3dfcoord x[natoms]
- The coordinates themselves stored in reduced precision. Please note that when the number of atoms is smaller than 9 no reduced precision is used.
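The scale-and-round step of this reduced-precision scheme can be sketched in a few lines of Python (an illustration of the idea only; the real xdr routines add further integer-compression tricks on top):

```python
def compress_coords(coords_nm, precision=1000.0):
    """Scale nm coordinates by the precision factor and round to integers.

    With the typical factor of 1000, the stored integers are effectively
    coordinates in pm.
    """
    return [round(c * precision) for c in coords_nm]

def decompress_coords(ints, precision=1000.0):
    """Invert the scaling; the rounding error is at most 0.5 / precision nm."""
    return [i / precision for i in ints]

original = [0.1234, 1.5, -0.3219]       # nm
packed = compress_coords(original)      # integers, roughly pm
restored = decompress_coords(packed)    # nm again, within 0.0005 nm
```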
Using xtc in your “C” programs
To read and write these files the following “C” routines are available:
/* All functions return 1 if successful, 0 otherwise */

extern int open_xtc(XDR *xd, char *filename, char *mode);
/* Open a file for xdr I/O */

extern void close_xtc(XDR *xd);
/* Close the file for xdr I/O */

extern int read_first_xtc(XDR *xd, char *filename,
                          int *natoms, int *step, real *time,
                          matrix box, rvec **x, real *prec);
/* Open xtc file, read xtc file first time, allocate memory for x */

extern int read_next_xtc(XDR *xd,
                         int *natoms, int *step, real *time,
                         matrix box, rvec *x, real *prec);
/* Read subsequent frames */

extern int write_xtc(XDR *xd,
                     int natoms, int step, real time,
                     matrix box, rvec *x, real prec);
/* Write a frame to xtc file */
To use the library functions, include "gromacs/fileio/xtcio.h" in your file and link with -lgmx.$(CPU).
Using xtc in your FORTRAN programs
To read and write these in a FORTRAN program, use the calls to readxtc and writextc as in the following sample program, which reads an xtc file and copies it to a new one:
      program testxtc
      parameter (maxatom=10000,maxx=3*maxatom)
      integer xd,xd2,natoms,step,ret,i
      real time,box(9),x(maxx)
      call xdrfopen(xd,"test.xtc","r",ret)
      print *,'opened test.xtc, ret=',ret
      call xdrfopen(xd2,"testout.xtc","w",ret)
      print *,'opened testout.xtc, ret=',ret
      call readxtc(xd,natoms,step,time,box,x,prec,ret)
      if ( ret .eq. 1 ) then
         call writextc(xd2,natoms,step,time,box,x,prec,ret)
      else
         print *,'Error reading xtc'
      endif
      stop
      end
To link your program, use -L$(GMXHOME)/lib/$(CPU) -lxtcf on your linker command line.
xvg
Source: https://manual.gromacs.org/5.1.5/user-guide/file-formats.html
XForms is the next generation of HTML forms.
In our XForms tutorial you will learn how to use XForms in your applications.
Start learning XForms now!
XForms Functions
Description of all the predefined functions in XForms.
XForms Data Types
A reference of all data types that can be used in XForms.
Introduction to XForms
This chapter explains what XForms is and how it differs from HTML forms.
XForms Model
This chapter describes the XForms model.
XForms Namespace
This chapter describes the XForms namespace.
XForms Example
This chapter demonstrates an XForms example.
XForms and XPath
This chapter describes how XForms uses XPath to address data.
XForms Input
This chapter describes the XForms user interface controls.
XForms Data types
This chapter describes the data types used in XForms.
XForms Properties
This chapter describes XForms properties.
XForms Actions
This chapter describes XForms actions.
Source: http://www.w3schools.com/xforms/
|
27 September 2012 07:56 [Source: ICIS news]
SINGAPORE (ICIS)--AkzoNobel is building a new trimethylgallium (TMG) plant and will expand its trimethylaluminum (TMAL) unit at the company’s Battleground site.
TMAL is a feedstock for TMG, a high-purity metal organic used in applications such as light-emitting diode (LED) wafer production.
The expanded TMAL unit is expected to be completed in the third quarter of next year, while the new TMG plant will be ready in August 2014, the company said in a statement.
Financial details of the project as well as the capacity of the units were not disclosed.
The global LED industry is expected to surge over the next decade, driven by applications in displays such as PCs, laptops and tablet screens, according
Source: http://www.icis.com/Articles/2012/09/27/9599004/dutch-akzonobel-to-boost-capacity-at-battleground-site-in.html
|
I get no errors, but I don't get the right answer to the problem.
In the problem I'm searching through an array of strings (the dictionary). For each string, if a letter corresponds position by position with the letter in a certain separate string (the candidate) and is the same letter, then that string gets a match count of one, and so on. I need to return the index of the string in the dictionary which has the highest number of matches. Should there be multiple strings with the same number of matches, I need to choose the lowest index. In my program it somehow chooses the higher index of the two when there is a tie instead of the lowest index. "matched.get(i) > max" is the line in my code where I specify that a later index needs a strictly higher match count to become the maximum, not an equal one. I get no error, by the way.
Code :
import java.util.HashMap;
import java.util.Map;

public class grafixCorrupt {

    public static void main(String[] args){
        String[] dictionary = {"cat", "cab", "lab"};
        String candidate = "dab";
        System.out.println(selectWord(dictionary, candidate));
    }

    public static int selectWord(String[] dictionary, String candidate){
        int max = 0;
        Map<Integer, Integer> matched = new HashMap<Integer, Integer>();
        for(int i = 0; i < dictionary.length; i++){
            matched.put(i, 0);
        }
        for(int i = 0; i < dictionary.length; i++){
            int index = 0;
            for(int j = 0; j < candidate.length(); j++){
                if(dictionary[i].substring(j, j+1).equals(candidate.substring(j, j+1))){
                    index++;
                }
            }
            matched.put(i, matched.get(i) + index);
        }
        for(int i = 0; i < matched.size(); i++){
            if(matched.get(i) > max){
                max = i;
            }
            i++;
        }
        if(max == 0){
            return -1;
        }
        return max;
    }
}
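For comparison, the intended selection logic can be sketched in a language-neutral way (shown here in Python): track the best match count separately from the best index, and only replace the current best on a strictly greater count, so the lowest index wins ties. In the code above, max holds an index but is compared against match counts, which is one reason ties resolve the wrong way.

```python
def select_word(dictionary, candidate):
    """Index of the dictionary word with the most position-by-position
    character matches against candidate; ties go to the lowest index;
    -1 when nothing matches at all."""
    best_index, best_count = -1, 0
    for i, word in enumerate(dictionary):
        count = sum(1 for a, b in zip(word, candidate) if a == b)
        # Strictly greater: an equal count at a later index never
        # replaces the earlier best, so ties keep the lowest index.
        if count > best_count:
            best_index, best_count = i, count
    return best_index

# "cab" and "lab" both match "dab" in two positions; the lowest index (1) wins.
print(select_word(["cat", "cab", "lab"], "dab"))
```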
Source: http://www.javaprogrammingforums.com/%20whats-wrong-my-code/33434-niggling-issue-code-printingthethread.html
|
Bars3D QML Type
Properties
- barSpacing : size
- barSpacingRelative : bool
- barThickness : real
- columnAxis : CategoryAxis3D
- floorLevel : real
- multiSeriesUniform : bool
- primarySeries : Bar3DSeries
- rowAxis : CategoryAxis3D
- selectedSeries : Bar3DSeries
- seriesList : list<Bar3DSeries>
- valueAxis : ValueAxis3D
Methods
- void addSeries(Bar3DSeries series)
- void insertSeries(int index, Bar3DSeries series)
- void removeSeries(Bar3DSeries series)
Detailed Description
This type enables developers to render bar graphs in 3D with Qt Quick 2.
You will need to import data visualization module to use this type:
import QtDataVisualization 1.2
After that you can use Bars3D in your qml files:
import QtQuick 2.0
import QtDataVisualization 1.2

Item {
    width: 640
    height: 480

    Bars3D {
        width: parent.width
        height: parent.height

        Bar3DSeries {
            itemLabelFormat: "@colLabel, @rowLabel: @valueLabel"

            ItemModelBarDataProxy {
                itemModel: dataModel
                // Mapping model roles to bar series rows, columns, and values.
                rowRole: "year"
                columnRole: "city"
                valueRole: "expenses"
            }
        }
    }

    ListModel {
        id: dataModel
        ListElement{ year: "2012"; city: "Oulu";     expenses: "4200"; }
        ListElement{ year: "2012"; city: "Rauma";    expenses: "2100"; }
        ListElement{ year: "2012"; city: "Helsinki"; expenses: "7040"; }
        ListElement{ year: "2012"; city: "Tampere";  expenses: "4330"; }
        ListElement{ year: "2013"; city: "Oulu";     expenses: "3960"; }
        ListElement{ year: "2013"; city: "Rauma";    expenses: "1990"; }
        ListElement{ year: "2013"; city: "Helsinki"; expenses: "7230"; }
        ListElement{ year: "2013"; city: "Tampere";  expenses: "4650"; }
    }
}
See Qt Quick 2 Bars Example for more thorough usage example.
See also Bar3DSeries, ItemModelBarDataProxy, Scatter3D, Surface3D, and Qt Data Visualization C++ Classes.
Property Documentation
barSpacing : size

Bar spacing in X and Z dimensions. Preset to (1.0, 1.0) by default. Spacing is affected by the barSpacingRelative property.

barSpacingRelative : bool

Whether spacing is absolute or relative to bar thickness. If true, the value 0.0 means that the bars are placed side-to-side, 1.0 means that a space as wide as the thickness of one bar is left between the bars, and so on. Preset to true.

barThickness : real

The bar thickness ratio between the X and Z dimensions. The value 1.0 means that the bars are as wide as they are deep, whereas 0.5 makes them twice as deep as they are wide.

columnAxis : CategoryAxis3D

The active column axis. If an axis is not given, a temporary default axis with no labels is created. This temporary axis is destroyed if another axis is explicitly set to the same orientation.

floorLevel : real

The floor level for the bar graph in Y-axis data coordinates. The actual floor level will be restricted by the Y-axis minimum and maximum values. Defaults to zero.

multiSeriesUniform : bool

Defines whether bars are to be scaled with proportions set to a single series bar even if there are multiple series displayed. If set to true, bar spacing will be correctly applied only to the X-axis. Preset to false by default.

primarySeries : Bar3DSeries

The primary series of the graph. It is used to determine the row and column axis labels. If the series is null, this property resets to default. Defaults to the first added series or zero if no series are added to the graph.

rowAxis : CategoryAxis3D

The active row axis. If an axis is not given, a temporary default axis with no labels is created. This temporary axis is destroyed if another axis is explicitly set to the same orientation.

selectedSeries : Bar3DSeries

The selected series or null. If selectionMode has the SelectionMultiSeries flag set, this property holds the series that owns the selected bar.

seriesList : list<Bar3DSeries>

The series of the graph. By default, this property contains an empty list. To set the series, either use the addSeries() function or define them as children of the graph.

valueAxis : ValueAxis3D

The active value axis. If an axis is not given, a temporary default axis with no labels and an automatically adjusting range is created. This temporary axis is destroyed if another axis is explicitly set to the same orientation.
Method Documentation
void addSeries(Bar3DSeries series)

Adds the series to the graph. A graph can contain multiple series, but only one set of axes, so the rows and columns of all series must match for the visualized data to be meaningful.
void removeSeries(Bar3DSeries series)

Removes the series from the graph.
Source: https://doc-snapshots.qt.io/qt6-6.1/qml-qtdatavisualization-bars3d.html
|
To understand how MongoDB’s sharding works, you need to know about all the components that make up a sharded cluster and the role of each component in the context of the cluster as a whole. In this article, I’ll help you understand the components of a sharded cluster. This article is excerpted from MongoDB in Action. Save 39% on MongoDB in Action with code 15dzamia at manning.com. A sharded cluster consists of shards, mongos routers, and config servers, as shown in figure 1.
(image)
Let’s examine each component in figure 1:
- Shards (upper left) store the application data. In a sharded cluster, only the mongos routers or system administrators should be connecting directly to the shards. Like an unsharded deployment, each shard can be a single node for development and testing, but should be a Replica Set in production.
- mongos routers (center) cache the cluster metadata and use it to route operations to the correct shard or shards.
- Config servers (upper right) persistently store metadata about the cluster, including which shard has what subset of the data.
Now, let’s discuss in more detail the role each of these components plays in the cluster as a whole.
Shards: storage of application data
A shard, shown at the upper left of figure 1, is either a single mongod server or a replica set that stores a partition of the application data. In fact, the shards are the only places where the application data gets saved in a sharded cluster. For testing, a shard can be a single mongod server but should be deployed as a replica set in production, because then it will have its own replication mechanism and can fail over automatically. You can connect to an individual shard just as you would to a single node or a replica set, but if you try to run operations on that shard directly, you’ll see only a portion of the cluster’s total data.

Mongos routers: routing of operations

The mongos processes are lightweight and nonpersistent. Because of this, they are often deployed on the same machines as the application servers, ensuring that only one network hop is required for requests to any given shard. In other words, the application connects locally to a mongos, and the mongos manages connections to the individual shards.
Config servers: storage of metadata
mongos processes are nonpersistent, which means something must durably store the metadata needed to properly manage the cluster. That’s the job of the config servers, shown in the top right of figure 1. This metadata includes the global cluster configuration; the locations of each database, collection, and the particular ranges of data therein; and a change log preserving a history of the migrations of data across shards.
The metadata held by the config servers is central to the proper functioning and upkeep of the cluster. For instance, every time a mongos process is started, the mongos fetches a copy of the metadata from the config servers. Without this data, no coherent view of the shard cluster is possible. The importance of this data, then, informs the design and deployment strategy for the config servers.
If you examine figure 1, you’ll see there are three config servers, but they’re not deployed as a replica set. They demand something stronger than asynchronous replication; when the mongos process writes to them, it does so using a two-phase commit. This guarantees consistency across config servers. You must run exactly three config servers in any production deployment of sharding, and these servers must reside on separate machines for redundancy.
Now you know what a shard cluster consists of, but you’re probably still wondering about the sharding machinery itself. How is data actually distributed? We’ll explain that in the next section, first introducing the two ways to shard in MongoDB, and then covering the core sharding operations.
Distributing data in a sharded cluster
Before discussing the different ways to shard, let’s first discuss how data is grouped and organized in MongoDB. This topic is relevant to a discussion of sharding, because it illustrates the different boundaries on which we can partition our data.
To illustrate this, we’ll use a Google Docs–like application. Figure 2 shows how the data for such an application would be structured in MongoDB.
(image)
Looking at the figure from the innermost box moving outward, you can see there are four different levels of granularity in MongoDB: document, chunk, collection, and database.
These four levels of granularity represent the units of data in MongoDB:
- Document—The smallest unit of data in MongoDB. A document represents a single object in the system and can’t be divided further. You can compare this to a row in a relational database. Note that we consider a document and all its fields to be a single atomic unit. In the innermost box in figure 2, you can see a document with a username field with a value of “hawkins”.
- Chunk—A group of documents clustered by values on a field. A chunk is a concept that exists only in sharded setups. This is a logical grouping of documents based on their values for a field or set of fields, known as a shard key. We’ll cover the shard key when we go into more detail about chunks later in this article. The chunk shown in figure 2 contains all the documents that have the field username with values between “bakkum” and “verch”.
- Collection—A named grouping of documents within a database. To allow users to separate a database into logical groupings that make sense for the application, MongoDB provides the concept of a collection. This is nothing more than a named grouping of documents, and it must be explicitly specified by the application to run any queries. In figure 2, the collection name is spreadsheets. This collection name essentially identifies a subgroup within the cloud-docs database, which we’ll discuss next.
- Database—Contains collections of documents. This is the top-level named grouping in the system. Because a database contains collections of documents, a collection must also be specified to perform any operations on the documents themselves. In figure 2, the database name is cloud-docs. To run any queries, the collection must also be specified—spreadsheets in our example. The combination of database name and collection name together is unique throughout the system, and is commonly referred to as the namespace. It is usually represented by concatenating the collection name and database together, separated by a period character. For the example shown in figure 2, that would look like cloud-docs.spreadsheets.
Ways data can be distributed in a sharded cluster
So now you know the different ways in which data is logically grouped in MongoDB. The next question is, how does this interact with sharding? On which of these groupings can we partition our data? The quick answer to this question is that data can be distributed in a sharded cluster on two of these four groupings:
- On the level of an entire database, where each database along with all its collections is put on its own shard
- On the level of partitions or chunks of a collection, where the documents within a collection itself are divided up and spread out over multiple shards, based on values of a field or set of fields called the shard key in the documents
You may wonder why MongoDB does partitioning based on chunks rather than on individual documents. It seems like that would be the most logical grouping, because a document is the smallest possible unit. But when you consider the fact that not only do we have to partition the data, but we also have to be able to find it again, you’ll see that if we partition on a document level—for example, by allowing each spreadsheet in our Google Docs–like application to be independently moved around—we need to store metadata on the config servers, keeping track of every single document independently. If you imagine a system with small documents, half of your data may end up being metadata on the config servers just keeping track of where your actual data is stored.
Granularity jump from database to partition of collection
You may also wonder why there’s a jump in granularity from an entire database to a partition of a collection. Why isn’t there an intermediate step where we can distribute on the level of whole collections, without partitioning the collections themselves?
The real answer to this question is that it’s completely theoretically possible. It just hasn’t been implemented yet. Fortunately, because of the relationship between databases and collections, there’s an easy workaround. If you’re in a situation where you have different collections, say files.spreadsheets and files.powerpoints, that you want to be put on separate servers, just store them in separate databases. For example, you could store spreadsheets in files_spreadsheets.spreadsheets and PowerPoint files in files_powerpoints.powerpoints. Because files_spreadsheets and files_powerpoints are two separate databases, they’ll be distributed, and thus so will the collections.
Now we’ll discuss distributing entire databases.
Distributing databases to shards
As you create new databases in a sharded cluster, each database is assigned to a different shard. If you do nothing else, a database and all its collections will live forever on the shard where they were created. The databases themselves don’t even need to be sharded.
Because the name of a database is specified by the application, you can think of this as a kind of manual partitioning. MongoDB has nothing to do with how well partitioned your data is. To see why this is manual, consider using this method to shard the spreadsheets collection in our documents example. To shard this two ways using database distribution, you’d have to make two databases—say files1 and files2—and evenly divide the data between the files1.spreadsheets and the files2.spreadsheets collections. It’s completely up to you to decide which spreadsheet goes in which collection and to come up with a scheme to query the appropriate database to find them later. This is a difficult problem, which is why we don’t recommend this approach for this type of application.
When is the database distribution method really useful? One example of a real application for database distribution is MongoDB as a service. In one implementation of this model, customers can pay for access to a single MongoDB database. On the backend, each database is created in a sharded cluster. This means that if each client uses roughly the same amount of data, the distribution of the data will be optimal simply due to the distribution of the databases throughout the cluster.
Sharding within collections
Now, we will see the more powerful form of MongoDB sharding: sharding an individual collection. This is what the phrase automatic sharding refers to, since this is the form of sharding in which MongoDB itself makes all the partitioning decisions, without any direct intervention from the application.
To allow for partitioning of an individual collection, MongoDB defines the idea of a chunk, which as you saw earlier is a logical grouping of documents, based on the values of a predetermined field or set of fields called a shard key. It’s the user’s responsibility to choose the shard key.
For example, consider the following document from a spreadsheet management application:
{ _id: ObjectId("4d6e9b89b600c2c196442c21") filename: "spreadsheet-1", updated_at: ISODate("2011-03-02T19:22:54.845Z"), username: "banks", data: "raw document data" }
If all the documents in our collection have this format, we can, for example, choose a shard key of the _id field and the username field. MongoDB will then use that information in each document to determine what chunk the document belongs to.
How does MongoDB make this determination? At its core, MongoDB’s sharding is range based; this means that each “chunk” represents a range of shard keys. When MongoDB is looking at a document to determine what chunk it belongs to, it first extracts the values for the shard key, and then finds the chunk whose shard key range contains the given shard key values.
To give a concrete example of what this looks like, imagine that we chose a shard key of username for this spreadsheets collection, and we have two shards, “A” and “B.” Our chunk distribution may look something like that shown in table 1.
(image)
Looking at the table, it becomes a bit clearer what purpose chunks serve in a sharded cluster. If we gave you a document with a username field of “Babbage”, you would immediately know that it should be on shard A, just by looking at the table above. In fact, if we gave you any document that had a username field, which in this case is our shard key, you’d be able to use table 1 to determine which chunk the document belonged to, and from there determine which shard it should be sent to.
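The lookup step described above (given a shard key value, find the chunk whose range contains it) can be sketched as a simple scan over range metadata. The ranges below are hypothetical, since table 1 is not reproduced here, and a real mongos resolves this from the config servers' metadata rather than from a hard-coded list:

```python
# Hypothetical chunk metadata: each chunk covers [start, end) of shard-key
# values; None marks an open-ended boundary (minKey / maxKey in MongoDB terms).
CHUNKS = [
    {"start": None,      "end": "Babbage", "shard": "A"},
    {"start": "Babbage", "end": "Miller",  "shard": "A"},
    {"start": "Miller",  "end": None,      "shard": "B"},
]

def find_shard(chunks, shard_key):
    """Return the shard owning the chunk whose range contains shard_key."""
    for chunk in chunks:
        above_start = chunk["start"] is None or chunk["start"] <= shard_key
        below_end = chunk["end"] is None or shard_key < chunk["end"]
        if above_start and below_end:
            return chunk["shard"]
    raise KeyError(shard_key)

# "Babbage" falls in the ["Babbage", "Miller") range, owned by shard A.
print(find_shard(CHUNKS, "Babbage"))
```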
Source: https://freecontent.manning.com/mongodb-understanding-components-of-a-sharded-cluster/
|
ScriptManager Class For Calling WebServices From Javascript
Hi,
Ajax plays a vital role in web application development.
With the help of the Microsoft Ajax toolkit, it is easy for a developer to make asynchronous calls.
The main class in the Microsoft Ajax toolkit is “ScriptManager“.
The ScriptManager class plays a vital role in .NET web application development.
Uses of Script Manager
Script Manager in general can be used in 3 different ways.
1. For sending custom scripts to the browser. This means we can add JavaScript code from the code-behind file.
2. Partial page rendering. Script manager can be used to refresh parts of the page without any post back.
3. Calling web services and methods in the code-behind file from JavaScript.
So in this post we will see the third one, that is, calling a web service from JavaScript.
Calling Webservice From Javascript
1. Create an empty web application in the visual studio.
2. Add a web service to the project created in the above step and name it as TestService.
3. Uncomment the line “[System.Web.Script.Services.ScriptService]“, because this will enable calling the web service from JavaScript.
4.Web service is created. Now add a web method. I created a web method with name TestWebService and it is having a single parameter.
using System;
using System.Collections.Generic;
using System.Linq;
using System.Web;
using System.Web.Services;

/// <summary>
/// Summary description for TestService
/// </summary>
[System.Web.Script.Services.ScriptService]
public class TestService : System.Web.Services.WebService {

    public TestService () {
        //Uncomment the following line if using designed components
        //InitializeComponent();
    }

    [WebMethod]
    public string TestWebService(string name) {
        return "Hello " + name + " How are you";
    }
}
5.Now create an aspx page. We are actually going to call the web service from the javascript that we are going to write in this aspx page.
6.Add the following piece of code to the aspx page.
<asp:ScriptManager ID="ScriptManager1" runat="server">
  <Services>
    <asp:ServiceReference Path="~/TestService.asmx" />
  </Services>
</asp:ScriptManager>
So we are adding the ScriptManager control, and to it we are adding a collection of services with the help of the <Services> element, which can contain a number of <asp:ServiceReference> entries pointing to different web services.
So the Microsoft Ajax framework generates client proxies for each and every service present in the collection. This makes the work of a developer easy.
Once we are done with adding the ScriptManager control, the next step is writing the JavaScript code that is used to call the web methods present in the web service.
<script language="javascript">
    function callwebservice() {
        TestService.TestWebService("Pavan", onSuccess, onFailure);
    }
    function onSuccess(result) {
        alert(result);
    }
    function onFailure(result) {
        alert("There is an error");
    }
</script>
So in the above code we are calling the web method “TestWebService” as follows
TestService.TestWebService("Pavan", onSuccess, onFailure);
//Here TestService is the class name and TestWebService is the web method.
//Here it takes 3 parameters in our case:
//1st parameter: the actual parameter of the web method.
//2nd parameter: the callback function to be executed if the call is successful.
//3rd parameter: the callback function to be called if the call fails.
//Also there is an optional parameter called user context.
So I created a button that calls the JavaScript function callwebservice, which in turn calls the web service's web method.
<%@ Page Language="C#" %>
<html xmlns="http://www.w3.org/1999/xhtml">
<head runat="server">
    <title></title>
    <script language="javascript">
        function callwebservice() {
            TestService.TestWebService("Pavan", onSuccess, onFailure);
        }
        function onSuccess(result) {
            alert(result);
        }
        function onFailure(result) {
            alert("There is an error");
        }
    </script>
</head>
<body>
    <form id="form1" runat="server">
    <div>
        <asp:ScriptManager ID="ScriptManager1" runat="server">
            <Services>
                <asp:ServiceReference Path="~/TestService.asmx" />
            </Services>
        </asp:ScriptManager>
        <input type="button" id="btn1" runat="server" value="Button" name="Button" onclick="callwebservice()"/>
    </div>
    </form>
</body>
</html>
Hi, this is really a nice article. Using ScriptManager is a good way of accessing web services from JavaScript. But if the ScriptManager is added in the master page, every page that uses that master page ends up carrying the signatures and comments of the web methods or page methods, which needlessly increases the size of the page. To reduce the page size while still being able to access web services and page methods from JavaScript, use ScriptManagerProxy. For more information contact me at bishnu.panigrahi@gmail.com.
Bishnu
June 17, 2012 at 5:25 am
Hi Bishnu, I already wrote a follow-up article on ScriptManager, covering ScriptManagerProxy, long back…. Url is
pavanarya
June 17, 2012 at 8:43 am
What’s Taking place i am new to this, I stumbled upon this I’ve discovered It positively helpful and it has aided me out
loads. I am hoping to give a contribution & aid other users like its aided me.
Great job.
Morgan
January 17, 2013 at 9:55 am
Thank you so much Morgan..
pavanarya
January 17, 2013 at 10:08 am
Source: http://pavanarya.wordpress.com/2011/12/23/scriptmanager-class-for-calling-webservices-from-javascript/
How can I connect to the main Ethereum network with web3.py using Python?

After performing all the operations and testing, I want to connect my API to the main Ethereum network. How do I do it?
Source: https://www.edureka.co/community/1492/how-can-connect-my-main-ethereum-network-web3-py-using-python
Red Hat Bugzilla – Bug 86301
"wchar_t" conflit in gcc-lib (ver 3.2/2.96) and X11
Last modified: 2007-04-18 12:52:12 EDT
From Bugzilla Helper:
User-Agent: Mozilla/4.0 (compatible; MSIE 5.0; Windows 98; DigExt)
Description of problem:
conflicting types for `wchar_t' in /usr/X11R6/include/X11/Xlib.h
and /usr/lib/gcc-lib/i386-redhat-linux/3.2/include/stddef.h
Version-Release number of selected component (if applicable):
gcc-3.2-7.src.rpm
How reproducible:
Always
Steps to Reproduce:
1. I cannot simplify the program, since my program environment is very complicated, but here is a sample:
when building my library, produce following error :
--- Building NdmFm LIB ---
/usr/bin/cc -DLINUX_V8 -DLINUX -D_GNU_SOURCE -g -DMOTIF -DXT_CODE -DSYSV -DXOPEN_CATALOG -Dlinux -D__i386__ -D_POSIX_C_SOURCE=199309L -D_POSIX_SOURCE -D_XOPEN_SOURCE=500L -D_BSD_SOURCE -D_SVID_SOURCE -I/nice/nice.v2/LINUX_V8/inc -I/usr/include/X11 -I/usr/X11R6/include -L/nice/nice.v2/LINUX_V8/lib -L/usr/X11R6/lib -L/usr/lib/X11 -lXm -lXp -lXt -lSM -lICE -lXext -lX11 -lXt -lX11 -c UxXt.c
2.
3.
Actual Results:
In file included from /usr/X11R6/include/X11/Intrinsic.h:56, from UxXt.c:25:
/usr/X11R6/include/X11/Xlib.h:78: conflicting types for `wchar_t'
/usr/lib/gcc-lib/i386-redhat-linux/3.2/include/stddef.h:294: previous declaration of `wchar_t'
Additional info:
This is an XFree86 thing.
#ifndef X_WCHAR
#include <stddef.h>
#else
#ifdef __UNIXOS2__
#include <stdlib.h>
#else
/* replace this with #include or typedef appropriate for your system */
typedef unsigned long wchar_t;
#endif
#endif
Looking at the part of the header you posted, X_WCHAR shouldn't be defined. Then wchar_t is picked up from stddef.h. Of course I don't know what other effects this will have, but I would guess nothing that conflicts or that would disable functionality.
Source: https://bugzilla.redhat.com/show_bug.cgi?id=86301
NAME
s3dw_widget - s3d widget information
SYNOPSIS
#include <s3dw.h>
STRUCTURE MEMBERS
struct _s3dw_widget {
        int           type;
        s3dw_widget  *parent;
        s3dw_style   *style;
        int           nobj;
        s3dw_widget **pobj;
        int           focus;
        int           flags;
        float         ax, ay, az, as;
        float         arx, ary, arz;
        float         width, height;
        uint32_t      oid;
        void         *ptr;
        float         x, y, z, s;
        float         rx, ry, rz;
};
DESCRIPTION
This is the most basic widget type; it contains all the "general" widget information. If you want to move a widget, change x, y, z, s and rx, ry, rz and call s3dw_moveit to turn your action into reality. Every other widget has this type as its first entry, so a simple typecast to s3dw_widget will give you the widget's "general" information. For the typecast, you may use S3DWIDGET(). The pointer ptr allows linking to user-specific data structures. That comes in handy if the widget is called back by an event and the program must find out which data the user reacted on.
AUTHOR
Simon Wunderlich Author of s3d
Source: http://manpages.ubuntu.com/manpages/oneiric/man9/s3dw_widget.9.html
In the last post, we showed how to set up stripe payments with Netlify Functions, which allow us to create serverless lambda functions with ease. We’ll use this same application to show another Nuxt-specific functionality as well.
Let’s get started!
Creating the page
In this case, we’ve got some dummy data for the store that I created in mockaroo and am storing in the static folder. Typically you’ll use fetch or axios and an action in the Vuex store to gather that data. Either way, we store the data with Vuex in
store/index.js, along with the UI state, and an empty array for the cart.
import data from '~/static/storedata.json'

export const state = () => ({
  cartUIStatus: 'idle',
  storedata: data,
  cart: []
})
It’s important to mention that in Nuxt, all we have to do to set up routing in the application is create a
.vue file in the pages directory. So we have an
index.vue page for our homepage, a
cart.vue page for our cart, and so on. Nuxt automagically generates all the routing for these pages for us.
In order to create dynamic routing, we will make a directory to house those pages. In this case, I made a directory called /products, since that's what the routes will be: a view of each individual product's details.
In that directory, I’ll create a page with an underscore, and the unique indicator I want to use per page to create the routes. If we look at the data I have in my cart, it looks like this:
[ { "id": "9d436e98-1dc9-4f21-9587-76d4c0255e33", "color": "Goldenrod", "description": .", "gender": "Male", "name": "Desi Ada", "review": "productize virtual markets", "starrating": 3, "price": 50.40, "img": "1.jpg" }, … ]
You can see that the ID for each entry is unique, so that's a good candidate to use. We'll call the page _id.vue.
Now, we can store the id of the particular page in our data by using the route params:

data() {
  return {
    id: this.$route.params.id
  }
},
For the entry from above, our data if we looked in devtools would be:
id: "9d436e98-1dc9-4f21-9587-76d4c0255e33"
We can now use this to retrieve all of the other information for this entry from the store. I'll use mapState:

import { mapState } from "vuex";

computed: {
  ...mapState(["storedata"]),
  product() {
    return this.storedata.find(el => el.id === this.id);
  }
},
And we’re filtering the
storedata to find the entry with our unique ID!
Let the Nuxt config know
If we were building an app using yarn build, we'd be done, but we're using Nuxt to create a static site. When we use Nuxt to create a static site, we'll use the yarn generate command. We have to let Nuxt know about the dynamic files with the generate command in nuxt.config.js.
This command expects a function that returns a promise resolving to an array that looks like this:
export default {
  generate: {
    routes: [
      '/product/1',
      '/product/2',
      '/product/3'
    ]
  }
}
To create this, at the top of the file we’ll bring in the data from the static directory, and create the function:
import data from './static/storedata.json'

let dynamicRoutes = () => {
  return new Promise(resolve => {
    resolve(data.map(el => `product/${el.id}`))
  })
}
We’ll then call the function within our config:
generate: { routes: dynamicRoutes },
If you’re gathering your data from an API with axios instead (which is more common), it would look more like this:
import axios from 'axios'

let dynamicRoutes = () => {
  return axios.get('').then(res => {
    return res.data.map(product => `/product/${product.id}`)
  })
}
And with that, we’re completely done with the dynamic routing! If you shut down and restart the server, you’ll see the dynamic routes per product in action!
For the last bit of this post, we’ll keep going, showing how the rest of the page was made and how we’re adding items to our cart, since that might be something you want to learn, too.
Populate the page
Now we can populate the page with whatever information we want to show, with whatever formatting we would like, as we have access to it all with the product computed property:
<main>
  <section class="img">
    <img :src="..." />
  </section>
  <section class="product-info">
    <h1>{{ product.name }}</h1>
    <h4 class="price">{{ product.price | dollar }}</h4>
    <p>{{ product.description }}</p>
  </section>
  ...
</main>
In our case, we’ll also want to add items to the cart that’s in the store. We’ll add the ability to add and remove items (while not letting the decrease count dip below zero
<p class="quantity"> <button class="update-num" @-</button> <input type="number" v- <button class="update-num" @+</button> </p> ... <button class="button purchase" @Add to Cart</button>
In our methods on that component, we’ll add the item plus a new field, the quantity, to an array that we’ll pass as the payload to mutation in the store.
methods: {
  cartAdd() {
    let item = this.product;
    item.quantity = this.quantity;
    this.tempcart.push(item);
    this.$store.commit("addToCart", item);
  }
}
In the Vuex store, we’ll check if the item already exists. If it does, we’ll just increase the quantity. If not, we’ll add the whole item with quantity to the cart array.
addToCart: (state, payload) => {
  let itemfound = false
  state.cart.forEach(el => {
    if (el.id === payload.id) {
      el.quantity += payload.quantity
      itemfound = true
    }
  })
  if (!itemfound) state.cart.push(payload)
}
We can now use a getter in the store to calculate the total, which is what we’ll eventually pass to our Stripe serverless function (the other post picks up from here). We’ll use a reduce for this as reduce is very good at retrieving one value from many. (I wrote up more details on how reduce works here).
cartTotal: state => {
  if (!state.cart.length) return 0
  return state.cart.reduce((ac, next) => ac + next.quantity * next.price, 0)
}
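Because the mutation and getter above are plain functions of state, they are easy to exercise outside Vuex. A small sketch (the ids and prices here are made up):

```javascript
// Plain-function versions of the addToCart mutation and cartTotal getter.
const addToCart = (state, payload) => {
  let itemfound = false;
  state.cart.forEach((el) => {
    if (el.id === payload.id) {
      el.quantity += payload.quantity; // existing item: bump quantity
      itemfound = true;
    }
  });
  if (!itemfound) state.cart.push(payload); // new item: push whole payload
};

const cartTotal = (state) =>
  !state.cart.length
    ? 0
    : state.cart.reduce((ac, next) => ac + next.quantity * next.price, 0);

const state = { cart: [] };
addToCart(state, { id: "a", price: 50, quantity: 2 }); // new item
addToCart(state, { id: "a", price: 50, quantity: 1 }); // same id: merged
addToCart(state, { id: "b", price: 10, quantity: 1 }); // second item

console.log(state.cart.length); // → 2
console.log(cartTotal(state)); // → 160
```

Note how the second add with id "a" does not grow the cart array; it only increments the stored quantity, which is the behavior the mutation is written to guarantee.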
And there you have it! We’ve set up individual product pages, and Nuxt generates all of our individual routes for us at build time. You’d be Nuxt not to try it yourself. 😬
The post Creating Dynamic Routes in a Nuxt Application appeared first on CSS-Tricks.
Source: http://design-lance.com/tag/routes/
I'm doing an example question from my C book: a leap year is any year divisible by 4, unless it's divisible by 100 but not 400. This is what I came up with, but it won't compile; it says "leap.c:14: error: lvalue required as left operand of assignment".
anyone know what the deal is? thanks for any help!!

Code:
#include <stdio.h>

char line[10];
int input;

main()
{
    printf("Is this year a leap year?\nInput year: ");
    fgets(line, sizeof(line), stdin);
    sscanf(line, "%d", &input);

    if (input%4 == 0)
    {
        if ((input%100 == 0) && (input%400 =! 0))
            printf("Is not a leap year\n");
        else
            printf("Is a leap year\n");
    }
    else
        printf("Is not a leap year\n");

    return (0);
}
Source: http://cboard.cprogramming.com/c-programming/121269-lvalue-required-left-operand-assignment.html
QFileDialog, How to hide the hidden files of showing?
As we know, by default the QFileDialog shows hidden files.
Is there a way or a method to make the hidden files don't show?
@Prmego said in QFileDialog, How to hide the hidden files of showing?:
Hi
If you are ok with using the Qt and not native dialog, you can use QSortFilterProxyModel
remember the includes:
#include <QSortFilterProxyModel>
#include <QModelIndex>
#include <QFileSystemModel>
Thanks, But I am still at the beginning in use Qt, and that way is a bit difficult.
Is there a short way to do what I want? And what about QFileDialog::setFilter, is it suitable for that purpose?
@Prmego said in QFileDialog, How to hide the hidden files of showing?:
QFileDialog::setFilter
Yes it should, i must have misread/missed docs as it does have
QDir::Hidden
:)
That is of course much easier.
I have written the following code:
QFileDialog fd;
fd.setFilter(QDir::Dirs | QDir::Files | QDir::Drives);
QString openFilePath = fd.getOpenFileName(mainWindow, "Open File", lastDirectoryPath, "Text files (*.txt);;All files (*)");
But unfortunately, when I open the OpenDialog I found it still shows the hidden files/directories.
I don't know why that happened.
@Prmego said in QFileDialog, How to hide the hidden files of showing?:
getOpenFileName
Try with fd.setOption(QFileDialog::DontUseNativeDialog);
I have appended the setOption as follows:
QFileDialog fd;
fd.setFilter(QDir::Dirs | QDir::Files | QDir::Drives);
fd.setOption(QFileDialog::DontUseNativeDialog);
QString openFilePath = fd.getOpenFileName(mainWindow, "Open File", lastDirectoryPath, "Text files (*.txt);;All files (*)");
But the hidden files/dirs still shows.
- mrjj Qt Champions 2016
Hi try
QFileDialog fd;
fd.setFilter(QDir::Dirs|QDir::Files|QDir::Drives);
fd.setOption(QFileDialog::DontUseNativeDialog);
fd.exec();
works with win 10 here.
I think the reason the others do not work is that you are using a static function.
It's actually QFileDialog::getOpenFileName(xx), so it will ignore ANYTHING you set on fd.
A static member function is a function that does not need an instance to be callable.
Even though it seems that it's called via fd, it's not; it has its own inner instance, and hence the filter you set
on the instance fd does not come into play.
Thank you, I appreciate your helping to me.
@Prmego
Np. :)
When not using QFileDialog::DontUseNativeDialog it seems to ignore it;
it even ignores the setting in Windows (odd enough, as it's a Windows dialog then).
Source: https://forum.qt.io/topic/79289/qfiledialog-how-to-hide-the-hidden-files-of-showing
I can't install mousetrap plugins

Hi! I've been trying to install the current development version of the mousetrap plugin for the 3.2.8 version of OpenSesame. When I run the command in the debug window, the following message appears:
"You are using pip version 9.0.1, however version 19.2.3 is available.
You should consider upgrading via the 'python -m pip install --upgrade pip' command."
Following the error message, I tried to run the command for upgrading it:
import pip._internal
pip._internal.main(['install', 'python-pygaze', '--upgrade'])
But the the error message says "ImportError: No module named _internal"
How do I fix this in order to properly install the mousetrap plugin?
Hi there,
the following is only a notification but not an error message:
"You are using pip version 9.0.1, however version 19.2.3 is available.
You should consider upgrading via the 'python -m pip install --upgrade pip' command."
So in principle, the installation should work regardless of this message. Have you checked whether mousetrap was installed correctly?
Best,
Pascal
Thank you Pascal
No it's not installed. I just saw it says that my access was denied. I guess there's nothing to do now?
Have you followed the instructions at? If you are using Windows you need to run opensesame as an administrator to avoid the access denied issue. Hope this helps!
Best
Pascal
Source: https://forum.cogsci.nl/discussion/comment/17128
<identifier> expected error

import java.util.*;

public class Person {
    Queue<Person> busQ = new LinkedList<Person>();
    busQ.addLast(homer);
    busQ.addLast(marge);
    busQ.addLast(maggie);
    busQ.addLast
}
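Two things keep the snippet above from compiling: statements such as `busQ.addLast(homer)` cannot appear directly in a class body (they must be inside a method or constructor, which is what triggers `<identifier> expected`), and the `Queue` interface offers `add()`/`offer()` rather than `addLast()`. A compilable sketch (class and element names are assumptions, standing in for the original's Person objects):

```java
import java.util.LinkedList;
import java.util.Queue;

public class BusQueue {
    public static void main(String[] args) {
        // Through a Queue-typed reference only Queue's methods are visible,
        // so we enqueue with add() instead of LinkedList's addLast().
        Queue<String> busQ = new LinkedList<String>();
        busQ.add("homer");
        busQ.add("marge");
        busQ.add("maggie");
        System.out.println(busQ.peek()); // head of the queue: homer
    }
}
```

LinkedList keeps FIFO order when used through Queue, so peek() returns the first element enqueued.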
Source: http://www.roseindia.net/tutorialhelp/comment/69556
Part 19: I Thought I Wanted It But I Changed My Mind
This continues the introduction started here. You can find an index to the entire series here.
Introduction
Twisted is an ongoing project and the Twisted developers regularly add new features and extend old ones. With the release of Twisted 10.1.0, the developers added a new capability — cancellation — to the Deferred class, which we're going to investigate today.
Asynchronous programming decouples requests from responses and thus raises a new possibility: between asking for the result and getting it back you might decide you don’t want it anymore. Consider the poetry proxy server from Part 14. Here’s how the proxy worked, at least for the first request of a poem:
- A request for a poem comes in.
- The proxy contacts the real server to get the poem.
- Once the poem is complete, send it to the original client.
Which is all well and good, but what if the client hangs up before getting the poem? Maybe they requested the complete text of Paradise Lost and then decided they really wanted a haiku by Kojo. Now our proxy is stuck with downloading the first one and that slow server is going to take a while. Better to close the connection and let the slow server go back to sleep.
Recall Figure 15, a diagram that shows the conceptual flow of control in a synchronous program. In that figure we see function calls going down, and exceptions going back up. If we wanted to cancel a synchronous function call (and this is just hypothetical) the flow control would go in the same direction as the function call, from high-level code to low-level code as in Figure 38:
Of course, in a synchronous program that isn’t possible because the high-level code doesn’t even resume running until the low-level operation is finished, at which point there is nothing to cancel. But in an asynchronous program the high-level code gets control of the program before the low-level code is done, which at least raises the possibility of canceling the low-level request before it finishes.
In a Twisted program, the lower-level request is embodied by a Deferred object, which you can think of as a "handle" on the outstanding asynchronous operation. The normal flow of information in a deferred is downward, from low-level code to high-level code, which matches the flow of return information in a synchronous program. Starting in Twisted 10.1.0, high-level code can send information back the other direction — it can tell the low-level code it doesn't want the result anymore. See Figure 39:
Canceling Deferreds
Let’s take a look at a few sample programs to see how canceling deferreds actually works. Note, to run the examples and other code in this Part you will need a version of Twisted 10.1.0 or later. Consider deferred-cancel/defer-cancel-1.py:
from twisted.internet import defer

def callback(res):
    print 'callback got:', res

d = defer.Deferred()
d.addCallback(callback)
d.cancel()
print 'done'
With the new cancellation feature, the Deferred class got a new method called cancel. The example code makes a new deferred, adds a callback, and then cancels the deferred without firing it. Here's the output:
done
Unhandled error in Deferred:
Traceback (most recent call last):
Failure: twisted.internet.defer.CancelledError:
Ok, so canceling a deferred appears to cause the errback chain to run, and our regular callback is never called at all. Also notice the error is a twisted.internet.defer.CancelledError, a custom Exception that means the deferred was canceled (but keep reading!). Let's try adding an errback in deferred-cancel/defer-cancel-2.py:
from twisted.internet import defer

def callback(res):
    print 'callback got:', res

def errback(err):
    print 'errback got:', err

d = defer.Deferred()
d.addCallbacks(callback, errback)
d.cancel()
print 'done'
Now we get this output:
errback got: [Failure instance: Traceback (failure with no frames): <class 'twisted.internet.defer.CancelledError'>: ]
done
So we can ‘catch’ the errback from a cancel just like any other deferred failure.
Ok, let’s try firing the deferred and then canceling it, as in deferred-cancel/defer-cancel-3.py:
from twisted.internet import defer

def callback(res):
    print 'callback got:', res

def errback(err):
    print 'errback got:', err

d = defer.Deferred()
d.addCallbacks(callback, errback)
d.callback('result')
d.cancel()
print 'done'
Here we fire the deferred normally with the callback method and then cancel it. Here's the output:
callback got: result
done
Our callback was invoked (just as we would expect) and then the program finished normally, as if cancel was never called at all. So it seems canceling a deferred has no effect if it has already fired (but keep reading!).
What if we fire the deferred after we cancel it, as in deferred-cancel/defer-cancel-4.py?
from twisted.internet import defer

def callback(res):
    print 'callback got:', res

def errback(err):
    print 'errback got:', err

d = defer.Deferred()
d.addCallbacks(callback, errback)
d.cancel()
d.callback('result')
print 'done'
In that case we get this output:
errback got: [Failure instance: Traceback (failure with no frames): <class 'twisted.internet.defer.CancelledError'>: ]
done
Interesting! That’s the same output as the second example, where we never fired the deferred at all. So if the deferred has been canceled, firing the deferred normally has no effect. But why doesn’t
d.callback('result') raise an error, since you’re not supposed to be able to fire a deferred more than once, and the errback chain has clearly run?
Consider Figure 39 again. Firing a deferred with a result or failure is the job of lower-level code, while canceling a deferred is an action taken by higher-level code. Firing the deferred means “Here’s your result”, while canceling a deferred means “I don’t want it any more”. And remember that canceling is a new feature, so most existing Twisted code is not written to handle cancel operations. But the Twisted developers have made it possible for us to cancel any deferred we want to, even if the code we got the deferred from was written before Twisted 10.1.0.
To make that possible, the cancel method actually does two things:

1. Tell the deferred to silently ignore the eventual result of the asynchronous operation if it hasn't already arrived, so that firing the deferred later simply does nothing.
2. Fire the deferred's errback chain with a CancelledError failure, unless lower-level code preempts it (more on that below).
Since older Twisted code is going to go ahead and fire that canceled deferred anyway, step #1 ensures our program won’t blow up if we cancel a deferred we got from an older library.
This means we are always free to cancel a deferred, and we’ll be sure not to get the result if it hasn’t arrived (even if it arrives later). But canceling the deferred might not actually cancel the asynchronous operation. Aborting an asynchronous operation requires a context-specific action. You might need to close a network connection, roll back a database transaction, kill a sub-process, et cetera. And since a deferred is just a general-purpose callback organizer, how is it supposed to know what specific action to take when you cancel it? Or, alternatively, how could it forward the cancel request to the lower-level code that created and returned the deferred in the first place? Say it with me now:
I know, with a callback!
Canceling Deferreds, Really
Alright, take a look at deferred-cancel/defer-cancel-5.py:
from twisted.internet import defer

def canceller(d):
    print "I need to cancel this deferred:", d

def callback(res):
    print 'callback got:', res

def errback(err):
    print 'errback got:', err

d = defer.Deferred(canceller)      # created by lower-level code
d.addCallbacks(callback, errback)  # added by higher-level code
d.cancel()
print 'done'
This code is basically like the second example, except there is a third callback (canceller) that's passed to the Deferred when we create it, rather than added afterwards. This callback is in charge of performing the context-specific actions required to abort the asynchronous operation (only if the deferred is actually canceled, of course). The canceller callback is necessarily part of the lower-level code that returns the deferred, not the higher-level code that receives the deferred and adds its own callbacks and errbacks.
Running the example produces this output:
I need to cancel this deferred: <Deferred at 0xb7669d2cL>
errback got: [Failure instance: Traceback (failure with no frames): <class 'twisted.internet.defer.CancelledError'>: ]
done
As you can see, the canceller callback is given the deferred whose result we no longer want. That’s where we would take whatever action we need to in order to abort the asynchronous operation. Notice that canceller is invoked before the errback chain fires. In fact, we may choose to fire the deferred ourselves at this point with any result or error of our choice (thus preempting the CancelledError failure). Both possibilities are illustrated in deferred-cancel/defer-cancel-6.py and deferred-cancel/defer-cancel-7.py.
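The preemption idea can be sketched with a toy model (a stand-in written in Python 3, not Twisted's actual code; the 'CancelledError' string below is just a placeholder for the real failure object): a canceller that fires the deferred itself wins over the default cancellation result.

```python
class ToyDeferred(object):
    """Minimal stand-in for a deferred that accepts a canceller."""

    def __init__(self, canceller=None):
        self.canceller = canceller
        self.fired = False
        self.result = None
        self.callbacks = []

    def addCallback(self, cb):
        self.callbacks.append(cb)

    def callback(self, result):
        if self.fired:
            return
        self.fired = True
        self.result = result
        for cb in self.callbacks:
            cb(self.result)

    def cancel(self):
        if self.fired:
            return
        if self.canceller:
            self.canceller(self)  # may fire us with its own result
        if not self.fired:
            # Default outcome when the canceller does nothing.
            self.callback('CancelledError')

def canceller(d):
    # Preempt the default cancellation result with one of our choosing.
    d.callback('a poem from the cache')

got = []
d = ToyDeferred(canceller)
d.addCallback(got.append)
d.cancel()
print(got)  # ['a poem from the cache']
```

Because the canceller fired the deferred first, the default cancellation path in `cancel` is skipped entirely.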
Let’s do one more simple test before we fire up the reactor. We’ll create a deferred with a canceller callback, fire it normally, and then cancel it. You can see the code in deferred-cancel/defer-cancel-8.py. By examining the output of that script, you can see that canceling a deferred after it has been fired does not invoke the canceller callback. And that’s as we would expect, since there’s nothing to cancel.
The examples we’ve looked at so far haven’t had any actual asynchronous operations. Let’s make a simple program that invokes one asynchronous operation, then we’ll figure out how to make that operation cancellable. Consider the code in deferred-cancel/defer-cancel-9.py:
from twisted.internet.defer import Deferred

def send_poem(d):
    print 'Sending poem'
    d.callback('Once upon a midnight dreary')

def get_poem():
    """Return a poem 5 seconds later."""
    from twisted.internet import reactor
    d = Deferred()
    reactor.callLater(5, send_poem, d)
    return d

def got_poem(poem):
    print 'I got a poem:', poem

def poem_error(err):
    print 'get_poem failed:', err

def main():
    from twisted.internet import reactor
    reactor.callLater(10, reactor.stop) # stop the reactor in 10 seconds
    d = get_poem()
    d.addCallbacks(got_poem, poem_error)
    reactor.run()

main()
This example includes a get_poem function that uses the reactor’s callLater method to asynchronously return a poem five seconds after get_poem is called. The main function calls get_poem, adds a callback/errback pair, and then starts up the reactor. We also arrange (again using callLater) to stop the reactor in ten seconds. Normally we would do this by attaching a callback to the deferred, but you’ll see why we do it this way shortly.
Running the example produces this output (after the appropriate delay):
Sending poem
I got a poem: Once upon a midnight dreary
And after ten seconds our little program comes to a stop. Now let’s try canceling that deferred before the poem is sent. We’ll just add this bit of code to cancel the deferred after two seconds (well before the five second delay on the poem itself):
reactor.callLater(2, d.cancel) # cancel after 2 seconds
The complete program is in deferred-cancel/defer-cancel-10.py, which produces the following output:
get_poem failed: [Failure instance: Traceback (failure with no frames): <class 'twisted.internet.defer.CancelledError'>: ]
Sending poem
This example clearly illustrates that canceling a deferred does not necessarily cancel the underlying asynchronous request. After two seconds we see the output from our errback, printing out the CancelledError as we would expect. But then after five seconds we still see the output from send_poem (though the callback on the deferred doesn’t fire).
At this point we’re just in the same situation as deferred-cancel/defer-cancel-4.py. “Canceling” the deferred causes the eventual result to be ignored, but doesn’t abort the operation in any real sense. As we learned above, to make a truly cancelable deferred we must add a cancel callback when the deferred is created.
What does this new callback need to do? Take a look at the documentation for the callLater method. The return value of callLater is another object, implementing IDelayedCall, with a cancel method we can use to prevent the delayed call from being executed.
That’s pretty simple, and the updated code is in deferred-cancel/defer-cancel-11.py. The relevant changes are all in the get_poem function:
def get_poem():
    """Return a poem 5 seconds later."""

    def canceler(d):
        # They don't want the poem anymore, so cancel the delayed call
        delayed_call.cancel()

        # At this point we have three choices:
        #   1. Do nothing, and the deferred will fire the errback
        #      chain with CancelledError.
        #   2. Fire the errback chain with a different error.
        #   3. Fire the callback chain with an alternative result.

    d = Deferred(canceler)

    from twisted.internet import reactor
    delayed_call = reactor.callLater(5, send_poem, d)

    return d
In this new version, we save the return value from callLater so we can use it in our cancel callback. The only thing our callback needs to do is invoke delayed_call.cancel(). But as we discussed above, we could also choose to fire the deferred ourselves. The latest version of our example produces this output:
get_poem failed: [Failure instance: Traceback (failure with no frames): <class 'twisted.internet.defer.CancelledError'>: ]
As you can see, the deferred is canceled and the asynchronous operation has truly been aborted (i.e., we don’t see the output from send_poem).
Poetry Proxy 3.0
As we discussed in the Introduction, the poetry proxy server is a good candidate for implementing cancellation, as it allows us to abort the poem download if it turns out that nobody wants it (i.e., the client closes the connection before we send the poem). Version 3.0 of the proxy, located in twisted-server-4/poetry-proxy.py, implements deferred cancellation. The first change is in the PoetryProxyProtocol:
class PoetryProxyProtocol(Protocol):

    def connectionMade(self):
        self.deferred = self.factory.service.get_poem()
        self.deferred.addCallback(self.transport.write)
        self.deferred.addBoth(lambda r: self.transport.loseConnection())

    def connectionLost(self, reason):
        if self.deferred is not None:
            deferred, self.deferred = self.deferred, None
            deferred.cancel() # cancel the deferred if it hasn't fired
You might compare it to the older version. The two main changes are:
1. Save the deferred we get from get_poem as an attribute on the protocol, so we can refer to it later.
2. Cancel the deferred when the connection is closed, if it hasn’t fired yet (note how the attribute is cleared first, so we never cancel the same deferred twice).
Now we need to make sure that canceling the deferred actually aborts the poem download. For that we need to change the ProxyService:
class ProxyService(object):

    poem = None # the cached poem

    def __init__(self, host, port):
        self.host = host
        self.port = port

    def get_poem(self):
        if self.poem is not None:
            print 'Using cached poem.'
            # return an already-fired deferred
            return succeed(self.poem)

        def canceler(d):
            print 'Canceling poem download.'
            factory.deferred = None
            connector.disconnect()

        print 'Fetching poem from server.'
        deferred = Deferred(canceler)
        deferred.addCallback(self.set_poem)
        factory = PoetryClientFactory(deferred)
        from twisted.internet import reactor
        connector = reactor.connectTCP(self.host, self.port, factory)
        return factory.deferred

    def set_poem(self, poem):
        self.poem = poem
        return poem
Again, you may wish to compare this with the older version. This class has a few more changes:
- We save the return value from reactor.connectTCP, an IConnector object. We can use the disconnect method on that object to close the connection.
- We create the deferred with a canceler callback. That callback is a closure which uses the connector to close the connection. But first it sets the factory.deferred attribute to None. Otherwise, the factory might fire the deferred with a “connection closed” errback before the deferred itself fires with a CancelledError. Since this deferred was canceled, having the deferred fire with CancelledError seems more explicit.
You might also notice we now create the deferred in the ProxyService instead of the PoetryClientFactory. Since the canceler callback needs to access the IConnector object, the ProxyService ends up being the most convenient place to create the deferred.
And, as in one of our earlier examples, our canceler callback is implemented as a closure. Closures seem to be very useful when implementing cancel callbacks!
Let’s try out our new proxy. First start up a slow server. It needs to be slow so we actually have time to cancel:
python blocking-server/slowpoetry.py --port 10001 poetry/fascination.txt
Now we can start up our proxy (remember you need Twisted 10.1.0):
python twisted-server-4/poetry-proxy.py --port 10000 10001
Now we can start downloading a poem from the proxy using any client, or even just curl:
curl localhost:10000
After a few seconds, press Ctrl-C to stop the client, or the curl process. In the terminal running the proxy you should see this output:
Fetching poem from server.
Canceling poem download.
And you should see that the slow server has stopped printing output for each bit of poem it sends, since our proxy hung up. You can start and stop the client multiple times to verify the download is canceled each time. But if you let the poem run to completion, the proxy caches the poem and sends it immediately thereafter.
One More Wrinkle
We said several times above that canceling an already-fired deferred has no effect. Well, that’s not quite true. In Part 13 we learned that the callbacks and errbacks attached to a deferred may return deferreds themselves. And in that case, the original (outer) deferred pauses the execution of its callback chains and waits for the inner deferred to fire (see Figure 28).
Thus, even though a deferred has fired the higher-level code that made the asynchronous request may not have received the result yet, because the callback chain is paused waiting for an inner deferred to finish. So what happens if the higher-level code cancels that outer deferred? In that case the outer deferred does not cancel itself (it has already fired after all); instead, the outer deferred cancels the inner deferred.
So when you cancel a deferred, you might not be canceling the main asynchronous operation, but rather some other asynchronous operation triggered as a result of the first. Whew!
We can illustrate this with one more example. Consider the code in deferred-cancel/defer-cancel-12.py:
from twisted.internet import defer

def cancel_outer(d):
    print "outer cancel callback."

def cancel_inner(d):
    print "inner cancel callback."

def first_outer_callback(res):
    print 'first outer callback, returning inner deferred'
    return inner_d

def second_outer_callback(res):
    print 'second outer callback got:', res

def outer_errback(err):
    print 'outer errback got:', err

outer_d = defer.Deferred(cancel_outer)
inner_d = defer.Deferred(cancel_inner)

outer_d.addCallback(first_outer_callback)
outer_d.addCallbacks(second_outer_callback, outer_errback)

outer_d.callback('result')

# at this point the outer deferred has fired, but is paused
# on the inner deferred.

print 'canceling outer deferred.'
outer_d.cancel()

print 'done'
In this example we create two deferreds, the outer and the inner, and have one of the outer callbacks return the inner deferred. First we fire the outer deferred, and then we cancel it. The example produces this output:
first outer callback, returning inner deferred
canceling outer deferred.
inner cancel callback.
outer errback got: [Failure instance: Traceback (failure with no frames): <class 'twisted.internet.defer.CancelledError'>: ]
done
As you can see, canceling the outer deferred does not cause the outer cancel callback to fire. Instead, it cancels the inner deferred so the inner cancel callback fires, and then the outer errback receives the CancelledError (from the inner deferred).
You may wish to stare at that code a while, and try out variations to see how they affect the outcome.
Discussion
Canceling a deferred can be a very useful operation, allowing our programs to avoid work they no longer need to do. And as we have seen, it can be a little bit tricky, too.
One very important fact to keep in mind is that canceling a deferred doesn’t necessarily cancel the underlying asynchronous operation. In fact, as of this writing, most deferreds won’t really “cancel”, since most Twisted code was written prior to Twisted 10.1.0 and hasn’t been updated. This includes many of the APIs in Twisted itself! Check the documentation and/or the source code to find out whether canceling the deferred will truly cancel the request, or simply ignore it.
And the second important fact is that simply returning a deferred from your asynchronous APIs will not necessarily make them cancelable in the complete sense of the word. If you want to implement canceling in your own programs, you should study the Twisted source code to find more examples. Cancellation is a brand new feature so the patterns and best practices are still being worked out.
Looking Ahead
At this point we’ve learned just about everything about Deferreds and the core concepts behind Twisted. Which means there’s not much more to introduce, as the rest of Twisted consists mainly of specific applications, like web programming or asynchronous database access. So in the next couple of Parts we’re going to take a little detour and look at two other systems that use asynchronous I/O to see how some of their ideas relate to the ideas in Twisted. Then, in the final Part, we will wrap up and suggest ways to continue your Twisted education.
On Apr 10, 2020, at 13:29, Elliott Dehnbostel pydehnbostel@gmail.com wrote:
We could do this:

chars = "abcaaabkjzhbjacvb"
seek = {'a','b','c'}
count = sum([1 for a in chars if a in seek])

However, this changes important semantics by creating an entire new list before summing.
It sounds like you really are just looking for generator expressions, which were added in Python 2.4 a couple decades ago. They have the same syntax as list comprehensions without the square brackets, but they generate one element at a time instead of generating a new list. If that’s your only problem here, just remove the square brackets and you’re done.
Also, adding just one more expression to the most nested block thwarts that refactor.
No it doesn’t:
count = sum(1 for a in chars if a.isalpha() and a in seek)
And you don’t have to stop with adding another expression; you can add a whole new if clause, or even a nested for clause:
count = sum(1 for s in strings if s.isalpha() for a in s if a in seek)
Of course if you add too much, it becomes a lot harder to read—we’re already pushing it here. But in that case, you can always pull out any subexpression into a function:
def matches(s):
    return (a for a in s if a in seek)

count = sum(1 for s in strings for a in matches(s))
… which can often then be simplified further (it should be obvious how here).
Or, even better, you can usually create a pipeline of two (or more) generator expressions:
chars = (a for s in strings for a in s)
count = sum(1 for a in chars if a in seek)
(And sometimes it’s natural to replace some of these with map, itertools functions, two-arg iter, etc.; sometimes it’s not. Do whichever one is more natural in each case, of course.)
The great thing about this style is that you can pipeline a dozen transformations without any nesting, and without adding any confusion. They just pile up linearly and you can read each one on its own. David Beazley’s “Generator Tricks for Systems Programmers” presentation has some great examples, which demonstrate it a lot better than I ever could.
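In that spirit, here is a small stdlib-only pipeline in the style of those examples (the data and stage names here are made up for illustration): each stage is a generator expression, and nothing is materialized until the final sum.

```python
strings = ['ab1', 'zz', 'a-c', 'bbb']
seek = {'a', 'b', 'c'}

# Stage 1: flatten the strings into a stream of characters.
chars = (a for s in strings for a in s)
# Stage 2: keep only alphabetic characters.
letters = (a for a in chars if a.isalpha())
# Stage 3: count the ones we're looking for.
count = sum(1 for a in letters if a in seek)

print(count)  # 7
```

Each stage reads linearly on its own, and adding or removing a transformation is a one-line change that never increases nesting.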
And of course you can always fall back to writing nested block statements. The nice declarative syntax of comprehensions is great when it’s simple enough to understand the flow at a glance—but when it isn’t, that usually means you need a visual guide to understand the flow, and that’s exactly what indentation is for. And if there are too many indents, then that usually means you should be refactoring the inner part into a function.
Sure, “usually” isn’t “always”, both times. There are some cases that fall maddeningly in between, where a sort of hybrid where you had nested blocks but could collapse a couple together here and there would be the most readable possibility. I know I run into one a couple times/year. But I don’t think those cases are really much more common than that. Any new feature complicates Python—and this one would potentially encourage people to abuse the feature even when it’s really making things less readable rather than more.
Comprehension syntax is great _because_ comprehensions are limited. You know you can read things declaratively, because there can’t be any statements in there or any flow control besides the linear-nested clauses. That doesn’t apply to nested blocks.
|
Fundulopanchax gularis (Boulenger 1902)
Taken at the 2004 SKS convention.
From the Latin gula, meaning throat, which refers to the maroon throat markings. (Also seen on females.)
Boulenger G.A. 1901. (Fundulus gularis). Descriptions of Two New Fishes Discovered by Dr. W.J. Ansorge in Southern Nigeria. Proceedings of the Zoological Society London. II: p 623-624, plate 37, figure 2-3.
Line drawing used by Boulenger in the original description.
7 cm.
n = 16, A = 16 (Scheel 1968, 1975).
Gularopanchax
Populations
Agberi, on the Nun River, situated in the west of the Niger Delta. They were caught in 'shallow creeks & flooded Yam plantations'.
Niger & Upper Lobo River drainages eastwards to southwest Nigeria & southern Benin.
Sympatric sp. include Fp.sjoestedti, Fp.arnoldi, Fp.filamentosum, A.multicolor, A.calliurum.
Humid coastal rainforest marshes, raffia swamps & other swampy areas of small streams & floodplains.
Scheel in ROTOW 1 gives 2 photos on page 245. Kluge collected both forms in the same pool & stated that only one form was dominant at any one time. I would really like to put these photos in here but copyright prevents it.
Scheel in ROTOW 1 reports that males can be distinguished by a 'very dark red marking on the throat'. Boulenger, incidentally, used this marking in describing the species.
In the 1980's much confusion reigned over distinguishing between fallax, deltaense & gulare.
Fp.fallax can grow larger. Generally fallax has finer spots on the body. gularis tends to have larger spots forming a near-solid line through the upper centre of the body. The caudal fin has the same characteristic 'split' in both species (amieti also has this 'split') where the upper half is spotted & the lower half is clear. fallax has many fine spots (lines in some populations, Malende). Also the lower edge of this fin tends to have an outer marginal band of yellow with a sub-marginal band of red. In fallax this outer marginal band can be absent, present as a thin band, or present as a spotted pattern.
Another method of separating the species is in the pectoral fins. gularis tends to be clear in many populations, with a faint outer marginal band in some populations (Ijebu Ode). fallax tends to have a marginal & a sub-marginal band in these fins.
The obvious distinguishing marking which separated deltaensis from gularis was the clear anal fin, which carried no spot or line.
Boulenger described this sp. from 16 specimens taken at Agberi in the western area of the Niger Delta.
Pellegrin, in 1907, reports a population from the Ntem River at the southernmost part of Cameroon. These may have come from the area near the mouth of this river as this is very swampy. These early collections were placed closest to Fp.sjoestedti.
Imported into Europe in 1905 & 1907. The first import is not certain. Arnold reports them being in Germany in 1907, but it has been reported that they were imported into Germany in 1905. Boulenger identified both imports as being gulare, but Arnold considered them sufficiently distinct from each other to call the 1905 import 'gulare var.A (blue)' & the 1907 import 'gulare var.B (yellow)'. See also Fp.sjoestedti.
This 1907 import was said to have been collected from the Niger Delta area & was reportedly feeble, not being reproduced. However, in 1911 aquarists in Germany reported breeding them. Arnold also reported that both blue & yellow forms had been crossed in Vienna. Scheel had his doubts about this crossing.
In 1913 Krüger reported receiving preserved individuals of this species from Schwab, a missionary based in Ebolowa, southern Cameroon. These were reportedly collected in brackish water. These may have been a similar form to Pellegrin's.
Pellegrin reported this species from Nyabessan in 1929.
Boulenger gives the following collectors / locations in his 1915 Catalogue.
In 1955 they were imported into Holland. Scheel had fish from this import along with Fp.filamentosus. The males of the Fp.gularis died en route to Copenhagen.
Clausen collected them at Ago-Iwoye, close to Ijebu Ode, southwest Nigeria. A live male from this collection was given to Scheel, who crossed it with a gardneri Akure female. He reported that this crossing was 'very viable & almost sterile in backcrossings'.
In 1961 another import containing Fp.gularis reached Holland. These were photographed by Nieuwenhuizen & de Looze.
Imported to the US in 1962.
History of the synonym Fundulus beauforti (non Ahl 1924) Klee & Turner 1962
In 1962 Klee & Turner identified some aquarium-kept fish as beauforti. These were later identified as Fp.gularis.
In 1963 Radda considered gustavi & schreineri to be synonyms of beauforti. This also was found to be in error.
Two to three females per male is the best ratio. Use a larger tank for breeding with either bottom mops or a layer of peat. This peat can be taken out every fortnight & dried for around 6 weeks. Reports would suggest they can be ready to hatch between 5 & 9 weeks. Other reports suggest a minimum of 8 weeks. In this set up 100 eggs may be found in a single day's spawning. Fry are capable of taking newly hatched brine shrimp as a first food. Growth is quite rapid, with sexual maturity being attained around 4-5 weeks. Fry can be susceptible to velvet, so cleanliness & regular water changes are advised.
Sterba in Freshwater Fishes reported males to be very quarrelsome. Young hatch in 3-8 weeks, usually after 5 weeks.
1mm.
One of the more aggressive sp. of Fundulopanchax.
Velvet is a particular killer. Not a sp. for the beginner.
|
This article looks at how to prevent CSRF attacks in Flask. Along the way, we'll look at what CSRF is, an example of a CSRF attack, and how to protect against CSRF via Flask-WTF.
Contents
What is CSRF?
CSRF, which stands for Cross-Site Request Forgery, is an attack against a web application in which the attacker attempts to trick an authenticated user into performing a malicious action. Most CSRF attacks target web applications that use cookie-based auth since web browsers include all of the cookies associated with a particular domain with each request. So when a malicious request is made from the same browser, the attacker can easily make use of the stored cookies.
Such attacks are often achieved by tricking a user into clicking a button or submitting a form. For example, say your banking web app is vulnerable to CSRF attacks. An attacker could create a clone of your banking web site that contains the following form:
<form action="" method="POST"> <input type="hidden" name="transaction" value="transfer"> <input type="hidden" name="amount" value="100"> <input type="hidden" name="account" value="999"> <input type="submit" value="Check your statement now"> </form>
The attacker then sends you an email that appears to come from your bank -- cemtralbenk.com instead of centralbank.com -- indicating that your bank statement is ready to view. After clicking the link in the email, you're taken to the malicious web site with the form. You click the button to check your statement. The browser will then automatically send the authentication cookie along with the POST request. Since you're authenticated, the attacker will be able to perform any action that you're allowed to do. In this case, $100 is transferred from your account to account number 999.
Think of all the spam emails you receive daily. How many of them contain hidden CSRF attacks?
Flask Example
Next, let's look at an example of a Flask app that's vulnerable to CSRF attacks. Again, we'll use the banking web site scenario.
That app has the following features:
- Login form that creates a user session
- Account page that displays account balance and a form to transfer money
- Logout button to clear the session
It uses Flask-Login for handling auth and managing user sessions.
You can clone down the app from the csrf-flask-insecure branch of the csrf-example repo. Follow the directions on the readme for installing the dependencies and run the app on:
$ python app.py
Make sure you can log in with:
- username: test
- password: test
After logging in, you'll be redirected to the account page. Take note of the session cookie:
The browser will send the cookie with each subsequent request made to the localhost:5000 domain. Take note of the route associated with the account page:
@app.route("/accounts", methods=["GET", "POST"])
@login_required
def accounts():
    user = get_user(current_user.id)

    if request.method == "POST":
        amount = int(request.form.get("amount"))
        account = int(request.form.get("account"))
        transfer_to = get_user(account)

        if amount <= user["balance"] and transfer_to:
            user["balance"] -= amount
            transfer_to["balance"] += amount

    return render_template(
        "accounts.html",
        balance=user["balance"],
        username=user["username"],
    )
Nothing too complex here: On a POST request, the provided amount is subtracted from the user's balance and added to the balance associated with the provided account number. When the user is authenticated, the bank server essentially trusts the request from the browser. Since this route handler isn't protected from a CSRF attack, an attacker can exploit this trust by tricking someone into performing an operation on the bank server without their knowledge. This is exactly what the hacker/index.html page does:
<form hidden id="hack" action="http://localhost:5000/accounts" method="POST">
    <input type="number" name="amount" value="2000">
    <input type="number" name="account" value="2">
</form>
<iframe hidden></iframe>
<h3>You won $100,000</h3>
<button onClick="hack();" id="button">Click to claim</button>
<br>
<div id="warning"></div>
<script>
    function hack() {
        document.getElementById("hack").submit();
        document.getElementById("warning").innerHTML="check your bank balance!";
    }
</script>
You can serve up this page at http://localhost:8002 by navigating to the project directory and running the following command in a new terminal window:
$ python -m http.server --directory hacker 8002
Other than the poor design, nothing seems suspicious to ordinary eyes. But behind the scenes, there's a hidden form that executes in the background removing all the money from the user's account.
An attacker could email a link to this page disguised as some sort of prize giveaway. Now, after opening the page and clicking the "Click to claim" button, a POST request is sent to the bank server that exploits the trust established between the bank and the web browser.
How to Prevent CSRF?
CSRF attacks can be prevented by using a CSRF token -- a random, unguessable string -- to validate the request origin. For unsafe requests with side effects like an HTTP POST form submission, you must provide a valid CSRF token so the server can verify the source of the request for CSRF protection.
CSRF Token Workflow
- The client sends a POST request with their credentials to authenticate
- If the credentials are correct, the server generates a session and CSRF token
- The request is sent back the client, and the session is stored in a cookie while the token is rendered in a hidden form field
- The client includes the session cookie and the CSRF token with the form submission
- The server validates the session and the CSRF token and accepts or rejects the request
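The token half of this workflow can be sketched with nothing but the standard library — a simplified model of what a library like Flask-WTF handles for you, not its actual implementation (the secret value and function names here are invented):

```python
import hashlib
import hmac
import secrets

SECRET_KEY = b'secret_sauce'  # server-side secret (hypothetical value)

def generate_csrf_token(session_id):
    """Derive an unguessable token bound to this session via HMAC."""
    return hmac.new(SECRET_KEY, session_id.encode(), hashlib.sha256).hexdigest()

def validate_csrf_token(session_id, submitted_token):
    """Recompute the expected token and compare in constant time."""
    expected = generate_csrf_token(session_id)
    return hmac.compare_digest(expected, submitted_token)

session_id = secrets.token_hex(16)       # stored in the session cookie
token = generate_csrf_token(session_id)  # rendered in the hidden form field

print(validate_csrf_token(session_id, token))    # True
print(validate_csrf_token(session_id, 'forged')) # False
```

Because the token is derived from a server-side secret, an attacker's page can't compute a valid value to put in the hidden field, so the forged form submission fails validation.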
Let's now see how to implement CSRF protection in our example app using the Flask-WTF extension.
Start by installing the dependency:
$ pip install Flask-WTF
Next, register CSRFProtect globally in app.py:
from flask import Flask, Response, abort, redirect, render_template, request, url_for
from flask_login import (
    LoginManager,
    UserMixin,
    current_user,
    login_required,
    login_user,
    logout_user,
)
from flask_wtf.csrf import CSRFProtect

app = Flask(__name__)
app.config.update(
    DEBUG=True,
    SECRET_KEY="secret_sauce",
)

login_manager = LoginManager()
login_manager.init_app(app)

csrf = CSRFProtect()
csrf.init_app(app)

...
Now, by default, all POST, PUT, PATCH, and DELETE methods are protected against CSRF. Take note of this. You should never perform a side effect, like changing data in the database, via a GET request.
Next, add the hidden input field with the CSRF token to the forms.
templates/index.html:
<form action='/' method='POST' autocomplete="off">
    <input type='text' name='username' id='email' placeholder='username'/>
    <input type='password' name='password' id='password' placeholder='password'/>
    <input type='submit' name='submit' value='login'/>
    <input type="hidden" name="csrf_token" value="{{ csrf_token() }}">
</form>
templates/accounts.html:
<h3>Central bank account of {{username}}</h3>
<a href="/logout">logout</a>
<br><br>
<p>Balance: ${{balance}}</p>
<form action="/accounts" method="POST" autocomplete="off">
    <p>Transfer Money</p>
    <input type="text" name="account" placeholder="accountid">
    <input type="number" name="amount" placeholder="amount">
    <input type="submit" value="send">
    <input type="hidden" name="csrf_token" value="{{ csrf_token() }}">
</form>
That's it. This will take care of CSRF for you. Now, let's see if this prevents the attack. Run both servers again. Log in to the banking app, and then try the "Click to claim" button. You should see a 400 error:
What happens if you add the same hidden field to the form in hacker/index.html?
<form hidden id="hack" action="http://localhost:5000/accounts" method="POST">
    <input type="number" name="amount" value="2000">
    <input type="number" name="account" value="2">
    <input type="hidden" name="csrf_token" value="123">
</form>
It should still fail with a 400. We have successfully prevented the CSRF attack.
CORS and JSON APIs
For JSON APIs, having a properly configured Cross-Origin Resource Sharing (CORS) policy is important, but it does not in itself prevent CSRF attacks. In fact, it can make you more vulnerable to CSRF if CORS is not correctly configured.
For preflight requests, CORS policies define who can and cannot access particular resources. Such requests are triggered by the browser when XMLHttpRequest or Fetch are used.
When a preflight request is triggered, the browser sends a request to the server asking for the CORS policy (allowed origins, allowed request types, etc.). The browser then checks the response against the original request. If the request doesn't meet the requirements, the browser will reject it.
Simple requests, meanwhile, like a POST request from a browser-based form submission, don't trigger a preflight request, so the CORS policy doesn't matter.
So, if you do have a JSON API, limiting the allowed origins or eliminating CORS altogether is a great way to prevent unwanted requests. You don't need to use CSRF tokens in that situation. If you have a more open CORS policy with regard to origins, it's a good idea to use CSRF tokens.
Conclusion
We've seen how an attacker can forge a request and perform operations without the user's knowledge. As browsers become more secure and JSON APIs are used more and more, CSRF is fortunately becoming less and less of a concern. That said, in the case of a simple request like a POST request from a form, it's vital to secure route handlers that handle such requests, especially when Flask-WTF makes it so easy to protect against CSRF attacks.
https://testdriven.io/blog/csrf-flask/
BusyBox simplifies embedded Linux systems
A small toolkit for small environments
The birth of BusyBox
BusyBox was first written by Bruce Perens in 1996 for the Debian GNU/Linux setup disk. The goal was to create a bootable GNU/Linux system on a single floppy disk that could be used as an install and rescue disk. A single floppy disk can hold around 1.4-1.7MB, so there's not much room available for the Linux kernel and associated user applications.
BusyBox exploits the fact that the standard Linux utilities share many common elements. For example, many file-based utilities (such as grep and find) require code to recurse a directory in search of files. When the utilities are combined into a single executable, they can share these common elements, which results in a smaller executable. In fact, BusyBox can pack almost 3.5MB of utilities into around 200KB. This provides greater functionality to bootable floppy disks and embedded devices that use Linux. You can use BusyBox with both the 2.4 and 2.6 Linux kernels.
How does BusyBox work?
To make one executable look like many executables, BusyBox exploits a seldom-used feature of argument passing to the main C function. Recall that the C main function is defined as follows:
Listing 1. The C main function
int main( int argc, char *argv[] )
In this definition, argc is the number of arguments passed in (argument count) and argv is an array of strings representing options passed in from the command line (argument vector). Index 0 of argv is the program name that was invoked from the command line.
The simple C program shown in Listing 2 demonstrates BusyBox invocation. It simply emits the contents of the argv vector.
Listing 2. BusyBox uses argv[0] to determine which application to invoke
// test.c
#include <stdio.h>

int main( int argc, char *argv[] )
{
  int i;

  for (i = 0 ; i < argc ; i++) {
    printf("argv[%d] = %s\n", i, argv[i]);
  }

  return 0;
}
Invoking this application shows that the first argument invoked is the name of the program. You can rename your executable, and you get the new name upon invocation. Further, you can create a symbolic link to your executable, and you get the symlink name when it's invoked.
Listing 3. Command testing after updating BusyBox with a new command
$ gcc -Wall -o test test.c
$ ./test arg1 arg2
argv[0] = ./test
argv[1] = arg1
argv[2] = arg2
$ mv test newtest
$ ./newtest arg1
argv[0] = ./newtest
argv[1] = arg1
$ ln -s newtest linktest
$ ./linktest arg
argv[0] = ./linktest
argv[1] = arg
BusyBox uses symbolic links to make one executable look like many. For each of the utilities contained within BusyBox, a symbolic link is created so that BusyBox is invoked. BusyBox then invokes the internal utility as defined by argv[0].
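The dispatch mechanism described above can be sketched as follows. This is a simplified, hypothetical applet table; real BusyBox generates its table from applets.h and differs in the details:

```cpp
#include <cstring>
#include <cstddef>

// A function pointer type for applet entry points, mirroring main()'s shape.
typedef int (*applet_fn)(int argc, char *argv[]);

// Dummy applet implementations for the sketch.
static int pwd_main(int, char *[])  { return 0; }
static int echo_main(int, char *[]) { return 0; }

struct applet { const char *name; applet_fn fn; };

// Hypothetical applet table; each entry maps a command name to a handler.
static const applet applets[] = {
    { "echo", echo_main },
    { "pwd",  pwd_main  },
};

// Strip any leading path from argv[0], then look the applet up by name.
applet_fn find_applet(const char *argv0)
{
    const char *slash = std::strrchr(argv0, '/');
    const char *name = slash ? slash + 1 : argv0;
    for (std::size_t i = 0; i < sizeof applets / sizeof applets[0]; ++i)
        if (std::strcmp(applets[i].name, name) == 0)
            return applets[i].fn;
    return nullptr;
}
```

Because the lookup only uses the final path component, /bin/pwd, ./pwd, and pwd all resolve to the same handler, which is exactly why symlinking every command name to one binary works.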
Configuring and building BusyBox
You can download the latest version of BusyBox from its Web site (see the Related topics section). Like most open source programs, it's distributed in a compressed tarball, and you can transform it into a source tree using the command in Listing 4. (If you downloaded a version other than 1.1.1, use the appropriate version number in this and other version-specific commands.)
Listing 4. Untarring BusyBox
$ tar xvfz busybox-1.1.1.tar.gz
$
The result is a directory, called busybox-1.1.1, that contains the BusyBox source code. To build the default configuration, which includes almost everything with debugging disabled, use the defconfig make target:
Listing 5. Building the default BusyBox configuration
$ cd busybox-1.1.1
$ make defconfig
$ make
$
The result is a rather large BusyBox image, but it's the simplest way to get started. You can invoke this new image directly, which results in a simple Help page with the currently configured commands. To test your image, you can also invoke BusyBox with a command to execute, as shown in Listing 6.
Listing 6. Demonstrating BusyBox command execution and the ash shell in BusyBox
$ ./busybox pwd
/usr/local/src/busybox-1.1.1
$ ./busybox ash
/usr/local/src/busybox-1.1.1 $ pwd
/usr/local/src/busybox-1.1.1
/usr/local/src/busybox-1.1.1 $ exit
$
In this example, you invoke the pwd (print working directory) command, enter the ash shell within BusyBox, and invoke pwd within ash.
Manual configuration
If you're building an embedded device that has very specific needs, you can manually configure the contents of your BusyBox with the menuconfig make target. If you're familiar with building a Linux kernel, note that menuconfig is the same target for configuring the contents of the Linux kernel. In fact, the ncurses-based application is the same.
Using manual configuration, you can specify the commands to be included in the final BusyBox image. You can also configure the BusyBox environment, such as including support for the United States National Security Agency's (NSA) Security-Enhanced Linux (SELinux), specifying the compiler to use (for cross-compiling in an embedded environment), and whether BusyBox should be compiled statically or dynamically. Figure 1 shows the main screen for menuconfig. Here you can see the different major classes of applications (applets) that you can configure for BusyBox.
Figure 1. BusyBox configuration using menuconfig
To manually configure BusyBox, use the following commands:
Listing 7. Manually configuring BusyBox
$ make menuconfig
$ make
$
This provides you with a BusyBox binary that can be invoked. The next step is to build an environment around BusyBox, including the symbolic links that redirect the standard Linux commands to the BusyBox binary. You can do this very simply with the following command:
Listing 8. Building the BusyBox environment
$ make install
$
By default, a new local subdirectory is created, called _install, which contains the basic Linux environment. At the root, you'll find a linuxrc program that links to BusyBox. The linuxrc program is useful when building an install or rescue disk (permits a modularized boot prior). Also at the root is a /sbin subdirectory that contains operating system binaries (used primarily for administration), and a /bin subdirectory that contains binaries intended for users. You can then migrate this _install directory into your target environment when building a floppy distribution or embedded initial RAM disk. You can also use the PREFIX option with the make program to redirect the install subdirectory to a new location. For example, the following code segment installs the symlinks using the /tmp/newtarget root directory instead of the ./_install directory:
Listing 9. Installing symlinks to another directory
$ make PREFIX=/tmp/newtarget install
$
The links that are created through the install make target come from the busybox.links file. This file is created when BusyBox is compiled, and it contains the list of commands that have been configured. When install is performed, the busybox.links file is checked for the symlinks to create.
The command links to BusyBox can also be created dynamically at runtime using BusyBox. The CONFIG_FEATURE_INSTALLER option enables this feature, which can be performed at runtime as follows:
Listing 10. Creating command links at runtime
$ ./busybox --install -s
$
The -s option forces symbolic links to be created (otherwise, hard links are created). This option requires that the /proc file system is present.
BusyBox build options
BusyBox includes several build options to help you build and debug the right BusyBox for you.
Table 1. Some of the make options available for BusyBox
When a configuration is defined, you just need to type make to actually build the BusyBox binary. For example, to build BusyBox for all applications, you can do the following:
Listing 11. Building the BusyBox binary
$ make allyesconfig
$ make
$
Shrinking BusyBox
If you're really serious about shrinking the size of your BusyBox image, here are two things to keep in mind:
- Never build as a static binary (which includes all needed libraries in the image). Instead, if you build as a shared image, it uses the available libraries that are used by other applications (for example, /lib/libc.so.X).
- Build with the uClibc, which is a size-optimized C library that was developed for embedded systems, rather than building with the standard glibc (GNU C library).
Options supported in BusyBox commands
The commands in BusyBox don't support all of the options commonly available, but they do contain the options that are used most often. If you need to know which options are supported for a command, you can invoke it with the --help option, as shown in Listing 12.
Listing 12. Invoking the --help option
$ ./busybox wc --help
BusyBox v1.1.1 (2006.04.09-15:27+0000) multi-call binary

Usage: wc [OPTION]... [FILE]...

Print line, word, and byte counts for each FILE, and a total line if
more than one FILE is specified. With no FILE, read standard input.

Options:
        -c      print the byte counts
        -l      print the newline counts
        -L      print the length of the longest line
        -w      print the word counts
$
This particular data is available only if the CONFIG_FEATURE_VERBOSE_USAGE option is enabled. Without this option, you won't get the verbose data, but you'll also save about 13KB.
Adding new commands to BusyBox
Adding a new command to BusyBox is simple because of its well-defined architecture. The first step is to choose a location for your new command's source. Select the location based on the type of command (networking, shell, and so on), and be consistent with other commands. This is important because your new command will ultimately show up in the particular configuration menu for menuconfig (in this case, in the Miscellaneous Utilities menu).
For this example, I've called the new command (newcmd) and placed it in the ./miscutils directory. The new command's source is shown in Listing 13.
Listing 13. Source for new command to integrate into BusyBox
#include "busybox.h"

int newcmd_main( int argc, char *argv[] )
{
  int i;

  printf("newcmd called:\n");

  for (i = 0 ; i < argc ; i++) {
    printf("arg[%d] = %s\n", i, argv[i]);
  }

  return 0;
}
Next, add your new command source to Makefile.in in the chosen subdirectory. In this example, I update ./miscutils/Makefile.in. Add your new command in alphabetical order to maintain consistency with the existing commands:
Listing 14. Adding command to Makefile.in
MISCUTILS-$(CONFIG_MT) += mt.o
MISCUTILS-$(CONFIG_NEWCMD) += newcmd.o
MISCUTILS-$(CONFIG_RUNLEVEL) += runlevel.o
Next, update the configuration file, again within the ./miscutils directory, to make your new command visible within the configuration process. This file is called Config.in, and your new command is added in alphabetical order:
Listing 15. Adding command to Config.in
config CONFIG_NEWCMD
  bool "newcmd"
  default n
  help
    newcmd is a new test command.
This structure defines a new config entry (through the config keyword) and then the config option (CONFIG_NEWCMD). Your new command will either be enabled or disabled, so use the bool (Boolean) menu attribute for configuration. Its default is disabled (n for No), and you end with a short Help description. You can see the entire grammar for the configuration syntax in the source tree at ./scripts/config/Kconfig-language.txt.
Next, update the ./include/applets.h file to include your new command. Add the following line to this file, remembering to keep it in alphabetical order. It's important to maintain this order, otherwise your command will not be found.
Listing 16. Adding command to applets.h
USE_NEWCMD(APPLET(newcmd, newcmd_main, _BB_DIR_USER_BIN, _BB_SUID_NEVER))
This defines your command name (newcmd), its function name in the BusyBox source (newcmd_main), where the link will be created for this new command (in this case, in the /usr/bin directory), and, finally, whether the command has permissions to set the user id (in this case, no).
The penultimate step is to add detailed Help information to the ./include/usage.h file. As you'll see from examples in this file, usage information can be quite verbose. In this case, I've just added a little information so I can build the new command:
Listing 17. Adding help information to usage.h
#define newcmd_trivial_usage "None"
#define newcmd_full_usage "None"
The final step is to enable your new command (through make menuconfig, in the Miscellaneous Utilities menu) and then build BusyBox with make.
With your new BusyBox available, you can test your new command, as shown in Listing 18.
Listing 18. Testing your new command
$ ./busybox newcmd arg1
newcmd called:
arg[0] = newcmd
arg[1] = arg1
$ ./busybox newcmd --help
BusyBox v1.1.1 (2006.04.12-13:47+0000) multi-call binary

Usage: newcmd None

None
That's it! The BusyBox developers made a tool that's not only great but also simple to extend.
Summary
BusyBox is a great tool for building memory-constrained embedded systems and also floppy-disk based systems. BusyBox shrinks the size of a variety of necessary tools and utilities by pulling them together into a single executable and allowing them to share the common aspects of their code. BusyBox is a useful tool for your embedded toolbox and, therefore, worth your time to explore.
Related topics
- Download the latest release of BusyBox. You'll also find the latest news, erratum, and tutorials for using and amending BusyBox.
- uClibc is a reduced memory footprint replacement for glibc. While requiring fewer resources than glibc, porting applications to uClibc typically requires only a recompile.
- The POSIX FAQ at the Open Group can help you learn more about POSIX. Part three of this specification details shells and utilities in particular.
- LinuxTiny is a series of patches that reduce the memory and disk footprint of the 2.6 Linux kernel to as little as 2MB of RAM. If you're interested in shrinking the 2.6 Linux kernel, check out this work by Matt Mackall.
- With IBM trial software, available for download directly from developerWorks, build your next development project on Linux.
https://www.ibm.com/developerworks/library/l-busybox/
I'm still in the process of understanding the concept of data conversion using operator overloading.
I have a class named Int that acts like an int and does addition calculation.
Here is the code:
#include <iostream>
using namespace std;
#include <stdlib.h>

class Int
{
private:
    int var;
public:
    Int(): var(0) {}
    Int(int in): var(in) {}

    Int operator + (Int v)
    {
        long double temp;
        temp=long double(var)+long double(v.var);
        cout<<" temp is "<<temp<<endl;
        if (temp>2147483648)
        {
            cout<<" Number is too big ";
            exit(1);
        }
        return static_cast<int>(temp);
    }

    void getVal()
    {
        cout<<"Enter var "<<endl;
        cin>>var;
    }

    void showVal()
    {
        cout<<"var is "<<var<<endl;
    }
};

void main()
{
    Int v1, v2, v3;
    v1.getVal();
    v2.getVal();
    v3=v1+v2;
    v3.showVal();
}
Question:
1. How come compiler doesn't complain when I do:
temp=long double(var)+long double(v.var);
in overloading + function. Since v.var is user defined Int and I try to convert it to long double, I expect compiler to fail it.
2. This statement actually will cause the compiler to fail:
temp=long double(var)+long double(v);
unless I add Int to int conversion to the class:
operator int() {return var;}
How does the conversion operator work? Does it execute automatically every time a user-defined Int variable is assigned to a basic data type?
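To illustrate the mechanism being asked about, here is a minimal sketch (simplified from the class above; the helper name as_long_double is made up for the example) showing where a conversion operator kicks in:

```cpp
class Int {
    int var;
public:
    Int(int in) : var(in) {}
    // User-defined conversion: lets an Int be used wherever an int is expected.
    operator int() const { return var; }
};

// The compiler inserts operator int() automatically: Int -> int (user-defined
// conversion), then int -> long double (standard conversion).
long double as_long_double(Int v)
{
    return v;
}
```

In general, the compiler applies at most one user-defined conversion per implicit conversion sequence, which is why adding operator int() makes expressions like long double(v) start to compile.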
3. Is:
long double(var)
same thing as:
static_cast<long double>(var)
?
Very much appreciated from ethereal1m
http://www.dreamincode.net/forums/topic/131716-convert-between-user-define-data-type-and-basic-data-type/
table of contents
NAME
ares_parse_ns_reply - Parse a reply to a DNS query of type NS into a hostent
SYNOPSIS
#include <ares.h>
int ares_parse_ns_reply(const unsigned char *abuf, int alen, struct hostent **host);
DESCRIPTION
The ares_parse_ns_reply function parses the response to a query of type NS into a struct hostent. The parameters abuf and alen give the contents of the response. The result is stored in allocated memory and a pointer to it stored into the variable pointed to by host. The nameservers are stored into the aliases field of the host structure. It is the caller's responsibility to free the resulting host structure using ares_free_hostent(3) when it is no longer needed.
RETURN VALUES
ares_parse_ns_reply returns ARES_SUCCESS when the response is successfully parsed, or an error code such as ARES_EBADRESP (malformed response), ARES_ENODATA (no answer in the response), or ARES_ENOMEM (out of memory).
https://manpages.debian.org/testing/libc-ares-dev/ares_parse_ns_reply.3.en.html
User Details
- User Since
- Aug 26 2015, 8:20 AM (202 w, 5 d)
Today
- Disallowed -fsanitize=function in combination with -fsanitize-minimal-runtime now.
Wed, Jun 26
Any thoughts on this? (cfe-commits had inadvertently been missing from subscribers, it touches clang as well as compiler-rt.)
Mon, Jun 24
friendly ping
Tue, Jun 18
friendly ping
Jun 11 2019
friendly ping
Jun 4 2019
friendly ping
May 21 2019
Of course, adding missing tests reveals shortcomings in the new code.
May 2 2019
Added missing tests at "Add tests for 'Adapt -fsanitize=function to SANITIZER_NON_UNIQUE_TYPEINFO'".
May 1 2019
Apr 23 2019
friendly ping
Apr 16 2019
Feb 13 2019
committed for now to get the crash fixed; if there are issues with the test they can be addressed later
Feb 11 2019
The change itself should probably be uncontroversial (the bad cast had been there ever since getFunctionTypeWithExceptionSpec had been introduced with r221918), but I'm not sure about the test: It tests the relevant code somewhat indirectly; is it fine in clang/test/AST/?; or is such a test even overkill?
Jan 16 2019
One problem I found with the macro __cpp_impl_destroying_delete not being conditional on language version is the following: Recent GCC trunk (since commit 76b94d4ba654e9af1882865933343d11f5c3b18b, "Implement P0722R3, destroying operator delete.") contains
#if __cpp_impl_destroying_delete
#define __cpp_lib_destroying_delete 201806L
namespace std
{
  struct destroying_delete_t
  {
    explicit destroying_delete_t() = default;
  };
  inline constexpr destroying_delete_t destroying_delete{};
}
#endif // destroying delete
at "top-level" (i.e., not in a C++20-only #if or similar) in libstdc++-v3/libsupc++/new. That means that when using Clang against that GCC toolchain, #include <new> in C++03 mode will cause error: unknown type name 'constexpr'.
Nov 1 2018
I run the new checker on LibreOffice project. I found ~25 false positives, which seems small enough to me. This false positives can be supressed easily.
Oct 23 2018
I've silenced this scenario in r344898, thank you for raising the issue!
Oct 17 2018
[...]
Then again, this is a case where you don't get any error but you do get a silent behavioral ambiguity without the current enumerator shadow diagnostic:

struct S1;
struct S2;
struct S3 {
  void S1();
  enum { S2 };
  void f(decltype(S2) s);
};
So there are cases where this behavior can be somewhat useful.
Oct 16 2018
Doesn't this make -Wshadow more aggressive for enumerators than for other entities?
May 31 2018
Apr 18 2018
see "clang-cl triggers ASTContext::getASTRecordLayout Assertion `D && 'Cannot get layout of forward declarations!''" for what appears to be fallout from this change
Apr 17 2018
A random data point from trying this patch on the LibreOffice code base:
It looks like this caused a clang-cl regression "clang-cl emits special functions for non-trivial C-structs ('__destructor_8') introduced for Objective-C".
Mar 7 2018
Turns out DLLAttr-inherited-from-class is only added to members during Sema::CheckCompletedClass -> Sema::checkClassLevelDLLAttribute, when friend re-decls of those members may already have been created.
Jan 5 2018
Should this be backported to Clang 6? Not sure how widespread a problem this is in practice (it hit me with LibreOffice).
Jan 4 2018
friendly ping
Dec 28 2017
made the recommended changes
Dec 21 2017
(need to call getAs<FunctionProtoType> instead of cast<FunctionProtoType> in one place, in case the name in the function decl is wrapped in parens, as happens in HarfBuzz's hb-buffer.cc)
As suggested, solve the issue instead by removing any "noexcept" from the typeinfo emitted for the -fsanitize=function checks.
Dec 20 2017
added a small IR test
Dec 19 2017
Dec 18 2017
Had to revert r320977/r320978 again with r320981/r320982: "At least."
Dec 13 2017
friendly ping
friendly ping; any input on my two questions maybe?
Dec 8 2017
Dec 1 2017
(Diff 125121 had accidentally contained a spurious "}". Fixed that now.)
Nov 30 2017
Nov 24 2017
Oh, I'd meant to upload the version of the patch where the newly added calls to EmitTypeCheck use the default Alignment and SkippedChecks arguments, instead of explicitly skipping Alignment and ObjectSize checks. (That's the version I had successfully used in a LibreOffice UBSan 'make check' test build. But I still don't know which version is better.)
Nov 21 2017
Some items I'm unclear about:
Nov 17 2017
(This code had been introduced with "Define _Bool, bool, true, and false macros in <stdbool.h> when we're in a GNU-compatible C++ dialect. Fixes rdar://problem/8477819" mentioning: "Fixes rdar://problem/8477819." But I don't know what that is.)
Nov 13 2017
Sep 20 2017
Sep 15 2017
...but I'm still curious as to why lib/abi/x86_64-unknown-linux-gnu.abilist should want to mention __cxa_pure_virtual but not __cxa_deleted_virtual
Sep 13 2017
Drop changes of files under lib/abi/<version>.
Sep 11 2017
Aug 18 2017
(the original diff had inadvertently lacked full -U999999 context; uploaded a new diff)
(the original diff had inadvertently lacked full -U999999 context; uploaded a new diff)
https://reviews.llvm.org/p/sberg/
A look at how you can use autoscaler to scale your worker nodes.
If you want to scale the pods in a Kubernetes cluster, this can be done easily through a ReplicaSet; but what if you want to scale your worker nodes? In this situation, the autoscaler can help you avoid having pods stuck in a pending state due to a lack of computational resources. It can increase or decrease the number of worker nodes in your cluster automatically based on resource demand.
Scale-up and scale-down
To start, we need to understand how scale-up and scale-down work, and it's important to understand the criteria used. The autoscaler works based on the resource request value defined for your deployments/pods and not on the value that is being consumed by the application.
- Scale-up: This situation occurs when you have pending pods because there are insufficient computing resources.
- Scale-down: This occurs when worker nodes are underutilized, that is, when the total resource requests on a node fall below the scale-down utilization threshold. The default threshold is utilization below 50%.
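These two criteria can be sketched as follows. All names and numbers are hypothetical; the real autoscaler reads pod requests and node allocatable capacity from the Kubernetes API, but the key point stands: decisions are driven by requested resources, not actual consumption:

```cpp
// Scale-down candidate: the sum of pod *requests* on a node is below the
// utilization threshold (default 50%) of its allocatable capacity.
bool node_underutilized(double requested_cpu, double allocatable_cpu,
                        double threshold = 0.5)
{
    return requested_cpu / allocatable_cpu < threshold;
}

// Scale-up trigger: pods stuck in Pending because no node can fit them.
bool needs_scale_up(int pending_pods)
{
    return pending_pods > 0;
}
```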
Step-by-step instructions
In this step-by-step guide, we will show you how to install and configure the autoscaler on your IBM Cloud Kubernetes Service cluster and perform a little test to see how it works in practice.
Before start, you'll need to install the required CLI into your computer: ibmcloud and Helm version 3 (the correct version is important due to the differences in commands between them).
1. Confirm that your credentials are stored in your Kubernetes cluster
If you do not have credentials stored, you can create one.
2. Check if your worker pool has the required label
If you don't have the required label, you have to add a new worker pool. If you don't know the <worker_pool_name_or_ID>, you can get it through this command:
3. Add and update the Helm repo into your computer
Note: If you try to add a repo and receive an error message that says "Error: Couldn't load repositories file...", you have to init the helm. On prompt, type:
helm init
4. Install the cluster autoscaler helm chart in the kube-system namespace:
workerpools[0]: The first worker pool to enable autoscaling.
<pool_name>: The name or ID of the worker pool.
max=<number_of_workers>: Specify the maximum number of worker nodes.
min=<number_of_workers>: Specify the minimum number of worker nodes.
It is necessary to set the min value of autoscaler to, at least, the current number of worker nodes of your pool because min size does not automatically trigger a scale-up.
If you set up the autoscaler with a min size below the number of current worker nodes, the autoscaler does not initiate and needs to be set to the correct value before works properly:
Note: In this option, we are using all the default values of the autoscaler and just specifying the minimum and maximum number of worker nodes. However, there are several options that you can change by using the --set option.
5. Confirm that the pod is running and the service has been created
6. Verify if the status of ConfigMap is in SUCCESS state
How to check if Autoscaler is working
In this example, we will perform a basic test by increasing the number of pods to check the scale-up and then decreasing it to check the scale-down.
Scale-up
Please remember that the autoscaler is performed based on the pod's request value, so we do not need to perform a stress test; we just have to increase the number of pods to achieve the worker-node limit.
If your deployment does not have the Requests set up accordingly, autoscaler won't work as you expect. In our example, we will use a simple Replica Set that deploys four nginx pods.
nginx-test-autoscaler.yaml
Note: Take a look at the requests value—we are requesting 100m for CPU and 500Mi for Memory. This is the value of the request for each pod.
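The listing for nginx-test-autoscaler.yaml is not shown above; a plausible reconstruction matching the description (four nginx replicas, each requesting 100m CPU and 500Mi memory; metadata names and labels are assumed) is:

```yaml
apiVersion: apps/v1
kind: ReplicaSet
metadata:
  name: nginx-test-autoscaler
spec:
  replicas: 4
  selector:
    matchLabels:
      app: nginx-test-autoscaler
  template:
    metadata:
      labels:
        app: nginx-test-autoscaler
    spec:
      containers:
      - name: nginx
        image: nginx
        resources:
          requests:
            cpu: 100m
            memory: 500Mi
```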
Create a replica set using the yaml file:
Check the pod status—one of them is in the "Pending" state:
Let's take a look at the pod's events to verify the reason:
In this case, the pod is in the pending state because there is insufficient memory on worker nodes.
Looking at the Kubernetes cluster inside the IBM Cloud portal, we can verify that a new node is being provisioned. Autoscaler identified that there is no computational resource to start the pod in pending state, so it is automatically scaling up to put the pod in running state:
After the nodes are provisioned, we can check the pod's status and the number of nodes:
Scale-down
In our test, we do not have workload in our pods so we just need to decrease the number of pods, which will decrease the total Requests. After the autoscaler identifies that the pods that are on a node have a request of less than 50% of its capacity, it will start the scale-down process.
Let's change the replicaset from four to two pods and confirm when only two pods are running:
Looking at the Kubernetes cluster inside the IBM Cloud portal, we can verify that one node is being deleted:
Customizing configuration values
You can change the cluster autoscaler configuration values using the helm upgrade command with --set option. If you want to learn more about the available parameters, see the following: Customizing configuration values (--set)
Here are two examples:
- Change the scan interval to 5m and enable autoscaling for the default worker pool, with a maximum of five and minimum of three worker nodes per zone:
- Change the threshold scale-down utilization to 0.7 and the maximum time in minutes before pods are automatically restarted:
- We saw in the examples above that there are options to customize the values, but what if you want to return to the default values? There is a command to reset the settings:
Conclusion
Autoscaler can help you to avoid having pods in a pending state in your environment due to lack of computational resources by increasing the number of worker nodes and by decreasing them if they are underutilized.
In this article, we gave just one example of how it can be used and explored the customization options to adapt to the best for your environment. It's also important to take into consideration the time that IBM Cloud can provision a new worker node in order to make adequate the threshold to be used in your environment. In this case, a validation is recommended before going into production.
Learn more
Want to get some free, hands-on experience with Kubernetes? Take advantage of interactive, no-cost Kubernetes tutorials by checking out IBM CloudLabs.
Follow IBM Cloud
Be the first to hear about news, product updates, and innovation from IBM Cloud.Email subscribeRSS
https://www.ibm.com/cloud/blog/how-to-use-the-ibm-cloud-kubernetes-services-autoscaler
#include <IO.h>

int io_init_reading( GapIO *io, int N);
int io_init_contig( GapIO *io, int N);
int io_init_annotations( GapIO *io, int N);
These functions create new reading, contig and annotation structures. Each takes two arguments: the first is the GapIO pointer, and the second is the new reading, contig or annotation number to create. This is not the number of new structures, but rather the highest allowed number for this structure. For instance, if we have 10 readings, "io_init_reading(io, 12)" will create two more, numbered 11 and 12.
For readings, the records are recovered (by increasing the GDatabase NumReadings field to NReadings) if available. The new GReadings structure are not guaranteed to be clear.
For contigs, the records are recovered if available. The contig_order array is also updated with the new contigs being added at the rightmost position. The new contigs are added to the registration scheme with blank registration lists. The new GContigs structures are not guaranteed to be clear.
For annotations, new records are always allocated from disk. It is up to the caller to first check that there are no free annotations in the free_annotations list. The new GAnnotations structures are not guaranteed to be clear.
All functions return 0 for success and -1 for failure.
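The "highest allowed number" semantics can be sketched like this (a toy model, not the actual Gap IO implementation, which works on database records):

```cpp
#include <vector>

// The N argument is the highest record number wanted, not a count of
// records to add; growing past the current size creates the new records.
void init_readings(std::vector<int> &readings, int N)
{
    while (static_cast<int>(readings.size()) < N)
        readings.push_back(0);  // real structures are not guaranteed cleared
}
```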
http://staden.sourceforge.net/scripting_manual/scripting_126.html
for Iterations - 08
This content is no longer current. Our recommendation for up to date content:
In this lesson we talk about arrays, which are multi-part variables—a "bucket" containing other "buckets," if you will. We demonstrate how to declare and utilize arrays, including setting and retrieving their values, initializing their values, attempting to access values outside of the boundaries of the array, and iterating through arrays using the foreach statement. Finally, we demonstrate a couple of powerful built-in methods that give arrays added features.
Download the source code for Creating Arrays of Values
I got a bit lost on a bit of this. I did some fiddling and started understanding more. I tried applying the Array.Reverse to the int array and that gave me a bit of a better understanding of how that line of code works. It's use then with the char array made more sense.
Also, what does the ".ToString()" in the "Console.WriteLine(numbers[i].ToString());" line get us? I did it with just "Console.WriteLine(numbers[i]);" and it worked.
@Dan12R ... First, thanks for the feedback ... this helps us refine the content going forward once we see where the rough spots are.
Second, to your question ... .ToString() ... in this SPECIFIC case, the .ToString() doesn't buy much because Console.WriteLine() can accept just about any data type you throw at it (In fact, there are 19 versions of this method ... based on what you pass into it, the correct version of that WriteLine() method will be called and the data converted appropriately for display in the Console window.
The reason why I added a .ToString() was out of habit - a habit born out of the need in .NET explicitly convert numeric data to a string when presenting on a screen (like a web page, a windows form, etc.)
So, the good news is that you fought through this and figured it out. Congrats. I'm sorry if the explanations were not quite clear enough on this one.
Hi Bob,
Your videos are very good, but I have to wait a long time for them to buffer. Can you tell me what internet speed is needed to play these videos without much buffering? Also, I want to buy these videos, and my friend in the US will pay on my behalf. Will I then be able to access the videos from my system?
@srinivas: Hi, sorry -- I'm a bit confused with your question. The videos on channel9.msdn.com are free. If you're referring to these videos, you should be able to download them directly from links next to the video player. If you are referring to my personal website, -- if streaming is a problem -- we do offer the ability to download the videos to your hard drive in a .zip format. Please see our site for more information. Thank you for your interest!
@srinivas: I have the same problem when looking at the videos, but then I found out that shutting down the Channel9 tab and finding it again from the startup tab solved the problem. I have to do this after each video, so it is a bit frustrating when I actually do have time to look at several videos in a row. Thanks for the videos btw Bob, great stuff and easy to follow!
I am a beginner to programming and I would like to say this is a great starting point. I have crawled the web to find your gem of a great lesson plan. I would like to say thank you.
Someone's a Lost fan, eh, Bob? ;)
But seriously, your lessons are extremely helpful, and perfectly paced in terms of how you introduce new concepts. Thanks so much for these.
@BobTabor I really like your videos they are definitely helping me understand the language!
Got a question though.
Let's say I want C# to create an array of numbers between 1 and 50 and then print that array to my screen or just store it in memory so that I can manipulate the data in the future.
What would be the most efficient way to do this? I could write out the array myself but what I'd really like is for C# to create the array for me.
Alternatively, if you know of an msdn article that explains how to do this, i'd be happy to do some reading too!
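To the question above about generating the numbers 1 through 50 programmatically: size the array first, then fill it with a loop. A sketch in Java (the C# version is nearly identical; swap System.out.println for Console.WriteLine, and C# additionally offers Enumerable.Range(1, 50).ToArray() as a one-liner):

```java
public class RangeArray {
    public static void main(String[] args) {
        // Fill an array with the numbers 1 through 50.
        int[] numbers = new int[50];
        for (int i = 0; i < numbers.length; i++) {
            numbers[i] = i + 1; // index 0 holds 1, index 49 holds 50
        }
        // Print the values, one per line.
        for (int n : numbers) {
            System.out.println(n);
        }
    }
}
```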
It is a palindrome program:
static void Main(string[] args)
{
string str = "";
Console.WriteLine("Enter a String");
string s = Console.ReadLine();
int i = s.Length;
for (int j = i-1; j >= 0; j--)
{
str = str + s[j];
}
if (str == s)
{
Console.WriteLine(s + " is palindrome");
}
else
{
Console.WriteLine(s + " is not palindrome");
}
Console.ReadLine();
}
My question is about this line: 'str = str + s[j];'.
What is the meaning of 's[j]'?
Is it an array?
If it is an array, how does it work?
Please explain it...
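To the question above: s[j] is not a separate array. In C#, the string type has a read-only indexer, so s[j] returns the character at position j of the string, and str = str + s[j] appends those characters one at a time, back to front. Java exposes the same idea through the charAt method, as in this sketch:

```java
public class ReverseByIndex {
    public static void main(String[] args) {
        String s = "level";
        String str = "";
        // Walk the string from its last index down to 0,
        // appending one character at a time (s.charAt(j) ~ C#'s s[j]).
        for (int j = s.length() - 1; j >= 0; j--) {
            str = str + s.charAt(j);
        }
        if (str.equals(s)) {
            System.out.println(s + " is palindrome");
        } else {
            System.out.println(s + " is not palindrome");
        }
    }
}
```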
Here is another palindrome program...
static void Main()
{
char[] n;
Console.WriteLine("Enter a string to check palindrome or not");
string a = Console.ReadLine();
n = a.ToCharArray();
Array.Reverse(n);
string n2 = new string(n);
//string n2 = n.ToString();
if (a == n2)
Console.WriteLine("palindrome");
else
Console.WriteLine("Is not palindrome");
}
My question is about these two lines:
'string n2 = new string(n);
//string n2 = n.ToString();'
When I write 'string n2 = new string(n);' it executes properly,
but when I use the commented line 'string n2 = n.ToString();' it executes but the result is not correct... why????
Please explain the difference between the two lines...
All your videos are excellent...
@Sudarsan Mandal: Lower-case 's' string is a C# data type. Upper-case 'S' String is a .NET object. The upper-case 'S' string has access to the .ToString() method. Hope that helps!
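On the ToString() question a few posts up: calling ToString() on a char array does not join the characters into text. It falls back to the default object ToString(), which returns the type name (in C#, "System.Char[]"), so the palindrome comparison always fails; the string constructor, by contrast, builds a string from the characters. Java behaves the same way, which this sketch demonstrates:

```java
public class CharArrayText {
    public static void main(String[] args) {
        char[] n = {'c', 'b', 'a'};
        String joined = new String(n);  // builds "cba" from the characters
        String fallback = n.toString(); // default Object text like "[C@1b6d3586"
        System.out.println(joined);
        System.out.println(fallback.startsWith("[C")); // true: not the joined text
    }
}
```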
Hey Bob your VIDEOS are awesome.
Just wanted to bring to the notice of the other users:
Console.WriteLine(myChar);
and
Console.Write(myChar);
produce two different results. I just ran into it by mistake, but it proved to be a great learning experience. Thanks once again for your videos, Bob.
@ZohaibRaza: Awesome ... always experiment ... great way to push the envelope and learn more like you just did. Best wishes to you!
I've been teaching myself C# for the last year, and while my progress is concrete enough to create a QuickBooks client for the business I work for, these videos are helping to fill in a lot of holes I didn't even realize I had. Specifically, I didn't know that Arrays were so powerful...even after a year! My boss is definitely buying me a lifetime subscription to this site.
@Clinton Billedeaux: Awesome. Thanks for the nice words. Glad this is working for you!
Thanks Bob for another very helpful tutorial. I have a question though when seeing that one: Are arrays only one-dimensional or can they have more dimensions too (like a matrix for example in mathematics)?
@Uli: Sorry I missed this post ... you can have multi-dimensional and even sparse arrays just like in other languages.
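A quick sketch of the two multi-dimensional flavors, in Java syntax (C# additionally has a distinct rectangular int[,] type on top of jagged int[][] arrays; the jagged form below maps over directly):

```java
public class MultiDim {
    public static void main(String[] args) {
        // A 2x3 "matrix": an array of rows, each row an array of columns.
        int[][] matrix = new int[2][3];
        matrix[1][2] = 42;

        // A jagged array: rows of different lengths.
        int[][] jagged = new int[2][];
        jagged[0] = new int[]{1};
        jagged[1] = new int[]{1, 2, 3};

        System.out.println(matrix[1][2]);     // 42
        System.out.println(jagged[1].length); // 3
    }
}
```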
@Bandy: Hi! When working with little demo apps, it is sometimes hard to see the need for a given language element. I'm working from memory here, but often I would use a for loop, not a foreach when working with arrays. I would use a foreach when working with collections, and I'm sure I'll be demonstrating that later in this series. So, for now, just be aware that there's several means of iterating through a grouping of data (like an array or collection) and later you can see better usage patterns for each. Hope that helps!
I'm following the lessons and most of what I see I have seen before. Let me qualify that statement: the last programming I did was in college, some 20 years ago, using Pascal 5.5. With the Arrays lesson I now need to really pay attention.

My goal is to modify the Sophia Bot application, which was written in 2007 by James Ashley and targeted the .NET 3.0 environment. I am trying to use it as a voice interface for a robot. I am using VS 2010 and will be using the MS Speech SDK 11, because I am using the MS Kinect and there will be a lot of rules for the grammars. The original program used the MS SAPI 5.0 TTS.

I have 2 questions: 1. Are arrays in C# infinite, or is there a limit to the number of objects an array can hold, and is there a limit to the object size within the array? 2. I should have asked this at the beginning lesson, but how do I determine which application type to use when I start up VS 2010? There are so many choices.

I really enjoy the lessons and you are doing a great job!!!! Thanks for the hard work that was put into this series.
@smithdavidp: (1) Check this out:
(2) I would suggest picking either the general development settings or settings specific to C#. I know there are differences in window placement and perhaps some keyboard shortcuts. There's not a lot of help on MSDN for this dialog (at least, that I could find). Check this out:
Good luck!
I just started learning C# for my Programming Fundamentals class and I have to say these videos are helping me so much more than long winded text books. You explain things clearly and go over them several times and what i like best is you suggest where there are hiccups that alot of coders have which shows you really know your stuff. Thanks so very much, your videos are a lifesaver. :)
i want to now about jagged array. if u can have a video tutorial please upload that
I am having problems with "Array.Reverse" and unable to reverse the string.
I get the following message "Error 2 The type or namespace name 'Reverse' does not exist in the namespace 'Array' (are you missing an assembly reference?)"
This is specifically regarding your exercise at 11:17
@Jay and @BobTabor: Well, I'm new to C# and I'm watching the videos. As far as I understand Jay's situation, it's better to always declare statements the way BobTabor said for the issue that you got, Jay:
using System;
But it's not required; it's just good practice to add it at the top of your code for later use. You can also reference the System namespace directly, for example:
System.Console.WriteLine();
But of course that means more typing, as BobTabor said, so it's better to declare it at the top of the file. If I'm wrong, correct me :)
And thanks for the videos, because I'm learning C#, which I need for my job. It's a challenge for me :)
hi Bob,
Thanks for sharing your knowledge and understanding with the world!!!
Hi Bob, again many thanks for teaching V3. I noticed you saved all as you go. As we develop a program we want to keep the old ones as reminder. How can I save the corrected program as a new project?
Thks a bunch.
Love your videos and you do a great job explaining. Not sure if there is anything I can do, because I am having a hard time seeing the text on your screen, so I have to download the code just to follow along?
Wow, really love your work. Looks like I'm the first Nigerian here. Thanks a lot.
Bob,
Fantastic video series! I'm picking things up much faster than I anticipated. I really appreciate them.
hi thanks for all the lessons
However, you seem to type a bunch of statements and numbers without explaining step by step what purpose they serve.
For example, name[] and then (number): one will get lost completely, as it is not made clear where exactly to put these, i.e. can you put these statements anywhere?
Great Series, i have gone through C# and HTML5 series. it saves my precious hours which i needed to spend reading books.
The following examples are in Spanish and cover one-dimensional and two-dimensional arrays.
Another example using a two-dimensional array:
The following code example is in Spanish and covers one-dimensional arrays in C#.
Example of code using a simple array:
opening thread
Hi Bob, Thanks for these lessons.
I'm having a little issue with charArray. When i run the program, each letter is displayed in its own line (vertically, not horizontally) I have Windows 8 and visual studio express 2012. This applies for regular and reverse array.
Thanks Golnazal for opening the thread
Clint, it seems like a WriteLine.
using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;
using System.Threading.Tasks;
namespace UnderstandingArrays
{
class Program
{
static void Main(string[] args)
{
//int[] numbers = new int[6];
//numbers[0] = 4;
//numbers[1] = 8;
//numbers[2] = 15;
//numbers[3] = 16;
//numbers[4] = 23;
//numbers[5] = 24;
//Console.WriteLine(numbers[1]);
int[] numbers = { 4, 8, 15, 16, 23, 24, 26 };
for (int i = 0; i < numbers.Length; i++)
{
Console.WriteLine(numbers[i].ToString());
}
string[] names = { "Bob", "Steve", "Brian", "Chuck" };
foreach (string name in names)
{
Console.WriteLine(name);
}
string myText = "Now is the time for all good men to come to the aid of their country.";
char[] charArray = myText.ToCharArray();
// Array.Reverse(charArray);
foreach (char myChar in charArray)
{
Console.WriteLine(myChar);
}
Console.ReadLine();
}
}
}
WriteLine and Write are different. The WriteLine method quite literally does just that. Since the array is a character array, each letter is a new position in the array. When the loop runs, this is what happens in 'pseudo-code':
charArray position 1
write character to console
newline
charArray position 2
write character to console
newline
.................and so on
Console.Write() method does the same without adding a new line after each loop
tl;dr
line 47, change to:
Console.Write(myChar);
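The same distinction exists between print and println in Java, which gives a compact way to see the behavior described above; a small sketch:

```java
public class WriteVsWriteLine {
    public static void main(String[] args) {
        char[] chars = {'a', 'b', 'c'};
        // println appends a newline after each character: one char per line.
        for (char c : chars) System.out.println(c);
        // print does not: the characters run together on one line.
        for (char c : chars) System.out.print(c);
        System.out.println();
    }
}
```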
Thanks Scarface. I guess I made a mistake in assuming that he wrote WriteLine.
I too used to be a big "Lost" fan
Hi Bob,
Your videos are really great. I have been watching them for the past two years, and they help me a lot as a beginner. Keep it up, man... You are the best!!
Hi Bob, thank for this course, you know how to explain in a clear way.
someone never told you that look like A Team's Hannibal Smith? :)
I also enjoy the Lost citation :)
Sir, how can I insert values into an array as we do in the C language? Or if there is a better way, please share the solution.
int i,a[20];
for (i = 0; i < 6; i++)
{
scanf("%d",&a[i]);
}
That is how I insert values in this example. I want to insert values into a C# array (int[] array = new int[20]) using loops, so please provide a solution.
@Aniket Sharma: It has been a long time since I've done a C console application so I'm going to do a bit more simple of a sample with strings rather than integers. Also you have to watch your language when you're using terms like "inserting" into arrays. That could be taken as you want to change your array size and actually append new items into it rather than just assign a value.
string[] consoleData = new string[20];
for (int i = 0; i < 6; i++)
{
    consoleData[i] = Console.ReadLine();
}
for (int i = 0; i < 6; i++)
{
    Console.WriteLine(consoleData[i]);
}
Thank you so much. Without the foreach loop I get the same values as yours, but with a foreach I get an endless stream of characters.
Thank you so much for helping us to get the concepts C#
Hi Bob,
I am a beginner, and I don't have to say much about the series; as you already know, it is really awesome!! It helps me a lot. Whenever I complete a video my enthusiasm and excitement increase, and each video makes me want to learn more and watch the next one. You must be an awesome developer, but at the same time you are a very great teacher too!! Not everyone with so much knowledge communicates so well and shares it for free. Hats off to your skills and to you. You have become my icon, not only because of your skills but because I believe you are a very good human being.
Thanks a lot Bob Sir for the series.
Bob, you've got a rare talent for teaching, Sir!
I've got a question: what is value of "Length" in "numbers.Length"? If it is 5 as we assigned to array then "i < numbers.Length" should printout first four numbers: 4,8,15,16, skipping last 23, as 5<5 is false. What I am missing here?
Thank you
Thank U, Clint.
Got it now. What I was forgetting is that iteration starts at i = 0!
How to view or download "Source Code" from Microsoft Virtual Academy ? it's not working!
Microsoft Virtual Academy videos are not working on Google chrome Browser!
please help?
Hi Bob,
Thanks for the videos. I have one question about this one. In the end you use the Write technique as opposed to the WriteLine technique. As I was copying your code I used WriteLine by accident, but seems to give me the same result. I expected all the separate characters on separate lines but somehow that did not occur. Can you maybe clarify this?
Thanks!
Great stuff Bob thanks for your time!!!
After writing the 1st block of code, instead of commenting out the Console.WriteLine, I simply modified it with the code you had provided and added the "for statement", but it would not run. I copied and pasted it inside the curly braces and it works. Fun to get in and play with it.
Hey Bob, your tutorials are awesome. I have been going through them frequently, as I am new to C#, and I find the language interesting because of the way you have explained it. I am working hard to get through them, since it is a requirement for my job. When I first heard I would have to work on .NET I was reluctant, but now that I have found a good teacher like you, I know it is going to go well.
You're doing a great job! Happy days!
God bless; have a blessed life :)
Working my way through the series.
Began programming in the 70's. Never liked C, C++ or VB.
Acquired familiarity with C# through many developer events, but never could tolerate a thorough introduction - until I found this series (as you recommended it as a prerequisite for MS Store app development).
Best tutorial series I've seen for any programming language!
Still don't like C#; but can take it in 34 expertly presented portions. 9 down, 25 to go.
Thanks.
Thank you so much for this series. There is 1 thing I kinda miss: A task after each lesson to use what you already showed in the lesson?
Also since I kinda need to understand the logic behind this I did a small program mixing the lessons a bit:
{
    Console.Write("Please type in the text you want to reverse and press Enter: ");
    string userInput = Console.ReadLine();
    char[] userText = userInput.ToCharArray();
    Array.Reverse(userText);
    foreach (var userOut in userText)
    {
        Console.Write(userOut);
    }
    Console.ReadLine();
}
Again, thank you so much for this series. Finally I'm going to be able to code C# [H]
How can one express the equality of two different arrays? For example, say I had two arrays, one named arrayTest and one named arrayCheck. Could I do something like if (arrayTest == arrayCheck)? If not, what could I use?
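To the question above: with arrays, == compares references (are these the very same object?), not contents, so it returns false for two arrays that hold identical values. In Java the content check is java.util.Arrays.equals; in C# the analogous check is Enumerable.SequenceEqual. A sketch:

```java
import java.util.Arrays;

public class ArrayEquality {
    public static void main(String[] args) {
        int[] arrayTest  = {4, 8, 15};
        int[] arrayCheck = {4, 8, 15};
        // Reference comparison: false, they are two distinct objects.
        System.out.println(arrayTest == arrayCheck);
        // Element-by-element comparison: true.
        System.out.println(Arrays.equals(arrayTest, arrayCheck));
    }
}
```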
Thanks for the video series, seems to be a great way to step into coding C#. I have skirted the edges of Visual Studio for years with work inside MS SQL development. I enjoy modifying the examples to extend my knowledge. I wondered what else Array could do besides .Reverse so I typed it and the list of attributes was long. I chose Sort and added this line (in different places) to get some additional gibberish: Array.Sort(charArray); [H]
Hi Bob,
This is one the best programming series I have come across. So inspiring so enriching. I look forward to the next level. Thanks Channel 9 Team.
Awesome man ... :)
Mr.Bob you are doing well. and i learnt much from your videos. do you have free videos of OOPS concept. That will be much helpful to me. please. If yes please share the link and if not tell me the procedure to learn oops concept. please HELP!!
Almost every single web application you will ever make will seriously benefit from using servlet filters to both cache and compress content. A caching filter optimizes the time it takes to send back a response from your web server, and a compression filter optimizes the size of the content that you send from your web server to a user via the Internet. Since generating content and sending content over the World Wide Web are the bread and butter of web applications, it should be no surprise that simple components that aid in these processes are incredibly useful. This article details the process of building and using a caching filter and a compression filter that are suitable for use with just about any web application. After reading this article, you will understand caching and compressing, have code to do both, and be able to apply caching and compression to any of your future (or existing!) web applications.
Servlet filters are powerful tools that are available to web application developers using the Servlet 2.3 specification or above. Filters are designed to be able to manipulate a request or response (or both) that is sent to a web application, yet provide this functionality in a manner that won't affect the servlets and JSPs used by the web application unless that is the desired effect. A good way to think of servlet filters is as a chain of steps that a request and response must go through before reaching a servlet, JSP, or static resource such as an HTML page in a web application. Figure 1 shows the commonly used illustration of this concept.
Figure 1. The servlet filter concept
The large gray box is a web application that has some endpoints, such as JSP, and some filters applied to intercept all requests and responses. The filters are shown in a stack, three high, that each request and response must pass through before reaching an endpoint. At each filter, custom Java code would have a chance to manipulate the request or response, or anything that has to do with either of those objects.
Understand that a user's request for a web application resource can be forced to go through any number of filters, in a given order, and any of the filters may manipulate the request, including stopping it altogether, and respond in a variety of different ways. This is important to understand because later in this article, two filters will be presented that manipulate the HttpServletRequest and HttpServletResponse objects to provide some very convenient functionality. Don't worry if you don't know anything about coding a filter -- it would certainly help if you understood the code, but in the end of the article, all of the code is provided in a JAR that can easily be used without knowing a thing about how it was made.
Before moving on, if you would like to learn more about the basics of servlet filters, I suggest checking out Servlets and JavaServer Pages; the J2EE Web Tier. It is a Servlet 2.4 and JSP 2.0 book I co-authored with Kevin Jones, and the book provides complete coverage of servlets and servlet filters, including the two filters presented later in this article. It would be nice if you bought the book, but the chapters on servlets and filters will soon be available for free on the book support site -- if they're online already, read away.
Compression is the act of removing redundant information, representing what you want in as little possible space. It is incredibly helpful for sending information across the World Wide Web, because the speed at which people get information from a web application is almost always dependent on how much information you are trying to send. The smaller the size of your information, the faster it can all be sent. Therefore, if you compress the content your web application generates, it will get to a user faster and appear to be displayed on the user's screen faster. In practice, simply applying compression to a decent-sized web page often results in saving several seconds of time.
Now, the theory is nice, but the practice is nicer. This theoretical compression isn't something you have to labor over each time you go to code a servlet, JSP, or any other part of a web application. You can obtain very effective compression by having a servlet filter conditionally pipe whatever your web application produces to a GZIP-compressed file. Why GZIP? Because the HTTP protocol, the protocol used to transmit web pages, allows for GZIP compression. Why conditionally? Because not every browser supports GZIP compression, but almost every single modern web browser does. If you blindly send GZIP-compressed content to an old browser, the user might get nothing but gibberish. Since checking for GZIP compression support is nearly trivial, it is no problem to have a filter send GZIP-compressed content to only those users that can handle it.
Source Code
Download jspbook.zip for all of the source code found in this article. Also download jspbook.jar for the ready-to-use JAR with compiled versions of both the cache and compression filter.
I'm saying this GZIP compression stuff is good. But how good? GZIP compression will usually get you around a 6:1 compression ratio; it depends on how much content you are sending and what the content is. In practice, this means you will send content to a user up to six times faster if you simply use GZIP compression whenever you can. The only trick is that you need to be able to convert normal content in to GZIP-compressed content. Thankfully, the standard Java API provides code for doing exactly this: the java.util.zip package. The task is as easy as sending output sent in a web application's response conditionally through the java.util.zip.GZIPOutputStream class. Here is some code for doing exactly that.
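Before wiring compression into a filter, it helps to see the step on its own. This is a minimal, self-contained round trip through java.util.zip, with no servlet machinery; the filter's response wrapper pipes output through exactly this kind of stream:

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.util.zip.GZIPInputStream;
import java.util.zip.GZIPOutputStream;

public class GzipRoundTrip {
    public static void main(String[] args) throws Exception {
        String page = "<html>... a typical, repetitive HTML response ...</html>";

        // Compress: anything written to gzip lands in baos as GZIP bytes.
        ByteArrayOutputStream baos = new ByteArrayOutputStream();
        GZIPOutputStream gzip = new GZIPOutputStream(baos);
        gzip.write(page.getBytes("UTF-8"));
        gzip.close(); // finishes the stream and writes the GZIP trailer

        byte[] compressed = baos.toByteArray();

        // Decompress to verify the round trip.
        GZIPInputStream gunzip =
            new GZIPInputStream(new ByteArrayInputStream(compressed));
        ByteArrayOutputStream out = new ByteArrayOutputStream();
        byte[] buf = new byte[1024];
        int n;
        while ((n = gunzip.read(buf)) > 0) out.write(buf, 0, n);

        System.out.println(out.toString("UTF-8").equals(page)); // true
    }
}
```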
As with most every filter, three classes are needed to do the job. A customized implementation of the javax.servlet.Filter interface, a customized implementation of the javax.servlet.ServletOutputStream class, and a customized implementation of the javax.servlet.http.HttpServletResponse class. Full source code for these three classes is provided at the end of the article; for now I will focus only on the relevant code. First, a check needs to be made if a user has support for GZIP-compressed content. This check is best done in the implementation of the Filter class.
...
public class GZIPFilter implements Filter {
// custom implementation of the doFilter method
public void doFilter(ServletRequest req,
ServletResponse res,
FilterChain chain)
throws IOException, ServletException {
// make sure we are dealing with HTTP
if (req instanceof HttpServletRequest) {
HttpServletRequest request =
(HttpServletRequest) req;
HttpServletResponse response =
(HttpServletResponse) res;
// check for the HTTP header that
// signifies GZIP support
String ae = request.getHeader("accept-encoding");
if (ae != null && ae.indexOf("gzip") != -1) {
System.out.println("GZIP supported, compressing.");
GZIPResponseWrapper wrappedResponse =
new GZIPResponseWrapper(response);
chain.doFilter(req, wrappedResponse);
wrappedResponse.finishResponse();
return;
}
chain.doFilter(req, res);
}
}
Information about GZIP support is conveyed using the accept-encoding HTTP header. This header can be accessed using the HttpServletRequest object's getHeader() method. The conditional part of the code need be nothing more than an if statement that either sends the response as is or sends the response off to be GZIP compressed.
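The header test itself is plain string inspection, independent of the servlet API. A small sketch of just that check (a stricter implementation would also honor q-values such as "gzip;q=0", which this simple contains-test ignores):

```java
public class AcceptEncodingCheck {
    // Returns true only if the client advertised gzip support.
    static boolean supportsGzip(String acceptEncoding) {
        return acceptEncoding != null && acceptEncoding.indexOf("gzip") != -1;
    }

    public static void main(String[] args) {
        System.out.println(supportsGzip("gzip, deflate, br")); // true
        System.out.println(supportsGzip("identity"));          // false
        System.out.println(supportsGzip(null));                // false
    }
}
```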
The next important part of the GZIP filter code is compressing normal content with GZIP compression. This code occurs after the above filter has found that the user does know how to handle GZIP-compressed content, and the code is best placed in a customized version of the ServletOutputStream class. Normally, the ServletOutputStream class handles sending text or non-text content to a user while ensuring appropriate character encoding is used. However, we want to have the ServletOutputStream class send content through a GZIPOutputStream before sending it to a client. This can be accomplished by overriding the write() methods of the ServletOutputStream class to GZIP-compress content before sending it off in an HTTP response.
...
public GZIPResponseStream(HttpServletResponse response)
throws IOException {
super();
closed = false;
this.response = response;
this.output = response.getOutputStream();
baos = new ByteArrayOutputStream();
gzipstream = new GZIPOutputStream(baos);
}
... {
System.out.println("writing...");
if (closed) {
throw new IOException(
"Cannot write to a closed output stream");
}
gzipstream.write(b, off, len);
}
...
There are also a few loose ends to tie up, such as setting the content's encoding to be the MIME type for GZIP, and ensuring the subclass of ServletOutputStream has implementations of the flush() and close() methods that work with the changes to the write() methods. However, all of these changes are minor, and you may see code that does them by looking at the source code provided at the end of this article. The most important point to understand is that the filter has altered the response to ensure that all content is GZIP compressed.
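One of those loose ends is worth calling out: GZIPOutputStream buffers data in its deflater, so the wrapper's cleanup path must call finish() (or close()) before the compressed bytes are complete; until then, the GZIP trailer (CRC and size) has not been written. A standalone sketch of the effect:

```java
import java.io.ByteArrayOutputStream;
import java.util.zip.GZIPOutputStream;

public class FinishMatters {
    public static void main(String[] args) throws Exception {
        byte[] data = "Hello, compressed world!".getBytes("UTF-8");

        ByteArrayOutputStream baos = new ByteArrayOutputStream();
        GZIPOutputStream gzip = new GZIPOutputStream(baos);
        gzip.write(data);

        int before = baos.size(); // most bytes still sit in the deflater
        gzip.finish();            // flush remaining data and write the trailer
        int after = baos.size();

        // Only after finish() is the stream a complete, valid GZIP entity.
        System.out.println(before < after); // true
    }
}
```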
Test out the above code by grabbing a copy of jspbook.jar, which includes compiled classes of the GZIP filter, and putting the JAR in the WEB-INF/lib directory of your favorite web application. Next, deploy the com.jspbook.GZIPFilter class to intercept all requests to resources ending with ".jsp" or anything that produces HTML. Reload the web application for the changes to take effect. The GZIP filter should now be automatically compressing all responses, if the user's browser supports GZIP compression.
I am looking to expand my JS code understanding down the path of OOP and I had some questions not only about performance but style.
To give an example, my Developer Tools: Dialog script (on userscripts.org) creates what is referred to as a singleton--a single instance that in this case can only be referenced under the "devtools.dialog" name. It is called (regardless of the script) by calling "devtools.dialog.method();". Once that method is called, it stores the ID that was used into an array.
The problem is this: Let's say Script 1 calls the method and the ID is stored in that array. Script 2 calls the method and the ID is stored in the same array as Script 1's. When Script 1 looks at the array, it sees the IDs from when it called the method, but also from when Script 2 called the method.
What I would like to do: Each script should be able to make a new instance (therefore having multiple instances) which is self-contained, with only that instance's settings.
From what I have looked at so far, I could do something like this which would allow for multiple instances, private variables/methods and of course public variables/methods:
// Define namespace.
if (typeof Container == 'undefined') {
Container = {};
}
// Set up the constructor.
Container.MyPlugin = function () {
// Private settings for this instance.
var settings = [...];
// Privileged function to get private settings.
// From my understanding, this is added to memory with every instance. More instances = more functions added to memory.
this.getSettings = function () {
return settings;
}
};
// Public function for the plugin. Can't access the private settings; must use the privileged method.
// From my understanding, this is added to memory only once no matter how many instances are made.
Container.MyPlugin.prototype.doSomething = function () {
// Some code here.
};
// First instance.
var plugin = new Container.MyPlugin();
// Second instance, which means completely seperate settings.
var plugin2 = new Container.MyPlugin();
plugin.settings; // private...undefined
plugin.getSettings(); // privileged...returns array
plugin.doSomething(); // public...can't access private stuff
Are there any better/more efficient ways of doing this type of thing? The above example wasn't tested, it is just so I can get a better understanding while making it as simple as possible.
I can't see anything wrong in your approach.
If you are really worried about the functions defined inside your 'constructor' consuming more memory, the only alternative I can see is dropping the idea of keeping access to them private. How many instances of these objects are you going to create, such that memory would be a concern?
I guess it depends on what is more important: lowering memory usage or preventing other code from accessing data private to that object. In your example, the other functions defined on the object's prototype, like doSomething(), won't be able to access the settings variable directly either (unlike in some other OO languages); they will have to use the getSettings() function like everything else.
05 May 2011 19:56 [Source: ICIS news]
HOUSTON (ICIS)--Flooding along the Mississippi river has further disrupted the feedstock supply used to make methyl methacrylate (MMA) at Lucite's plant.
Lucite previously declared force majeure (FM) on MMA on 21 April because of supply disruptions of a key feedstock at the plant.
Lucite did not identify the feedstock nor the company that supplies it.
However, bad weather has extended the shutdown of a third-party supplier’s plant by approximately two weeks. As a result, Lucite said that it would be unable to meet allocation levels last communicated to customers.
MMA shipments scheduled to leave Lucite on or before Saturday will be made. However, shipments will otherwise cease for two weeks as of 8 May, the company told customers.
MMA is a key raw material in the production of normal butyl methacrylate, iso-butyl methacrylate, ethyl methacrylate and 2-ethyl hexyl methacrylate. As a result, supplies of these products also will cease for approximately two weeks, according to the company.
Lucite said it would revise allocation levels for all affected products following the restart of MMA production at its
Most
MMA is used in many applications such as paints and coatings, adhesives, inks, acrylic sheet and polymethyl methacrylate (PMMA).
Make a JAR executableTag(s): Environment
In the manifest file of a JAR, it is possible to specify the class to be used when the JVM is lauched with the JAR as parameter. The class must have a main().
Try with this simple class
import java.awt.*;
import java.awt.event.*;

public class MyClass {
    public static void main(String[] args) {
        Frame f = new Frame();
        f.addWindowListener(new WindowAdapter() {
            public void windowClosing(WindowEvent e) {
                System.exit(0);
            }
        });
        f.add(new Label("Hello world"));
        f.setSize(200, 200);
        f.setVisible(true);
    }
}
Manifest-Version: 1.0
Main-Class: MyClass
Class-path: .
The Class-path is used to specify the dependencies of this jar (if any). You add the directories and jars separated by a space. It is important because when running a jar, the CLASSPATH definition (as defined by the environment variable) is overridden.
Next, you include the manifest file in the JAR (MyJar.jar) with the MyClass class.
jar cvfm myjar.jar manifest.mft *.class
Then you can launch it with:

java -jar myjar.jar
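As a quick sanity check, the JDK's own java.util.jar.Manifest class can parse a manifest and report its main attributes. The sketch below is illustrative only: it builds the manifest text in memory rather than reading MANIFEST.MF out of the jar, and the class name ManifestCheck is invented for the example. Note that the parser drops a final header that lacks a trailing newline.

```java
import java.io.ByteArrayInputStream;
import java.util.jar.Attributes;
import java.util.jar.Manifest;

public class ManifestCheck {
    public static void main(String[] args) throws Exception {
        // Same three headers as the manifest above; the trailing
        // newline on the last header is required by the parser.
        String text = "Manifest-Version: 1.0\n"
                    + "Main-Class: MyClass\n"
                    + "Class-Path: .\n";

        Manifest mf = new Manifest(new ByteArrayInputStream(text.getBytes("UTF-8")));
        Attributes main = mf.getMainAttributes();

        // Attribute-name lookup is case-insensitive, so "Class-path"
        // in the manifest above and "Class-Path" here are equivalent.
        System.out.println(main.getValue("Main-Class"));   // MyClass
        System.out.println(main.getValue("Class-Path"));   // .
    }
}
```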
The file association mechanism is set up during the installation of the JRE.
You can verify that everything is ok with the following command from a DOS Shell
>assoc .jar
.jar=jarfile
>ftype jarfile
jarfile="C:\Program Files\Java\jre1.5.0_10\bin\javaw.exe" -jar "%1" %*
If the association is broken or you need to change the JRE used then by using the assoc/ftype utilities, you can modify the association easily (use /? to display the syntax of the assoc and ftype utilities).
NOTE: On WinXp (or better), your user account needs to be at the Admin level.
On Windows (NT or better), you can also make JARs run from the command-line by setting the PATHEXT environment variable, for example
SET PATHEXT=.EXE;.BAT;.CMD;.JAR
https://rgagnon.com/javadetails/java-0166.html
According to EGFI's deputy CEO Arash Shahraini, Banco Exterior de Cuba
will start repaying the outstanding loan next month, and has with the
agreement committed to settle the full loan by 2019.
"The debt relates to a credit line extended to Cuba 10 years ago to
import goods and services from Iran," he tells GTR.
The credit, he says, was given to public entities in Cuba, mainly in the
agriculture, equipment and medicine sectors. The loan was backed by the
Cuban government's sovereign guarantee and EGFI covered the repayment.
But the loan was not repaid on time, partly due to the economic
sanctions against Iran. EGFI would not reveal the amount of Cuba's debt
to Iran, but according to the Iranian news site the Financial Tribune,
Cuba has a remaining debt of about €43mn to Iran, €50mn with interest.
The agreement to restructure Cuba's loan forms part of a wider strategy
to strengthen ties between the two countries. Last week Iranian
President Hassan Rouhani made an official visit to Havana, where he met
with Cuba's President Raul Castro and former president and revolutionary
leader Fidel Castro.
According to Shahraini, Iranian exports to Cuba amounted to US$2mn
during the previous Iranian fiscal year. He says the sanctions against
Iran, as well as Cuba's payment default, have affected the volume of
trade between the two countries. "But now we are reviewing our cover
policy on Cuba," he says. "The two countries have good political and
economic relations and they are interested in expanding their relations
especially in economic fields."
Shahraini adds that EGFI is working to pave the way for the presence of
more Iranian companies in Cuba, especially those active in the export of
techno-engineering services who are looking to take part in
infrastructure projects. "There are a lot of Iranian companies who are
very capable in implementing projects such as dams, power plants, roads,
housing etc., who are now engaged in such projects in different parts of
the world," he says.
The lifting of the economic sanctions against the country in January
this year has allowed Iran to revive and establish links with foreign
banks and export credit agencies, and to recover its funds globally, but
the road to full financial transfer capacity remains long.
Source: Iran agrees to restructure Cuba's debt | Global Trade Review
(GTR) -
Thursday, September 29, 2016
Iran agrees to restructure Cuba's debt
https://humanrightsincuba.blogspot.com/2016/09/iran-agrees-to-restructure-cubas-debt.html
Skinning selected DataGrid header in Flex 4
Groovee2010, Jul 22, 2010 2:49 PM
Hi,
What I'm trying to do seems trivial, but after hours of searching for historical clues I have still not achieved it. I simply want to reskin only the selected header. There seems to be multiple approaches:
1) define a custom headerRenderer - but this ends up with the sort indicator cobbled on top of your custom drawn area
2) rework the header background skin to clip/draw the selected column differently - but this requires clipping to the selected column, and figuring out the dimensions to clip to
Am I missing an obvious and easier solution? Any examples out there?
Thanks!
1. Re: Skinning selected DataGrid header in Flex 4
Flex harUI, Jul 22, 2010 4:34 PM (in response to Groovee2010)
When you say "selected" header, are you talking about the one that is now
the sort column or the one selected during dragging or something else?
2. Re: Skinning selected DataGrid header in Flex 4
Groovee2010, Jul 22, 2010 4:46 PM (in response to Flex harUI)
Hi Alex,
Sorry for the ambiguity. I wish to identify the current sort column with a colored header in addition to the sort indicator.
Thanks for any assistance.
3. Re: Skinning selected DataGrid header in Flex 4
Groovee2010, Jul 29, 2010 8:26 AM (in response to Groovee2010)
So Alex, any hints on how to do that?
Thanks!
4. Re: Skinning selected DataGrid header in Flex 4
Flex harUI, Jul 29, 2010 8:55 AM (in response to Groovee2010; 1 person found this helpful)
I would make a custom header renderer. It can check the
owner.dataProvider.sort to see if it should have different visuals.
5. Re: Skinning selected DataGrid header in Flex 4
Groovee2010, Jul 29, 2010 9:48 AM (in response to Flex harUI)
Thanks. One last question here, I hope.
When I draw a gradient within my custom HeaderBackGroundSkin for the entire width of the grid, the default sort indicators properly sit on top of it.
But when I specify a custom Header for each column, it is only given the width not used by the sort indicator (when present), so my highlighted gradient does not fill the entire column header.
Is the solution to similarly skin the sort arrow? It seems that this should be easier (especially since it's not even really Flex 4 is it?). But mostly I was surprised to not find an example of somebody already doing it since it seems like such a common desire.
Thanks again.
6. Re: Skinning selected DataGrid header in Flex 4
Flex harUI, Jul 29, 2010 10:09 PM (in response to Groovee2010; 1 person found this helpful)
Folks have done this before. The reason it is hard is because DG is not a
Spark component, so skinning is unpredictable.
I'm pretty sure the way folks do it is by drawing not within your given
size but based on the column's width. You can draw outside your bounds and
it won't get clipped. The column is the header renderer's data object.
7. Re: Skinning selected DataGrid header in Flex 4
Groovee2010, Jul 30, 2010 12:26 AM (in response to Flex harUI)
Thanks again Alex.
I managed to achieve something satisfactory with the following custom HeaderRenderer. The negative padding on the right side allows the gradient to extend underneath the sort skin to fill the entire column. This seemed easier (if not cleaner) than also skinning the sort arrow skin in the same way.
I also had to extend DataGridHeader and override drawHeaderIndicator (for the rollover of the column header) and drawSelectionIndicator (for the transitionary state of clicking on the column header before the sort takes effect).
Still a work in progress, but hopefully this will save the next Googler some time in achieving any of these goals. And of course I welcome any refinements or any admonishments that I'm doing something really stupid.
<?xml version="1.0" encoding="utf-8"?>
<s:Group implements="mx.controls.listClasses.IListItemRenderer"
         xmlns:fx="http://ns.adobe.com/mxml/2009"
         xmlns:s="library://ns.adobe.com/flex/spark"
         xmlns:mx="library://ns.adobe.com/flex/halo"
         width="100%" height="100%">
<![CDATA[
import com.company.player.model.Constants;
import mx.collections.ArrayCollection;
import mx.controls.DataGrid;
import mx.controls.dataGridClasses.DataGridColumn;
import mx.controls.listClasses.IListItemRenderer;
import mx.controls.listClasses.ListBase;
private var _data:Object;
static private const LABEL_BUFFER:int = 4;
static private const SORT_INDICATOR_WIDTH:int = 14;
override protected function updateDisplayList(unscaledWidth:Number, unscaledHeight:Number):void
{
// use a distinct local name to avoid shadowing the inherited 'owner' property
var dg:DataGrid = owner as DataGrid;
var col:DataGridColumn = data as DataGridColumn;
var bSorted:Boolean = false;
var ac:ArrayCollection = dg.dataProvider as ArrayCollection;
if (ac && ac.sort)
{
// there's a sort in place - is it on this column?
var sortFieldName:String = ac.sort.fields[0].name;
bSorted = (sortFieldName == col.dataField);
}
gradientOverlay.alpha = bSorted ? 1 : 0;
gradientOverlay.visible = bSorted;
lbl.width = col.width - LABEL_BUFFER - (bSorted ? SORT_INDICATOR_WIDTH : 0);
lbl.text = col.headerText;
lbl.setStyle(
"color", bSorted ? "#343434" : "#767676");
super.updateDisplayList(unscaledWidth, unscaledHeight);
}
]]>
</fx:Script>
<s:HGroup id="gradientOverlay">
    <s:Rect>
        <s:fill>
            <s:LinearGradient>
                <s:GradientEntry/>
                <s:GradientEntry/>
            </s:LinearGradient>
        </s:fill>
    </s:Rect>
</s:HGroup>
<s:HGroup>
    <s:Label id="lbl"/>
</s:HGroup>
</s:Group>
https://forums.adobe.com/thread/685714
Some significant syntax changes
geek, internet, opensource, programming, spark, tech
July 20th, 2008
There are a bunch of updates related to Spark so I think I’ll just run through them in whatever order.
First, the documentation and downloads area have moved to a Drupal cms at instead of using the Trac site’s built-in wiki.
There’s a new release of Spark view engine available on the download page.
The first thing to watch out for is a breaking change: support for the $expression; syntax has been removed due to collisions with normal prototype and jquery usage. Anyplace that was using that syntax will need to change to ${expression} but beyond that any valid csharp expression still works to produce output.
Another requested change from Tim Schmidt has been made to support single-quoted string literals when csharp appears in a spark file. That’s to allow xml attributes to use double quotes even though they contain expressions. For example, the code below that writes out a string if a product is in stock:
<ul>
  <li each="var product in Products" class="${product.Quantity != 0 ? 'instock' : 'outofstock'}">
    ${product.Name}
  </li>
</ul>
The quotes are changed when the generated class is produced. The generated class contains the following:
Output.Write(product.Quantity != 0 ? "instock" : "outofstock");
An even more significant enhancement has been made to the conditional syntax. This is based on some excellent feedback and ideas from Pablo Blamirez a few weeks back.
<test if="!user.IsLoggedIn"> <p>Please sign in. ${Form.FormTag(new { <p>Administrator ${user.Name} etc.</p> <else/> <p>Hello, ${user.Name}.</p> </test>
Note the empty <else/> elements. The previous formats are still supported, so you could use <if condition=""> instead of <test if="">, and the else elements may also be used in a way where they follow the original test. So the following is equivalent.
<test if="!user.IsLoggedIn"> <p>Please sign in. ${Form.FormTag(new { <p>Administrator ${user.Name} etc.</p> </else> <else> <p>Hello, ${user.Name}.</p> </else>
In the end it’s a stylistic preference.
July 20th, 2008 at 9:45 pm
Excellent! I’m already using the new syntax and I’m digging it. I was busy today or I would’ve gotten the namespace patch to you. I’ll get to it at some point in the next couple days.
http://whereslou.com/2008/07/20/some-significant-syntax-changes
I want to develop code such that when the user clicks on "forgot password", the next page lets him enter his mobile number, and the password is then sent to that mobile number.
Thanks in advance
Nag Raj
The complete code of forgot password action is given below...
The password forgot Action is invoked
http://www.roseindia.net/tutorialhelp/comment/23451
This is the mail archive of the gcc-patches@gcc.gnu.org mailing list for the GCC project.
On Fri, Feb 16, 2018 at 11:31:15AM +0100, Richard Biener wrote:
> After Jakubs fixes we can restore bootstrap with GCC 4.2.1 as provided
> by older Darwin hosts with the following patch.
>
> Bootstrap using a GCC 4.2.1 host compiler on x86_64-unknown-linux-gnu
> is now in stage2.
>
> Ok for trunk?
>
> Thanks,
> Richard.
>
> 2018-02-16  Richard Biener  <rguenther@suse.de>
>
> 	PR bootstrap/82939
> 	* line-map.c (linemap_init): Avoid broken value-init when compiling
> 	with GCC 4.2.
>
> Index: libcpp/line-map.c
> ===================================================================
> --- libcpp/line-map.c	(revision 257728)
> +++ libcpp/line-map.c	(working copy)
> @@ -344,7 +344,12 @@ void
>  linemap_init (struct line_maps *set,
>  	      source_location builtin_location)
>  {
> +#if __GNUC__ == 4 && __GNUC_MINOR__ == 2

Please add && !defined (__clang__), because clang claims to be exactly
GCC 4.2 in all versions and as clang is system compiler these days on
Darwin and some BSDs, it is common enough we avoid the memset for those,
the memset is pedantically incorrect, just happens to work fine with
older GCCs where the value initialization doesn't.

Ok with that nit fixed.

> +  /* PR33916, needed to fix PR82939.  */
> +  memset (set, 0, sizeof (struct line_maps));
> +#else
>  *set = line_maps ();
> +#endif
>  set->highest_location = RESERVED_LOCATION_COUNT - 1;
>  set->highest_line = RESERVED_LOCATION_COUNT - 1;
>  set->location_adhoc_data_map.htab =

	Jakub
https://gcc.gnu.org/legacy-ml/gcc-patches/2018-02/msg00973.html
On Sun, 26 Mar 2006, Simon McVittie wrote:
> -----BEGIN PGP SIGNED MESSAGE-----
> Hash: SHA1
>
> Forwarding my follow-up to bug 356933: since this was reported on a
> package which no longer exists, my follow-up went to the BTS but not
> to debian-kernel.
>
> Bug 358816 appears to be another report of the same problem.
>
> Date: Thu, 23 Mar 2006 23:55:31 +0000
> From: Simon McVittie <snd-powermac-060323.10.smcv@spamgourmet.org>
> To: Debian Bug Tracking System <356933@bugs.debian.org>
> Subject: linux-2.6: snd-powermac should depend on i2c-powermac
>
> Package: linux-2.6
> Followup-For: Bug #356933
>
> I also experienced the modprobe segfault and kernel Oops reported in bug
> 356933.
>
> snd_pmac_tumbler_init() in sound/ppc/tumbler.c contains:
>
>     #ifdef CONFIG_KMOD
>         if (current->fs->root)
>             request_module("i2c-keywest");
>     #endif /* CONFIG_KMOD */
>
> which presumably at least needs to be amended to "i2c-powermac". A hard
> dependency or a more graceful failure mode would seem to be a better
> solution, though.

The fix is correct; the same patch needs to be applied to
sound/ppc/dacas.c and at least one file in the old OSS dmasound_pmac
driver.

	Michael
https://lists.debian.org/debian-powerpc/2006/03/msg00442.html
Gapp::Manual - An introduction to GUI development with Gapp
Gapp is a framework for creating GUI applications in perl. Gapp is based on Moose and Gtk2-Perl. Gapp brings the post-modern feel of Moose to Gtk+ application development in perl.
Gapp is a Moose-enabled layer over Gtk2-Perl.
Each Gapp::Widget constructs an underlying Gtk2::Widget which is accessible through the
gobject attribute.
You can apply roles and traits to the widgets just like any other Moose based class.
Layouts are extensible classes that are used to define the positioning, spacing, and appearance of widgets in an application. By defining these properties in a single location it is easy to maintain a consistent look and feel across an entire application. There is no need to hand-configure every widget. Making a change to the layout propagates across the entire application.
Actions are reusable blocks of code that know how to display themselves on various widgets. Actions loosely separate business logic and interface design.
Forms keep your widgets and your data synchronised transparently. There is no need to manually move data between the object and the widgets or vice-versa when displaying or saving a form.
Gapp uses both Moose and Gtk2 extensively and assumes that the user has basic understanding of these modules as well. Any of the Gapp documentation that you find lacking is probably covered by these modules.
The documentation for Moose can be found on the CPAN at.
The documentation for Gtk2-Perl can be found at.
The documentation for Gtk+ can be found at.
use Gapp;

$w = Gapp::Window->new(
    properties => {
        title => 'Gapp application',
    },
    signal_connect => [
        [ 'delete-event' => sub { Gapp->quit } ],
    ],
    content => [
        Gapp::Label->new( text => 'Hello World!' ),
    ]
);

$w->show_all;

Gapp->main;
Worth noting is the absence of the
use Gtk2 '-init' line. Gapp calls this for you already.
One of the first things you will notice is that we can define widget properties, connect to signals, and pack widgets all within the constructor of the widget. This lends to code that is cleaner and easier to read and modify.
A Gapp::Widget is used to manage the construction of a Gtk2::Widget.
The Gtk2::Widget is created on the first call to
Gapp::Widget::gobject. Make all configurations to your widget before this happens; any change you make to the Gapp::Widget will not be reflected in the GtkWidget once it has been constructed.
Any properties you set here will be applied to the Gtk+ widget upon construction. You may find valid properties by referencing the corresponding Gtk+ documentation at.
You may connect to signals using the
signal_connect parameter using the format in the example.
You can add widgets to containers using the
content parameter. No formatting options are specified here, just the hierarchy of the widgets. Spacing and other rendering details are resolved by the layout. The layout will be discussed in more detail later in this manual.
In our example program, the call to
gobject was made implicitly by calling
show_all on the window. This is because
show_all is set up to delegate to the Gtk widget's
show_all method. The documentation for the Gapp widget will provide more information on methods that have been setup for delegation.
The layout determines how widgets are displayed on the screen. It has control over things like spacing, alignment, borders, etc. By centralizing the code that determines the appearance of widgets, it is possible to achieve a consistent-looking GUI. By making changes to the layout, you can affect the appearance of your whole application. You can subclass layouts too!
Layouts are referenced using their class names. You can specify which layout to use when constructing your widget. All widgets accept the
layout parameter.
Gapp::Window->new( layout => 'My::Custom::Layout', content => ... );
You should see Gapp::Layout for information on creating layouts.
Actions can be performed and know how to display themselves in menus and on buttons. You can call them directly or connect them to signals.
use Gapp::Actions::Basic qw( Quit );

# call directly
do_Quit;

# connect to signal
$w = Gapp::Window->new;
$w->signal_connect( 'delete-event' => Quit );

# display as menu item
Gapp::MenuItem->new( action => Quit );

# display as button
Gapp::Button->new( action => Quit );
You should see Gapp::Actions for information on creating and using actions.
Apply traits and roles to your widgets to change their behavior!
Gapp::Entry->new( traits => [qw( MyCustomTrait )] );
Advanced form handling allows you to easily get form data from widgets and vice versa. You don't manually need to update each field in the form. To create a form, add the Form trait to any widget.
$form = Gapp::VBox->new(
    traits => [qw( Form )],
    content => [
        Gapp::Entry->new( field => 'user.name' )
    ],
);
Now you can pull values from the form using the stash.
$form->stash->fetch('user.name');
You can also set values in the form using the stash.
$form->stash->store('user.name', 'anonymous' );
You have to call update on the form before changes to the stash will be displayed.
$form->update;
Using a context you can sync data between objects and you form.
$user = Foo::User;
$cx = Gapp::Form::Context->new;
$cx->add(
    'user' => $user,
    writer_prefix => 'set_',
    reader_prefix => '',
);
$form->set_context( $cx );

# update the form from the context
$form->update_from_context;

# update the stash and context
$form->apply
Gapp::Moose provides sugar for creating classes that have widgets as attributes.
package Foo::Bar;
use Gapp::Moose;

widget 'window' => (
    is => 'ro',
    traits => [qw( GappWindow GappDefault )],
    construct => sub {
        title => 'Gapp Application',
        signal_connect => [
            [ 'delete-event' => sub { Gapp->main_quit } ]
        ],
    },
);
Gapp extensions provide added functionality. The GappX:: namespace is the official place to find Gapp extensions. These extensions can be found on the CPAN.
Jeffrey Ray Hallock <jeffrey.hallock at gmail dot com>
Copyright (c) 2011-2012 Jeffrey Ray Hallock. This program is free software; you can redistribute it and/or modify it under the same terms as Perl itself.
Source: http://search.cpan.org/~jhallock/Gapp-0.484/lib/Gapp/Manual.pod
Hello!
I am having real trouble implementing QR code scanning. I tried to use cordova-plugin-qrscanner (here). It scans QR codes reliably, but I cannot figure out how to constrain the preview.
Currently it works as follows:
Upon clicking “Try camera show”, the QRscanner show command is fired. My goal is to get it inside div that says “Camera preview should go here”:
But Instead the preview takes over the whole body, like so:
Code for component:
export default () => {
  useEffect(() => {
    if (Meteor.isCordova) {
      QRScanner.prepare();
    }
  }, []);

  return (
    <div>
      . .
      <div className="upper-center camera-preview">Camera preview should go here</div>
      . .
      <div className="bottom">
        <button onClick={() => Meteor.isCordova && QRScanner.show()}>Try camera show</button>
        <button onClick={() => Meteor.isCordova && QRScanner.hide()}>Try camera hide</button>
        <button onClick={() => Meteor.isCordova && QRScanner.destroy()}>Try camera destroy</button>
      </div>
    </div>
  );
};
Any ideas on how I could solve this? Maybe a different QR scanner Cordova package, or some ideas on how to get this one to work?
For all the answers, thanks in advance! Cheers!
Source: https://forums.meteor.com/t/qr-scan-with-meteor-cordova/55710
/***
 * A class used to signify the occurrence of an error in the creation of
 * a TFTP packet. It is not declared final so that it may be subclassed
 * to identify more specific errors. You would only want to do this if
 * you were building your own TFTP client or server on top of the
 * {@link org.apache.commons.net.t}
 * class if you wanted more functionality than the
 * {@link org.apache.commons.net.t receiveFile()}
 * and
 * {@link org.apache.commons.net.t sendFile()}
 * methods provide.
 * <p>
 * @see TFTPPacket
 * @see TFTP
 ***/

public class TFTPPacketException extends Exception
{
    private static final long serialVersionUID = -8114699256840851439L;

    /***
     * Simply calls the corresponding constructor of its superclass.
     ***/
    public TFTPPacketException()
    {
        super();
    }

    /***
     * Simply calls the corresponding constructor of its superclass.
     ***/
    public TFTPPacketException(String message)
    {
        super(message);
    }
}
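As the class comment suggests, the exception is left non-final so it can be subclassed to identify more specific errors. A minimal sketch of that pattern follows; the subclass name and message are illustrative, and the superclass is inlined so the snippet stands alone without commons-net on the classpath:

```java
// Inlined copy of the superclass so this sketch compiles on its own.
class TFTPPacketException extends Exception {
    public TFTPPacketException() { super(); }
    public TFTPPacketException(String message) { super(message); }
}

// Hypothetical subclass identifying one specific packet error.
class MalformedOpcodeException extends TFTPPacketException {
    public MalformedOpcodeException(int opcode) {
        super("Unrecognized TFTP opcode: " + opcode);
    }
}

public class Demo {
    public static void main(String[] args) {
        try {
            throw new MalformedOpcodeException(42);
        } catch (TFTPPacketException e) {
            // Callers can still catch the general exception type.
            System.out.println(e.getMessage());
        }
    }
}
```

Because the subclass extends TFTPPacketException, existing catch blocks keep working while the error gains a more precise type.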
Source: http://commons.apache.org/proper/commons-net/xref/org/apache/commons/net/tftp/TFTPPacketException.html
Hi, devs.
I'd like to discuss input validation.
I like the validate_doc_update() system, however I think there is some
missing information for it to approve/reject a potential update. Two
examples:
1. A document was updated in a valid way, but this is the 10th time in one
second the update was made.
2. A form was posted to CouchDB but there is no CAPTCHA information to
validate.
Note that these updates, particularly #1, can comprise a denial-of-service
attack against CouchDB, because we can make "valid" updates until the disk
is full.
I have a couple of ideas and would like to hear thoughts.
## reCAPTCHA support
I am thinking about some sort of /_config setting, where you can input some
reCAPTCHA settings (I have not used reCAPTCHA yet so forgive the ignorance.)
Next, maybe there is a /_recaptcha URL or namespace where CouchDB serves
whatever it needs to to add a captcha to forms (maybe saving state in
memory or in a config or database). When a form is submitted, Couch checks
the captcha and sends the pass/fail value to validate_doc_update, either
inside userCtx, secObj, or a new argument. (None seem ideal; I would go
with userCtx or a new argument).
## Rate limiting
CouchDB could keep state about how frequently "clients" are making updates.
(A "client" is not defined. It could be an IP address, or an {IPAddr,
Username} tuple, or something else.)
Then in validate_doc_update, the rate of updates is passed in somehow.
(Again, in userCtx, or maybe a new argument.) Maybe it looks like this:
update_rate:
{ "second": 0
, "minute": 3
, "hour": 5
, "day": 8
, "week": 10
, "year": 50
}
Maybe week and year is overkill. But if your policy is only 10 updates per
minute, then that's easy to enforce now:
if (userCtx.update_rate.minute > 10)
    throw {forbidden: "Too many updates in the past minute"}
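Put together, a design document's validation function using that data might look like the sketch below. Note the update_rate field on userCtx is the *proposed* extension being discussed here, not an existing CouchDB API — today CouchDB only passes (newDoc, oldDoc, userCtx, secObj):

```javascript
// Sketch of a validate_doc_update enforcing the proposed rate limits.
// userCtx.update_rate is hypothetical; the thresholds are examples.
function validate_doc_update(newDoc, oldDoc, userCtx, secObj) {
  var rate = userCtx.update_rate;
  if (rate && rate.minute > 10) {
    throw {forbidden: "Too many updates in the past minute"};
  }
  if (rate && rate.day > 500) {
    throw {forbidden: "Daily update quota exceeded"};
  }
  // ...normal document validation continues below...
}
```

The function stays a pure policy check: the server would gather the counters, and the design doc only decides whether the observed rate is acceptable.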
## Banning
This is a more distant plan. I have always loved fail2ban.
Maybe Couch has a /_config setting to ban "clients" (as defined above) if
they fail validation too much. Bans could have several features:
* Temporary, expires automatically after some time
* Perhaps immediately reject the client for all writes, and maybe reads too
* Maybe some kind of "bog" where responses take 29 seconds to return
* Maybe a "hellban" where we return 200 but silently reject the update
* Maybe some whitelists so that trusted clients can still replicate to you
regardless of validation failures. (Some people may use validation as a
server-side replication filter.)
Source: http://mail-archives.apache.org/mod_mbox/couchdb-dev/201303.mbox/%3CCAN-3CBJ7b_32Apx7Hk_GJgFTz49mc1LDN_i6TXTDEgnS=scM1g@mail.gmail.com%3E
Do you use Bootstrap and Django together? If so, I have two powerful template filters which may come in handy when developing your Bootstrap-enabled project. One filter turns a BooleanField result into a matching Bootstrap icon. The other filter can be used to display a specific number of stars, or any icon you can think of; I use this for rating stars.
If you do not have one already, create a new template library, which is basically a templatetags Python package inside your app. In this Python package, create a new Python file which will contain the filters that you can load into your templates. In this file, enter the following code:
from django import template
from django.utils.safestring import mark_safe

register = template.Library()

@register.filter
def yesnoicon(value):
    # icon-ok / icon-remove are the usual Bootstrap 2 yes/no classes;
    # the original post's body was garbled here, so they are reconstructed.
    icon = 'icon-ok' if value else 'icon-remove'
    return mark_safe('<i class="%s"></i>' % icon)

@register.filter
def ratingicon(value):
    return mark_safe('<i class="icon-star"></i>' * value)
There you have it, now in your templates you can load the template library and use the filters like so:
{% load bootstrap_filters %}

{{ object.is_available|yesnoicon }}
{{ object.rating|ratingicon }}
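Outside of Django, the filter logic is just string formatting and repetition, which makes it easy to sanity-check on its own. A plain-Python sketch (icon class names assumed from Bootstrap 2's icon set):

```python
# Plain-Python sketch of what the two filters emit (no Django required).
def yesnoicon(value):
    # icon-ok / icon-remove are assumed Bootstrap 2 class names
    icon = 'icon-ok' if value else 'icon-remove'
    return '<i class="%s"></i>' % icon

def ratingicon(value):
    # Repeats one star icon `value` times.
    return '<i class="icon-star"></i>' * value

print(yesnoicon(True))   # <i class="icon-ok"></i>
print(ratingicon(3))
```

In the real filters the only additions are the `@register.filter` decorator and `mark_safe`, which tells Django's template engine not to escape the HTML.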
A super simple filter interface to make using bootstrap in Django that much easier. Enjoy!
Source: http://pythondiary.com/blog/Aug.24,2012/bootstrap-template-filters-django.html
I have health bar textures and a working health system, but how do I make it into a bar? That is, how would I change between the images according to how much health is left out of 100?
You can try this tutorial if you want to create a dynamic health bar that also changes color according to the amount of health left. This is all done with the new UI from Unity 4.6: Health bar tutorial
Answer by duck · May 13, 2010 at 03:14 PM
Have a look at this answer, which gives source code for display health bars using the new GUI system.
Works great, thanks! I had to change it a bit to make it for health and get rid of the box around it, but otherwise it works perfectly!

The link doesn't seem to work anymore. Can you make a new one? Thanks!
duck's link probably went to this question:
fixed the link!
If you would like to use the new Unity UI system to create a health bar, go to this link: Health Bar Unity 4.6
Answer by spinaljack · May 13, 2010 at 02:10 PM
There are plenty of tutorials with health bars in them. Take a look at the 3D platformer tutorial; it has a round health bar, but I'm pretty sure you can work out how to make a straight one. Also take a look at the UnityGUI guide.
Answer by RyanZimmerman87 · Feb 08, 2013 at 10:37 AM
Here is a way to set this up for the player's health in case anyone is looking for that solution in the future (in C#).
This will work perfectly if you have a script attached to your main character (in this example PlayerMoveScript) with two public static variables for the player's current health (playerHealth), and their maximum health (playerHealthTotal). Just make sure that their maximum health is a float or the division will not work properly.
You can create two textures in Photoshop and size them to be identical to the Vector 2 (size) variable. Once you import the textures into Unity (.PSD extension works fine) you can switch the texture type from Texture to GUI in the Inspector.
Hope this helps someone!
using UnityEngine;
using System.Collections;

public class PlayerHealthBarScript : MonoBehaviour
{
    public GUIStyle progress_empty;
    public GUIStyle progress_full;

    // current progress
    public float barDisplay;

    Vector2 pos = new Vector2(10, 50);
    Vector2 size = new Vector2(250, 50);
    public Texture2D emptyTex;
    public Texture2D fullTex;

    void OnGUI()
    {
        // draw the background:
        GUI.BeginGroup(new Rect(pos.x, pos.y, size.x, size.y), emptyTex, progress_empty);
        GUI.Box(new Rect(pos.x, pos.y, size.x, size.y), fullTex, progress_full);

        // draw the filled-in part:
        GUI.BeginGroup(new Rect(0, 0, size.x * barDisplay, size.y));
        GUI.Box(new Rect(0, 0, size.x, size.y), fullTex, progress_full);
        GUI.EndGroup();

        GUI.EndGroup();
    }

    void Update()
    {
        // the player's health
        barDisplay = PlayerMoveScript.playerHealth / PlayerMoveScript.playerHealthTotal;
    }
}
Answer by awplays49 · Jan 23, 2015 at 01:24 PM
I know this has been answered a ton of times, but I thought I'd answer it :)
My way of doing this:
Make a UI slider and leave the canvas properties the same. Turn off interactable in the properties of the slider.
Make a health variable in a script attached to the player.
Make a public GameObject variable in a script attached to the healthbar. Drag the player in.
Write
using UnityEngine.UI;
After
using UnityEngine;
Make a public Slider and do
SliderVar = GetComponent<Slider>();
SliderVar being the slider type variable.
Write this code in update:
SliderVar.value = PlayerVar.GetComponent<PlayerScript>().HealthVar;
PlayerVar being the public GameObject you wrote in the health script, PlayerScript being the player's script, and HealthVar being the health variable in the player's script.
Hope this helps :-)
Answer by Deatrock · Jul 30, 2014 at 05:13 AM
And you know, you can use a horizontal slider; it's just one line with GUILayout.
Source: http://answers.unity3d.com/questions/17255/how-do-i-make-a-health-bar.html
Like the title says, I need help displaying a doubly linked list in forward order. No matter what I do, I can only get it to display in its regular reverse order.
Here's the code:
public class DoubleLinkedList {
    private Node h;

    public DoubleLinkedList() {
        h = new Node();
        h.back = null;
        h.l = null;
        h.next = null;
    }

    public void showAll() {
        Node p = h.next;
        while (p != null) {
            System.out.println(p.l.toString());
            p = p.next;
        }
    }

    public class Node {
        private StudentListing l;
        private Node next;
        private Node back;

        public Node() {
        }
    }
}
And thanks for taking the time to help me.
PS: As you may have noticed, I can't use the LinkedList class for this program.
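For reference, one common way to get the opposite ordering out of a doubly linked list is to walk to the tail via the next pointers and then print while following the back pointers. A standalone sketch of that idea, simplified to String payloads instead of the StudentListing used above:

```java
// Simplified doubly linked node; the real code stores a StudentListing.
class Node {
    String value;
    Node next;
    Node back;
    Node(String value) { this.value = value; }
}

public class ReverseWalk {
    // Advance to the tail, then collect values following back pointers.
    static String showReversed(Node head) {
        if (head == null) return "";
        Node p = head;
        while (p.next != null) p = p.next;   // find the tail
        StringBuilder out = new StringBuilder();
        while (p != null) {                  // walk back toward the head
            out.append(p.value).append('\n');
            p = p.back;
        }
        return out.toString();
    }

    public static void main(String[] args) {
        Node a = new Node("A");
        Node b = new Node("B");
        Node c = new Node("C");
        a.next = b; b.back = a;
        b.next = c; c.back = b;
        System.out.print(showReversed(a)); // prints C, B, A on separate lines
    }
}
```

This only works if every insertion keeps both next and back consistent, which is usually where forward/backward display bugs hide.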
Source: https://www.daniweb.com/programming/software-development/threads/273664/i-need-to-display-a-doubly-linked-list-in-reverse-and-forward-order
14 March 2011 05:57 [Source: ICIS news]
SINGAPORE (ICIS)--Japan’s SunAllomer’s 67,000 tonne/year polypropylene (PP) unit at Kawasaki may remain shut even after maintenance at the site is completed on 22 March because its propylene supply was cut off after last week’s devastating earthquake, a source close to the company said on Monday.
The company’s other PP unit at Kawasaki, with the same capacity, has been shut since last Friday as a precaution after the earthquake.

Neither plant suffered any damage from the earthquake, the source said.
SunAllomer gets propylene feedstock from one of its stakeholders, JX Nippon Oil & Energy Corporation, whose cracker at
LyondellBasell and Showa Denko are the other shareholders in SunAllomer.
Source: http://www.icis.com/Articles/2011/03/14/9443311/japans-sunallomers-kawasaki-pp-unit-may-remain-shut-longer.html
An in-depth look at make files and the make utility is beyond the scope of this article. However, two points of immediate relevance are the bsd.kmod.mk make file and the ability to include other make files within each other.

The bsd.kmod.mk makefile resides in /usr/src/share/mk/bsd.kmod.mk and takes all of the pain out of building and linking kernel modules properly. As you are about to see, you simply have to set two variables:

- the name of the kernel module itself via the "KMOD" variable;
- the source files configured via the intuitive "SRCS" variable.

Then, all you have to do is include <bsd.kmod.mk> to build the module. This elegant setup lets you build your kernel module with only the following skeletal make file and a simple invocation of the "make" utility.
The Makefile for our introductory kernel module looks like this:
# Note: It is important to make sure you include the <bsd.kmod.mk>
# makefile after declaring the KMOD and SRCS variables.

# Declare name of kernel module
KMOD = hello_fsm

# Enumerate source files for kernel module
SRCS = hello_fsm.c

# Include kernel module makefile
.include <bsd.kmod.mk>
Create a new directory called kernel under your home directory. Copy and paste the text above into a file called Makefile. This will be your working base going forward.
Creating a module
Now that you have a clue about the build environment, it's time to take a look at the actual code behind a FreeBSD kernel module and the mechanisms for inserting and removing a module from a running kernel.
A kernel module allows dynamic functionality to be added to a running kernel. When a kernel module is inserted, the “load” event is fired. When a kernel module is removed, the “unload” event is fired. The kernel module is responsible for implementing an event handler that handles these cases.
The running kernel will pass in the event in the form of a symbolic constant defined in the /usr/include/sys/module.h (<sys/module.h>) header file. The two main events you are concerned with are MOD_LOAD and MOD_UNLOAD.
How does the running kernel know which function to call and pass an event type to as a parameter? The module is responsible for configuring that call-back as well, by using the DECLARE_MODULE macro.

The DECLARE_MODULE macro is defined in the <sys/module.h> header on line 117. It takes four parameters in the following order:
- name. Defines the name.
- data. Specifies the name of the moduledata_t structure, which I've named hello_conf in my implementation. The moduledata_t type is defined at line 55 of <sys/module.h>. I'll talk about this briefly.
- sub. Sets the subsystem interface, which defines the module type.
- order. Defines the module's initialization order within the defined subsystem.
The moduledata structure contains the name, defined as a char variable, and the event handler routine, defined as a modeventhand_t structure which is defined at line 50 of <sys/module.h>. Finally, the moduledata structure has a void pointer for any extra data, which you won't be using.
If your head is about to explode from the overview without any code to put it in context, fear not. That is the sum of what you need to know to start writing your kernel module, and so with that, "once more into the breach dear friends". Before you get started, make sure you are in the same kernel directory where you previously created the Makefile file. Fire up your text editor of choice and open a file called hello_fsm.c.
First, include the header files required for the data types used. You've already seen <sys/module.h>; the other includes are supporting header files.
#include <sys/param.h>
#include <sys/module.h>
#include <sys/kernel.h>
#include <sys/systm.h>
Next, you are going to implement the event_handler function. This is what the kernel will call, passing either MOD_LOAD or MOD_UNLOAD via the event parameter. If everything runs normally, it will return a value of 0 upon completion. However, you should handle the possibility that something will go wrong: if the event parameter is neither MOD_LOAD nor MOD_UNLOAD, you will set e, your error tracking variable, to EOPNOTSUPP.
/* The function called at load/unload. */
static int
event_handler(struct module *module, int event, void *arg)
{
    int e = 0; /* Error, 0 for normal return status */

    switch (event) {
    case MOD_LOAD:
        uprintf("Hello Free Software Magazine Readers! \n");
        break;
    case MOD_UNLOAD:
        uprintf("Bye Bye FSM reader, be sure to check !\n");
        break;
    default:
        e = EOPNOTSUPP; /* Error, Operation Not Supported */
        break;
    }

    return (e);
}
Next, you're going to define the second parameter to the DECLARE_MODULE macro, which is of type moduledata_t. This is where you set the name of the module and expose the event_handler routine to be called when loaded and unloaded from the kernel.
/* The second argument of DECLARE_MODULE. */
static moduledata_t hello_conf = {
    "hello_fsm",    /* module name */
    event_handler,  /* event handler */
    NULL            /* extra data */
};
And finally, you're going to make the much-talked-about call to DECLARE_MODULE with the name of the module and the hello_conf structure.
DECLARE_MODULE(hello_fsm, hello_conf, SI_SUB_DRIVERS, SI_ORDER_MIDDLE);
All that is left to do is build the module. Double check that you are in the same directory as the module's makefile you saw earlier and run:
make
To load the module, you have two options: the kldload utility, or the load make target via the <bsd.kmod.mk> makefile. You must use both options via the "sudo" utility, as loading and unloading modules requires root privileges.

sudo kldload ./hello_fsm.ko
# or
sudo make load
You should see the message "Hello Free Software Magazine Readers!" on your console. To view all loaded modules, use the kldstat utility with no arguments. kldstat does not require root privileges, and you can verify that the module is indeed loaded.

kldstat
Id Refs Address    Size   Name
 1    8 0xc0400000 926ed4 kernel
 2    1 0xc0d27000 6a1c4  acpi.ko
 3    1 0xc317e000 22000  linux.ko
 4    1 0xc4146000 2000   hello_fsm.ko
To unload the module, use kldunload or the unload target in the <bsd.kmod.mk> make file. You should see the message printed in the MOD_UNLOAD case, which is "Bye Bye FSM reader, be sure to check!"

sudo kldunload hello_fsm
# or
sudo make unload
Conclusion
There you have it: a basic, skeletal kernel module. It prints a message when loaded and a separate message when being unloaded from the kernel. This article covered the mechanics of building, inserting and removing the module. You now have the basic building blocks to take on more advanced projects; I would recommend writing a character device driver, as it is probably the next simplest kind of driver.
I hope this has been as much fun for you as it has been for me!
Resources
Books:
The Design and Implementation of the FreeBSD Operating System, by Marshall Kirk McKusick and George V. Neville-Neil
Designing BSD Rootkits, an Introduction to Kernel Hacking, by Joseph Kong
Source: http://www.freesoftwaremagazine.com/node/2620/pdf
SMS User Authentication With Vapor and AWS
In this SMS user authentication tutorial, you’ll learn how to use Vapor and AWS SNS to authenticate your users with their phone numbers.
Version
- Swift 5, macOS 10.15, Xcode 11
There are many reasons why you’d want to verify your app’s users and identify them by phone number. SMS-based authentication is one of the options for a quick login experience that doesn’t require remembering passwords.
Nowadays, there are many services that provide SMS — aka short message service — authentication on your behalf. Using one might save you some time writing backend code, but it adds another dependency to your server and all your clients.
Writing your own solution is simpler than you think. If you already have a Vapor server for your app, or if you want to build a microservice for it, then you’ve come to the right place!
In this tutorial, you’ll learn how to build your own SMS authentication with Vapor and Amazon Web Services’ SNS. SNS, or Simple Notification Service, is the AWS service for sending messages of various types: push, email and of course, SMS. It requires an AWS account and basic knowledge of Vapor and Swift.
By the end of this tutorial, you’ll have two HTTP APIs that will allow you to create a user for your app.
Getting Started
Download the materials for this tutorial using the Download Materials button at the top or bottom of this page. Navigate to the materials’ Starter directory in your favorite terminal application and run the following command:
open Package.swift
Once your project is open in Xcode, it’ll fetch all the dependencies defined in the manifest. This may take a few minutes to complete. Once that’s finished, build and run the Run scheme to make sure the starter project compiles. As a last step before you start coding, it’s always a great idea to browse through the starter project’s source code to get a sense of the layout and various pieces.
How SMS Auth Works Behind the Curtain
You’ve most likely used an app with SMS authentication before. Insert your phone number, move to another screen, enter the code received in the SMS and you’re in. Have you ever thought about how it works behind the scenes?
If you haven’t, fear not: I’ve got you covered!
- The client asks the server to send a code to a phone number.
- The server creates a four- or six-digit code and asks an SMS provider to deliver it to the phone number in question.
- The server adds an entry in the database associating the sent code with the phone number.
- The user receives the SMS and inputs it in the client.
- The client sends the code back to the server.
- The server queries the database for the phone number and tries to match the code it saved before to the code it received.
- If they match, the server looks in the database to see if a user is associated with the phone number. If it doesn’t find an existing user, it creates a new one.
- The server returns the user object, along with some sort of authentication token, to the client.
You can see the steps detailed above in this diagram:
Interacting With AWS SNS
To execute step two in the diagram above, you’ll need to create a class that asks SNS to send the text message. In the Sources ► App folder, create a new Swift file named SMSSender.swift. Make sure you’re creating this file in the App target. Next, add the following:
import Vapor

// 1
protocol SMSSender {
  // 2
  func sendSMS(
    to phoneNumber: String,
    message: String,
    on eventLoop: EventLoop) throws -> EventLoopFuture<Bool>
}
There are a few things to notice here:
- You define a protocol called SMSSender, which creates an abstraction around sending an SMS. This means it can potentially be used to create many classes, each with its own mechanism for SMS delivery.
- sendSMS(to:message:on:) receives a destination phone number, a text message and the current EventLoop, and it returns an EventLoopFuture<Bool>. This is a future value that indicates if sending the message succeeded or failed. You can learn more about EventLoopFuture and asynchronous programming in this article or Vapor's documentation.
Next, you’ll create the class that implements this protocol. Under the Sources ► App folder, create a file named AWSSNSSender.swift and add the following code to it:
import Vapor
import SNS

class AWSSNSSender {
  // 1
  private let sns: SNS
  // 2
  private let messageAttributes: [String: SNS.MessageAttributeValue]?

  init(accessKeyID: String, secretAccessKey: String, senderId: String?) {
    // 3
    sns = SNS(accessKeyId: accessKeyID, secretAccessKey: secretAccessKey)
    // 4
    messageAttributes = senderId.map { sender in
      let senderAttribute = SNS.MessageAttributeValue(
        binaryValue: nil,
        dataType: "String",
        stringValue: sender)
      return ["AWS.SNS.SMS.SenderID": senderAttribute]
    }
  }
}
This is the class definition and initialization. Here’s an overview of what the code above does.
- This keeps a private property of the SNS class. This class comes from the AWSSDKSwift dependency declared in Package.swift. Notice that in the second line, you need to import the SNS module.
- SNS allows setting specific message attributes. You're interested in SenderID so that the SMS messages arrive with the sender name of your app. The class will use messageAttributes whenever a message is sent as part of the payload.
- The initializer receives your AWS access key and the matching secret. You pass these on to the SNS class initializer.
- The initializer may also receive an optional senderId. Use the map method on the Optional argument to map it to the messageAttributes dictionary. If senderId is nil, messageAttributes will also be nil. If it has a value, map will transform the string into the needed dictionary.
For security, and to allow for easier configuration, don’t hardcode your AWS keys into your app. Instead, a best practice is to use environment variables. These variables are set in the environment in which the server process runs, and they can be accessed by the app at runtime.
To add environment variables in Xcode, edit the Run scheme:
You can also edit the current scheme by typing Command + Shift + ,
Then, select the Arguments tab. Under Environment Variables, click the + button to add a new variable.
You’ll need two variables: AWS_KEY_ID and AWS_SECRET_KEY. Add the corresponding value for each one:
Add the values of the variables in Xcode.
Next, add an extension below the code you just wrote to make AWSSNSSender conform to the SMSSender protocol:
extension AWSSNSSender: SMSSender {
  func sendSMS(
    to phoneNumber: String,
    message: String,
    on eventLoop: EventLoop) throws -> EventLoopFuture<Bool> {
    // 1
    let input = SNS.PublishInput(
      message: message,
      messageAttributes: messageAttributes,
      phoneNumber: phoneNumber)
    // 2
    return sns.publish(input).hop(to: eventLoop).map { $0.messageId != nil }
  }
}
This protocol conformance is straightforward. It delegates the request to publish a message to the AWS SNS service like so:
- First, you create a PublishInput struct with the message, the attributes created in the initialization and the recipient's phone number.
- Next, you ask the SNS instance to publish the input. Because it returns an EventLoopFuture<PublishResponse> in another EventLoop, use hop(to:) to get back to the request's event loop. Then map the response to a Boolean by making sure its messageId exists. The existence of the messageId means that the message has been saved and Amazon SNS will try to deliver it.
Finally, you still need to initialize an instance of AWSSNSSender and register it in the configuration. In Vapor 4, services can be registered to the Application instance using storage. Open SMSSender.swift and add the following code:
// 1
private struct SMSSenderKey: StorageKey {
  typealias Value = SMSSender
}

extension Application {
  // 2
  var smsSender: SMSSender? {
    get { storage[SMSSenderKey.self] }
    set { storage[SMSSenderKey.self] = newValue }
  }
}
To allow registering a service, you need to:
- Declare a type that conforms to StorageKey. The only requirement is having a typealias for the type of the value you'll store, in this case a SMSSender.
- Extending Application, add a property for SMSSender and implement the getter and the setter, which each use the application's storage.
Now it's time to initialize and register the service. Open configure.swift and add this block of code after try app.autoMigrate().wait():
// 1
guard let accessKeyId = Environment.get("AWS_KEY_ID"),
  let secretKey = Environment.get("AWS_SECRET_KEY") else {
  throw ConfigError.missingAWSKeys
}

// 2
let snsSender = AWSSNSSender(
  accessKeyID: accessKeyId,
  secretAccessKey: secretKey,
  senderId: "SoccerRadar")

// 3
app.smsSender = snsSender
Here’s what you’re doing in the code above:
- You retrieve the AWS keys from your environment variables, throwing an error if your app can't find them.
- You initialize AWSSNSSender with those keys and the app's name. In this case, the name is SoccerRadar.
- You register the snsSender as the application's SMSSender. This uses the setter you defined in the Application extension in the previous code block.
Once you have the sender configured, initialized and registered, it’s time to move on to actually using it.
Your First API: Sending the SMS
When looking at the initial code in the starter project, you’ll find the kinds of definitions outlined below.
Models and Migrations
In Vapor 4, for each model you define, you need to perform a migration and create — or modify — the entity in the database. In the starter project, you’ll find the models and migrations in the folders with the same names.
- User / CreateUser: This entity represents your users. Notice how the migration adds a unique index on the phoneNumber property. This means the database won't accept two users with the same phone number.
- SMSVerificationAttempt / CreateSMSVerificationAttempt: The server saves every verification attempt containing a code and a phone number.
- Token / CreateToken: Whenever a user successfully authenticates, the server generates a session, represented by a token. Vapor will use it to match and authenticate future requests by the associated user.
Others
- UserController: This controller handles the requests, asks SNS to send the messages, deals with the database layer and provides adequate responses.
- A String extension with a method and a computed property: randomDigits generates an n-digit numeric code, and removingInvalidCharacters returns a copy of the original String with any character that is not a digit or a + removed.
Before creating your API method, it’s important to define which data will flow to and from the server. First, the server receives a phone number. After sending the SMS, it returns the phone number — formatted without dashes — and the verification attempt identifier.
Create a new file named UserControllerTypes.swift with the following code:
import Vapor

extension UserController {
  struct SendUserVerificationPayload: Content {
    let phoneNumber: String
  }

  struct SendUserVerificationResponse: Content {
    let phoneNumber: String
    let attemptId: UUID
  }
}
Vapor defines the Content protocol, which allows receiving and sending request and response bodies. Now, create the first request handler. Open UserController.swift and define the method that will handle the request in the UserController class:
private func beginSMSVerification(_ req: Request)
  throws -> EventLoopFuture<SendUserVerificationResponse> {
  // 1
  let payload = try req.content.decode(SendUserVerificationPayload.self)
  let phoneNumber = payload.phoneNumber.removingInvalidCharacters
  // 2
  let code = String.randomDigits(ofLength: 6)
  let message = "Hello soccer lover! Your SoccerRadar code is \(code)"
  // 3
  return try req.application.smsSender!
    .sendSMS(to: phoneNumber, message: message, on: req.eventLoop)
    // 4
    .flatMap { success -> EventLoopFuture<SMSVerificationAttempt> in
      guard success else {
        let abort = Abort(
          .internalServerError,
          reason: "SMS could not be sent to \(phoneNumber)")
        return req.eventLoop.future(error: abort)
      }
      let smsAttempt = SMSVerificationAttempt(
        code: code,
        expiresAt: Date().addingTimeInterval(600),
        phoneNumber: phoneNumber)
      return smsAttempt.save(on: req)
    }
    .map { attempt in
      // 5
      let attemptId = try! attempt.requireID()
      return SendUserVerificationResponse(
        phoneNumber: phoneNumber,
        attemptId: attemptId)
    }
}
Here’s a breakdown of what’s going on:
- The method expects a Request object, and it tries to decode a SendUserVerificationPayload from its body, which contains the phone number.
- Extract the phone number and remove any invalid characters.
- Create a six-digit random code and generate the text message to send with it.
- Retrieve the registered SMSSender from the application object. The force unwrap is acceptable in this case, as you previously registered the service in the server configuration. Then call sendSMS to send the SMS, passing the request's event loop as the last parameter.
- The sendSMS function returns a future Boolean. You need to save the attempt information, so you convert the type of the future from Boolean to SMSVerificationAttempt. First, make sure the SMS send succeeded. Then, create the attempt object with the sent code, phone number and an expiration of 10 minutes from the request's date. Finally, store it in the database.
- After sending the SMS and saving the attempt record, you create and return the response using the phone number and the ID of the attempt object. It's safe to call requireID() on the attempt after it's saved and has an ID assigned.
Alright — time to implement your second method!
Your Second API: Authenticating the Received Code
Similar to the pattern you used for the first API, you need to define what the second API should receive and return before implementing it.
Open UserControllerTypes.swift again and add the following structs inside the UserController extension:
struct UserVerificationPayload: Content {
  let attemptId: UUID       // 1
  let phoneNumber: String   // 2
  let code: String          // 3
}

struct UserVerificationResponse: Content {
  let status: String          // 4
  let user: User?             // 5
  let sessionToken: String?   // 6
}
In the request payload, the server needs to receive the following to match the values and verify the user:
- The attempt ID
- The phone number
- The code the user received
Upon successful validation, the server should return:
- The status
- The user
- The session token
If validation fails, only the status should be present, so user and sessionToken are both optional.
As a quick recap, here’s what the controller needs to do:
- Query the database to check if the codes match.
- Validate the attempt based on the expiration date.
- Find or create a user with the associated phone number.
- Create a new token for the user.
- Wrap the user and the token’s value in the response.
This is a lot to handle in a single method, so you’ll split it into two parts. The first part will validate the code, and the second will find or create the user and their session token.
Validating the Code
Add this first snippet to UserController.swift, inside the UserController class:
private func validateVerificationCode(_ req: Request)
  throws -> EventLoopFuture<UserVerificationResponse> {
  // 1
  let payload = try req.content.decode(UserVerificationPayload.self)
  let code = payload.code
  let attemptId = payload.attemptId
  let phoneNumber = payload.phoneNumber.removingInvalidCharacters
  // 2
  return SMSVerificationAttempt.query(on: req.db)
    .filter(\.$code == code)
    .filter(\.$phoneNumber == phoneNumber)
    .filter(\.$id == attemptId)
    .first()
    .flatMap { attempt in
      // 3
      guard let expirationDate = attempt?.expiresAt else {
        return req.eventLoop.future(
          UserVerificationResponse(
            status: "invalid-code",
            user: nil,
            sessionToken: nil))
      }
      guard expirationDate > Date() else {
        return req.eventLoop.future(
          UserVerificationResponse(
            status: "expired-code",
            user: nil,
            sessionToken: nil))
      }
      // 4
      return self.verificationResponseForValidUser(with: phoneNumber, on: req)
    }
}
Here’s what this method does:
- It first decodes the request body into a UserVerificationPayload to extract the three pieces needed to query the attempt. Remember that it needs to remove possible invalid characters from the phone number before it can use it.
- Then it creates a query on the SMSVerificationAttempt, and it finds the first attempt record that matches the code, phone number and attempt ID from the previous step. Notice the usefulness of Vapor Fluent's support for filtering by key path and operator expression.
- It attempts to unwrap the queried attempt's expiresAt date and ensures that the expiration date hasn't yet occurred. If either of these guards fails, it returns a response with only the invalid-code or expired-code status, leaving out the user and session token.
- It calls the second method, which will take care of getting the user and session token from a validated phone number, and it wraps them in the response.
If you try to compile the project now, it'll fail. Don't worry: that's because verificationResponseForValidUser is still missing.
Returning the User and the Session Token
Right below the code you added in UserController.swift, add this:
private func verificationResponseForValidUser(
  with phoneNumber: String,
  on req: Request) -> EventLoopFuture<UserVerificationResponse> {
  // 1
  return User.query(on: req.db)
    .filter(\.$phoneNumber == phoneNumber)
    .first()
    // 2
    .flatMap { queriedUser -> EventLoopFuture<User> in
      if let existingUser = queriedUser {
        return req.eventLoop.future(existingUser)
      }
      return User(phoneNumber: phoneNumber).save(on: req)
    }
    .flatMap { user -> EventLoopFuture<UserVerificationResponse> in
      // 3
      return try! Token.generate(for: user)
        .save(on: req)
        .map {
          UserVerificationResponse(
            status: "ok",
            user: user,
            sessionToken: $0.value)
        }
    }
}
There’s a lot going on here, but it’ll all make sense if you look at everything piece by piece:
- First, look for an existing user with the given phone number. queriedUser is optional because the user might not exist yet. If an existing user is found, it's immediately returned in an EventLoopFuture. If not, create and save a new one.
- Finally, create a new Token for this user and save it. Upon completion, map it to the response with the user and the session token.
Build and run your server. It should compile without any issues. Now it’s time to call your APIs!
Testing the APIs With cURL
In the following example, you’ll use
curl in the command line, but feel free to use another GUI app you might feel comfortable with, such as Postman or Paw.
Now open Terminal and execute the following command, replacing +1234567890 with your phone number. Don't forget your country code:
curl -X "POST" "" \
  -H 'Content-Type: application/json; charset=utf-8' \
  -d $'{ "phoneNumber": "+1234567890" }'
Oops. This request returns an HTTP 404 error: {"error":true,"reason":"Not Found"}.
Registering the Routes
When you see a 404 error, it's most likely because the functions weren't registered with the Router in use, or the HTTP method used doesn't match the registered method. You need to make UserController conform to RouteCollection so you can register it in the routes configuration. Open UserController.swift and add the following at its end:
extension UserController: RouteCollection {
  func boot(routes: RoutesBuilder) throws {
    // 1
    let usersRoute = routes.grouped("users")
    // 2
    usersRoute.post("send-verification-sms", use: beginSMSVerification)
    usersRoute.post("verify-sms-code", use: validateVerificationCode)
  }
}
The code above has two short steps:
- First, it groups the routes under the users path. This means that all routes added to usersRoute will be prefixed by users.
- Then it registers two HTTP POST endpoints, providing each endpoint with one of the handler methods you defined above.
Now, open routes.swift and add this line inside the only existing function. This function registers your app’s routes:
try app.register(collection: UserController())
Calling the First API
Build and run your project again and try the previously failing curl command by pressing the up arrow key followed by Enter. You'll get the following response with a new UUID:
{
  "attemptId": "477687D3-CA79-4071-922C-4E610C55F179",
  "phoneNumber": "+1234567890"
}
This response is your server saying that sending the SMS succeeded. Check for the message on your phone.
Excellent! Notice how the sender ID you used in the initialization of AWSSNSSender is working correctly.
Calling the Second API
Now you’re ready to test the second part of the authentication: verifying the code. Take the
attemptId from the previous request, the phone number you used in the previous step, and the code you received and place them into the following command. Then run the command in Terminal:
curl -X "POST" "" \
  -H 'Content-Type: application/json; charset=utf-8' \
  -d $'{"phoneNumber": "+1234567890", "attemptId": "<YOUR_ATTEMPT_ID>", "code": "123456" }'
If you replaced each parameter correctly, the request will return an object with three properties: the status, the user object and a session token:
{
  "status": "ok",
  "user": {
    "id": "31D39FAD-A0A9-46E7-91CF-AEA774EA0BBE",
    "phoneNumber": "+1234567890"
  },
  "sessionToken": "lqa99MN31o8k43dB5JATVQ=="
}
Mission accomplished! How cool is it to build this yourself, without giving up on your users’ privacy or adding big SDKs to your client apps?
Where to Go From Here?
Download the completed project files by clicking the Download Materials button at the top or bottom of this tutorial.
Save the session token in your client apps as long as the user is logged in. Check out Section III: Validation, Users & Authentication of the Server-Side Swift with Vapor book to learn how to use the session token to authenticate other requests. The chapters on API authentication are particularly helpful.
You can also read the documentation of Vapor’s Authentication API to better understand where you should add the session token in subsequent requests.
Do you want to continue improving your SMS authentication flow? Try one of these challenges:
- Start using a PostgreSQL or MySQL database instead of in-memory SQLite and make changes to your app accordingly.
- To avoid privacy issues and security breaches, hash phone numbers before saving and querying them, both in the User and the SMSVerificationAttempt models.
- Think of ways to improve the flow. For example, you could add an isValid Boolean to make sure the code is only used once, or delete the attempt upon successful verification.
- Implement a job that deletes expired and successful attempts.
We hope you enjoyed this tutorial. If you have any questions or comments, please join the forum discussion below!
Hello, I am a little stuck. I have grasped the concept of methods, understanding their purpose, but I am still unsure how to code one. Below is part of my code that I kindly need guidance on. I am trying to create a method which can hold roles and, depending on which switch option is selected, return the amount. I have four employment roles which all have different payscales. I have commented the errors on the rate method. If I can correct this and make it work then I can complete this program :)
import java.util.Scanner; // allows input from the keyboard
import java.text.*;

// a class is a collection of static and non-static methods and variables
class Wage_Method_2 {
    public static void main(String[] args) { // The main method; starting point of the program.
        Scanner input = new Scanner(System.in); // declaring and creating scanner; an input stream
        DecimalFormat fmt = new DecimalFormat("0.00");
        //DecimalFormat Currency = new DecimalFormat("###,###,##0.00");

        double normHrs = 0, overHrs = 0, bonusHrs = 0, actualHrs = 0, cash = 0,
               overPay = 0, bonusPay = 0, grossWage = 0, vat23 = 0, netWage = 0;
        String empName, nextEMP = "y", manager, superVisor, teamLead, general; // declaring variables

        System.out.println("\t\t************************");
        System.out.println("\t\tHUMAN RESOURCES: PAYROLL"); // displays title of program
        System.out.println("\t\t************************");

        // this loops around the entire program to allow the user to repeat adding in emp hours
        while (nextEMP.equalsIgnoreCase("y")) {
            System.out.print("Please enter an employee's first name: ");
            empName = input.next();
            System.out.print("Please enter " + empName + "'s employment ID: ");
            int job = input.nextInt();
            System.out.print("Please enter their total hours worked: ");
            actualHrs = input.nextDouble();

            double basicPay = rate(5.75) * actualHrs;
            System.out.println("Basic Pay: " + basicPay);

            if (nextEMP.equalsIgnoreCase("n")) { // if "n" is entered...
                System.out.println("Thank you, the hours have been successfully logged."); // polite exit message
            }
        } // end of while loop
    }

    // when compiling these errors appear: error: ')' expected and error: <identifier> expected
    public static double rate(double cash String jobrole) {
        switch (jobrole) {
            case 1: cash = 5.75;
            case 2: cash = 5.75 * 1.1;
            case 3: cash = 5.75 * 1.25;
            case 4: cash = 5.75 * 1.42;
            break;
        }
        return (cash);
    }
} // end of program
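For reference, here is a corrected sketch of the rate method. The compile errors come from the missing comma between the two parameters; beyond that, the case labels are ints (so the parameter should be an int role ID rather than a String), and each case needs its own break so the values don't fall through. The role-to-rate mapping below is an assumption based on the four roles mentioned in the post:

```java
public class WageRates {
    // Hourly rate per job role ID (1-4). The role order and
    // multipliers are assumptions based on the original post.
    public static double rate(int jobRole) {
        double cash;
        switch (jobRole) {
            case 1: cash = 5.75;        break; // general
            case 2: cash = 5.75 * 1.1;  break; // team lead
            case 3: cash = 5.75 * 1.25; break; // supervisor
            case 4: cash = 5.75 * 1.42; break; // manager
            default: cash = 0.0;        break; // unknown role ID
        }
        return cash;
    }

    public static void main(String[] args) {
        // Basic pay for 40 hours at role 1:
        System.out.println(rate(1) * 40); // prints 230.0
    }
}
```

In the main program, the call would then use the employment ID the user typed in, e.g. double basicPay = rate(job) * actualHrs;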
Recursion is an important programming technique. It's used to have a function call itself from within itself. One handy example is the calculation of factorials. The factorials of 0 and 1 are both defined specifically to be 1. The factorials of larger numbers are calculated by multiplying 1 * 2 * ..., incrementing by 1 until you reach the number for which you're calculating the factorial.
The following paragraph is a function, defined in words, that calculates a factorial.
"If the number is less than zero, reject it. If it isn't an integer, round it down to the next integer. If the number is zero or one, its factorial is one. If the number is larger than one, multiply it by the factorial of the next smaller number."
To calculate the factorial of any number larger than 1, you multiply that number by the factorial of the next smaller number, and that factorial is computed the same way, until the recursion reaches 1.
Clearly, there is a way to get in trouble here. You can easily create a recursive function that never stops calling itself. To avoid any chance of an infinite recursion, you can have the function count the number of times it calls itself. If the function calls itself too many times, however many you decide that should be, it automatically quits.
Here's the factorial function again, this time written in JavaScript code.

function factorial(aNumber) {
  aNumber = Math.floor(aNumber);  // If the number is not an integer, round it down.
  if (aNumber < 0) {              // If the number is less than zero, reject it.
    return "not a defined quantity";
  }
  if ((aNumber == 0) || (aNumber == 1)) {  // If the number is 0 or 1, its factorial is 1.
    return 1;
  } else {
    return (aNumber * factorial(aNumber - 1));  // Otherwise, recurse until done.
  }
}
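The call-counting safeguard described above can be sketched like this; the name safeFactorial and the limit of 100 calls are arbitrary choices for illustration:

```javascript
// Factorial with a recursion-depth guard. If the function calls
// itself more than 100 times, it aborts instead of recursing forever.
function safeFactorial(aNumber, depth) {
  depth = depth || 0;             // depth defaults to 0 on the first call
  if (depth > 100) {
    throw new Error("too many recursive calls");
  }
  aNumber = Math.floor(aNumber);  // round non-integers down
  if (aNumber < 0) {
    return "not a defined quantity";
  }
  if (aNumber === 0 || aNumber === 1) {
    return 1;
  }
  return aNumber * safeFactorial(aNumber - 1, depth + 1);
}

console.log(safeFactorial(5)); // 120
```

Throwing from the guard (rather than returning a sentinel value) matters here: a sentinel string returned mid-recursion would be multiplied by the outer calls and silently turn into NaN.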
- .com or self-managed
There are some differences in how a subscription applies, depending if you use GitLab.com or a self-managed instance:
- GitLab.com: The GitLab. associated namespace
- Change customers portal account password
Change
To change your company details, including company name and VAT number:
- Log in to the Customers Portal.
- Select My account > Account details.
- Expand the Company details section.
- Edit the company details.
- Click Save changes.
Change
To change the GitLab.com account
With a linked GitLab.com account:
- Log in to the Customers Portal.
- Navigate to the Manage Purchases page.
- Click Change linked namespace.
-, your account is charged for the additional users.
Change Customers Portal account password.
Embedding PNG Images
From WxWiki
[edit] Embedding PNG images into executables
XPM graphics files are really simple to embed into C++ source files, because they are made of plain C code, and you don't need to convert them (just #include them). The bad thing about XPM inclusion is the inflating of executables size and the lacking of alpha transparency.
PNG files do support alpha transparency, but are more difficult to embed, because we need to convert them into a C vector.
[edit] bin2c
First of all, you will need a program to convert the PNG binary files into a C vector along the lines of: static const unsigned char myimage_png[] = { 0x89, 0x50, 0x4e, 0x47, /* ... */ };
You can use a small program to do this converting for you:
- Embedding_PNG_Images-Bin2c In C, (slightly modified version using header guards)
- Embedding_PNG_Images-Bin2c In Perl
- Embedding_PNG_Images-Bin2c In PHP
- Embedding_PNG_Images-Bin2c In Python
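The core of such a converter is small. As a sketch (the function name and exact output format are illustrative, not taken from any of the scripts linked above), the following C function writes a byte buffer as a C array; a real bin2c tool would first fread the whole PNG file into such a buffer:

```c
#include <stdio.h>
#include <stddef.h>

/* Write the bytes in `data` as a C unsigned char array named `name`,
   in the same spirit as the bin2c output shown above. */
void write_c_array(FILE *out, const char *name,
                   const unsigned char *data, size_t n)
{
    fprintf(out, "static const unsigned char %s[] = {\n", name);
    for (size_t i = 0; i < n; i++) {
        /* 12 bytes per line keeps the generated file readable */
        fprintf(out, "0x%02x,%s", data[i], (i + 1) % 12 == 0 ? "\n" : " ");
    }
    fprintf(out, "\n};\n");
}
```

The generated file can then be compiled into the executable and the array passed to wxMemoryInputStream as described below.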
[edit] png2wx
png2wx is an alternative which also embeds files as a C string, but does so in a more compact way (by only escaping what is necessary). Running optipng beforehand is recommended. It is available as a perl script; a python version is also available, but it is a lot slower than Perl. The utility creates a .cpp file containing the images (provided directly as
wxBitmap *) and a .hpp for making the names available to other .cpp files. (The
-M flag allows you to specify a custom reinclusion guard name.)
png2wx.pl -C images.cpp -H images.hpp -M IMAGES_HPP images/*
Although it is called png2wx, it just embeds any files you specify (
images/* in this case) and the rest is up to the wxWidgets image handlers.
[edit] wxInclude
This tool will enable you to convert more images into one header.
Example:
wxInclude.exe --const --input-file=mydata1.bin --input-type=.png --input-type=.bmp --output-file=myheader.h mydata2.bin myimage.png
Most useful is --input-type=.png; this will convert all png files in the directory.
You can get the source code and static linked windows executable here.
[edit] Including
Once you have converted your graphics files with bin2c, you need to include them into your source code, e.g.:
#include <myimage_png.cpp>
For converting the images at runtime, you might use the wxMemoryInputStream class:
wxMemoryInputStream istream(myimage_png, sizeof myimage_png); wxImage myimage_img(istream, wxBITMAP_TYPE_PNG);
Or you can define a simple inline helper in your header file, for example:

static inline wxBitmap wxGetBitmapFromMemory(const unsigned char *data, int length)
{
    wxMemoryInputStream stream(data, length);
    return wxBitmap(wxImage(stream, wxBITMAP_TYPE_PNG));
}
That's all! :)
[edit] Notes
Remember to add the support for the PNG format in your wxApp::OnInit() function:
wxImage::AddHandler(new wxPNGHandler);
This technique of embedding a PNG is also used by the Audacity sound editor (latest CVS only). It has the bin2c code built into it. It uses wxWidgets code to combine and split pngs, so that a theme consists of a single large png rather than many small ones.
[edit] Authors
Sandro Sigala <sandro AT sigala DOT it>
Nicola Leoni <nicola AT exilo DOT net>
#include <serial_port.h>
List of all members.
The RS422 class provides a common interface for RS-422 serial ports. Support for RS-422 ports is Linux-specific, and is not for general use.
A constructor.
It opens a serial port, and set it up to a given baudrate. Refer open(const char*,speed_t) for parameter description.
A destructor.
[inline]
Check how many bytes are available in the seria port buffer.
Close the serial port, and re-store the old configurations.
Check if the last operation was failed.
[inline, protected]
Retrieve the current time in floating-point format.
Check if the last operation was successful.
Current baudrate.
Mutex for synchronized access.
Storage for old configuration.
Result of a last operation.
File descriptor for a serial port.
Feb 18 2019
08:39 AM
I read somewhere that you could do the following hybrid deployment.
1) Two dedicated Exchange 2016 "hybrid servers" that are F5 load balanced
2) Create a new namespace called hybrid.contoso.com. (Why would we need a new namespace?)
3) Create internal and external DNS A record for hybrid.contoso.com (same IP addresses?)
4) Publish hybrid.contoso.com through the F5 load balancer. (Is this done on the external F5 or both the internal and external F5. We also have BlueCoat device)
5) Point the existing autodiscover record to hybrid.contoso.com (external). (Again, why would we do this? Will that mean clients need to be re-configured?) Can we just use a CNAME to redirect autodiscover to hybrid?
6) Point the existing EWS services to hybrid.contoso.com (external). (I suppose this is used for the mailbox migration path?)
7) Create two A records called smtp1.contoso.com and smtp2.contoso.com and configure send and receive connectors in Exchange Online to send contoso.com mail to these smart host addresses. (I don't know why this is needed because we are enabling centralised transport and I thought this would be created automatically)
Thank you.
Dijkstra's algorithm has many practical applications: social networking applications, network routers (where a shortest path is also searched), telephone networking, etc.
Dijkstra's algorithm is a greedy algorithm: in each step it picks the locally best option, and these steps combine into the overall shortest path. The algorithm doesn't work for edges with negative weights; in that case another algorithm is used instead, called the Bellman–Ford algorithm.
Let’s create an example weighted graph and explain how the algorithm works.
Let’s say we want to find the shortest path from vertex A to vertex F.
Initially we assign infinite value to all vertices as we don’t know the shortest distance to them. We assign 0 to starting vertex A as distance to self is 0.
We take the starting vertex A and mark it as visited, then we observe all connected vertices. If the value of the visited vertex plus the weight of the edge to a neighbor vertex is smaller than the neighbor's current value, we update the neighbor vertex's value to the visited vertex's value plus that edge weight.
So in our example the value of A is 0 and the distance to B is 1.0. The current value of B is infinity.
(0 + 1.0) < infinity, then we update the value of B to 1.0
The value of A is 0 and the distance to C is 2.0. The current value of C is infinity.
(0 + 2.0) < infinity, then we update the value of C to 2.0
The next step is to chose which vertex to visit. The key point here is to take vertex with minimum value from unvisited vertices. In our case this is vertex B which has value 1.0 and it is less than the value of vertex C which is 2.0. From implementation point of view this is the point where we use priority queue. We enqueue vertices in a priority queue and then when we need the vertex with min value we just call dequeue and we get the minimum vertex.
So now we are on vertex B. We repeat the same operations here: we check the values of all unvisited neighbor vertices and update them if the current value plus the edge weight to the neighbor is smaller than the neighbor's value. Be careful not to perform this operation on the visited neighbors.
So in our case B is connected to A and D. As A is already visited we will observe only D.
D’s current value is infinity, B’s current value is 1.0 and distance to D is 6.5.
(1.0 + 6.5) < infinity, then we update the value of D to 7.5
In the next step we take the vertex with min value from unvisited vertices. In our case this is vertex C
C (2.0) < D (7.5)
C has the following neighbors: A, E, D. From those neighbors only D and E are unvisited so we only observe them.
D has a value of 7.5. C has value of 2.0 and distance to D is 0.5.
(2.0 + 0.5) < 7.5, then we update the value of D to 2.5
E has value of infinity. C has value of 2.0 and the distance to E is 10.5
(2.0 + 10.5) < infinity, then we update the value of E to 12.5
In the next step we take the unvisited vertex with minimum value which is D.
D(2.5) < E(12.5)
D has the following neighbors: C, E, F. From those neighbors only E and F are unvisited so we only observe them.
E has value of 12.5. D has value of 2.5 and the distance to E is 2.0
(2.5 + 2.0) < 12.5, then we update the value of E to 4.5
F has value of infinity, D has value of 2.5 and the distance to F is 0.5
(2.5 + 0.5) < infinity, then we update the value of F to 3.0
Next we take the vertex with minimum value from unvisited vertices which is F.
F(3.0) < E(4.5)
F has two neighbors: D and E. From those neighbors only E is unvisited so we observe only E.
E has value of 4.5. D has value of 3.0 and the distance to E is 1.0
(3.0 + 1.0) < 4.5, then we update the value of E to 4.0
We have only one vertex which is unvisited, we visit vertex E. As all neighbors of E are already visited we are done with the algorithm at this stage. All vertices are visited and they have assigned values.
Note that in this graph example we always ended up updating the neighbor vertices' values, but keep in mind that if:
(current vertex's value + distance from current to neighbor) > neighbor's value
we don't update the neighbor vertex's value.
So looking into the initial graph where we applied Dijkstra’s algorithm we can say that the shortest path from A to F is: A–> C–> D–> F and the total distance is 3.0.
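The walkthrough above can be double-checked with a short sketch, written here in Python for brevity, using heapq as the priority queue; the graph weights are the ones from the figure:

```python
import heapq

def dijkstra(graph, source):
    """Return (dist, parent) maps for all vertices reachable from source."""
    dist = {v: float("inf") for v in graph}
    parent = {source: None}
    dist[source] = 0.0
    pq = [(0.0, source)]
    while pq:
        d, u = heapq.heappop(pq)
        if d > dist[u]:
            continue  # stale queue entry, a shorter path was already found
        for v, w in graph[u]:
            nd = d + w
            if nd < dist[v]:  # relaxation step described above
                dist[v] = nd
                parent[v] = u
                heapq.heappush(pq, (nd, v))
    return dist, parent

# Weighted graph from the walkthrough above (adjacency lists).
graph = {
    "A": [("B", 1.0), ("C", 2.0)],
    "B": [("A", 1.0), ("D", 6.5)],
    "C": [("A", 2.0), ("D", 0.5), ("E", 10.5)],
    "D": [("B", 6.5), ("C", 0.5), ("E", 2.0), ("F", 0.5)],
    "E": [("C", 10.5), ("D", 2.0), ("F", 1.0)],
    "F": [("D", 0.5), ("E", 1.0)],
}

dist, parent = dijkstra(graph, "A")
path, v = [], "F"
while v is not None:  # backtrack parents from F to A
    path.append(v)
    v = parent.get(v)
print(path[::-1], dist["F"])  # ['A', 'C', 'D', 'F'] 3.0
```

This confirms the result derived by hand: A to F goes through C and D with a total distance of 3.0.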
Implementation
Let’s have a look how we can implement Dijkstra’s algorithm with C# code. We we will use a priority queue which we explained in one of the previous articles here
First we will create a Vertex class which will represent our vertices in the graph described above
public class Vertex : IComparable<Vertex>
{
    // We could have defined Vertex<T> as generic to store any data
    // inside it, as you would probably do in the real world; here we
    // omit it for simplicity.
    //public T Value { get; set; }

    public string Name { get; set; }
    public decimal Weight { get; set; }
    public bool IsVisited { get; set; }
    public Vertex Parent { get; set; }

    public Vertex(string name)
    {
        this.Name = name;
    }

    public int CompareTo(Vertex other)
    {
        if (this.Weight < other.Weight)
            return -1;
        else if (this.Weight > other.Weight)
            return 1;
        else
            return 0;
    }
}
We will keep a reference to the parent which last updated the current vertex’s value in order to be able to backtrack the shortest path for any given vertex.
Next we will create class Dijkstra
public class Dijkstra
{
    private List<Vertex> vertices;
    private decimal[][] adjMatrix;

    public Dijkstra(decimal[][] graph, List<Vertex> vertices)
    {
        if (graph.Length != vertices.Count)
        {
            throw new InvalidOperationException("Mismatch between graph and vertices");
        }
        this.adjMatrix = graph;
        this.vertices = vertices;
    }
}
We use adjacency matrix to represent the graph, basically the two most popular approaches to represent a graph are adjacency matrix and adjacency list, you can learn more about this topic here
Next we will add a method to Dijkstra class called ApplyDijkstra which will apply the algorithm.
private void ApplyDijkstra(int source)
{
    PriorityQueue<Vertex> pq = new PriorityQueue<Vertex>(PriroityQueueType.Min);

    // Initialize all vertices with infinite value
    foreach (var v in this.vertices)
    {
        v.Weight = decimal.MaxValue;
    }

    // Source vertex's distance to self is zero
    this.vertices[source].Weight = 0.0m;
    pq.Enqueue(this.vertices[source]);

    while (pq.Count() > 0)
    {
        // Get the vertex with minimum weight
        // from unvisited vertices to process
        var minVertex = pq.Dequeue();
        minVertex.IsVisited = true;
        var minVIndex = vertices.IndexOf(minVertex);

        // Go through all unvisited neighbors
        // and try to update their weight
        for (var i = 0; i < adjMatrix[minVIndex].Length; i++)
        {
            // If weight is zero there is no connection between the two vertices
            if (this.adjMatrix[minVIndex][i] == 0.0m) continue;

            // We are interested only in unvisited vertices
            if (this.vertices[i].IsVisited) continue;

            // Calculate the neighbor's weight by adding the current
            // weight and the distance to the neighbor from the current vertex
            var calculatedWeight = adjMatrix[minVIndex][i] + minVertex.Weight;

            // If the calculated distance/weight is less,
            // we found a better path, so update the neighbor's weight
            if (calculatedWeight < this.vertices[i].Weight)
            {
                this.vertices[i].Weight = calculatedWeight;
                this.vertices[i].Parent = minVertex;

                // Add the neighbor to the queue to process it later
                pq.Enqueue(this.vertices[i]);
            }
        }
    }
}
With this method we traverse all vertices in vertices list and looking at the graph matrix we update the values of each vertex. Next we will add public method which takes source and destination parameters, applies the algorithm and then backtracks the shortest path from distance to source and last it returns an information about the shortest path and total distance.
public string PrintShortestPath(string sourceName, string destinationName)
{
    // We omit guard clauses for simplicity
    var sVert = vertices.First(v => v.Name == sourceName);
    int source = vertices.IndexOf(sVert);
    var dVert = vertices.First(v => v.Name == destinationName);
    int destination = vertices.IndexOf(dVert);

    // Calculate all vertices' weights
    ApplyDijkstra(source);

    var destinationVertex = vertices[destination];

    // Using a stack to backtrack the path from
    // destination to source by iterating parents
    Stack<Vertex> backTrackShortestPath = new Stack<Vertex>();
    bool pathExists = false;
    while (destinationVertex != null)
    {
        if (destinationVertex.Name == sourceName)
        {
            backTrackShortestPath.Push(destinationVertex);
            pathExists = true;
            break;
        }
        backTrackShortestPath.Push(destinationVertex);
        destinationVertex = destinationVertex.Parent;
    }

    if (!pathExists)
    {
        return $"No path exists from vertex:{source} to vertex:{destination}";
    }

    StringBuilder shortestPath = new StringBuilder();
    var currentWeightArrow = "";
    while (backTrackShortestPath.Count > 0)
    {
        var currentVertex = backTrackShortestPath.Pop();
        shortestPath.Append($"{currentWeightArrow}({currentVertex.Name})");
        currentWeightArrow = "--->";
    }

    shortestPath.AppendLine("\nTotal " +
        $"distance from vertex:{sourceName} " +
        $"to vertex:{destinationName} is " +
        $"{vertices[destination].Weight}");

    return shortestPath.ToString();
}
Next let’s construct the graph we explained above and use the code above to find the shortest path
static void Main(string[] args)
{
    List<Vertex> vertices = new List<Vertex>
    {
        new Vertex("A"),
        new Vertex("B"),
        new Vertex("C"),
        new Vertex("D"),
        new Vertex("E"),
        new Vertex("F")
    };

    decimal[][] graph =
    {
        new[] { 0.0m, 1.0m, 2.0m, 0.0m, 0.0m, 0.0m },
        new[] { 1.0m, 0.0m, 0.0m, 6.5m, 0.0m, 0.0m },
        new[] { 2.0m, 0.0m, 0.0m, 0.5m, 10.5m, 0.0m },
        new[] { 0.0m, 6.5m, 0.5m, 0.0m, 2.0m, 0.5m },
        new[] { 0.0m, 0.0m, 10.5m, 2.0m, 0.0m, 1.0m },
        new[] { 0.0m, 0.0m, 0.0m, 0.5m, 1.0m, 0.0m }
    };

    var dijkstra = new Dijkstra(graph, vertices);
    var shortestPath = dijkstra.PrintShortestPath("A", "F");
    Console.WriteLine("The shortest path:");
    Console.WriteLine(shortestPath);
    Console.ReadLine();
}
The result in the console is:

The shortest path:
(A)--->(C)--->(D)--->(F)
Total distance from vertex:A to vertex:F is 3.0
The algorithm's time complexity is O(E log V), where E is the number of edges and V is the number of vertices.
With this we conclude our explanation about Dijkstra’s algorithm. You can read more about the topic here:
14. Networking and communications¶
Individual assignment: design, build, and connect wired or wireless node(s) with network or bus addresses.
Group assignment: Send a message between two projects.
Bluetooth¶
Learning How to Use Bluetooth Modules
Bluetooth is a communication protocol that exchanges data wirelessly. It operates the same way a serial bus works, by sending and receiving data over TX and RX via a COM port, but wirelessly. I want to be able to send messages between a Processing application on my computer and a motor board for my final project via the Bluetooth HC-05 module. So, I started by watching this video:
Note: The video is technically about the HC-06 module, not the HC-05, but the differences (for my purposes) were negligible, and I found this video to be very helpful and easy to comprehend.
I started by following the tutorial with a breadboard and creating basic communication. The wiring diagram from the video:
The image above shows the differences between the HC-05 and the HC-06.
The built-in LED of the bluetooth module should be blinking rapidly. This indicates that it is being powered. Next, I paired the bluetooth module to my Windows Surface computer. The steps to do that are as follows: Go to Settings. Navigate to "Bluetooth & Other Devices."
Click add devices and wait for a black screen to pop up. Click “bluetooth.”
It will buffer for a while. Click on the bluetooth module when it pops up. Enter the default password “1234.”
You need to know what COM port the bluetooth device is associated with. To do this, scroll to the bottom of the page. Select “More Bluetooth Options.” Wait for the pop up menu to appear on screen. Click on the tab that says “COM Ports.” There are 2 COM ports indicated for the HC device. I will use the Outgoing COM port, the one used to transmit data from my computer to the bluetooth chip. The Incoming data COM is for transmitting data from the hardware the bluetooth chip is connected to over to a computer.
code:
import processing.serial.*;

Serial sonic_port;  // outgoing Bluetooth COM port
String string;

void setup() {
  // open the outgoing port
  sonic_port = new Serial(this, "COM11", 9600);
  sonic_port.bufferUntil('\n');
}

void draw() {
  printArray(string);  // prints (string) to the Processing text area
}

void keyPressed() {
  if (key == '1') { sonic_port.write('1'); }
  if (key == '0') { sonic_port.write('0'); }
}

void serialEvent(Serial sonic_port) {
  string = sonic_port.readStringUntil('\n');
}
Arduino Code:
int x;

void setup() {
  Serial.begin(9600);
}

void loop() {
  if (Serial.available() > 0) {
    x = Serial.read();
  }
  // Serial.read() returns a byte, so compare against the
  // character '1', not the string "1"
  if (x == '1') {
    Serial.println("Hello Processing");
  } else if (x == '0') {
    Serial.println("by processing");
  }
}
You can do this in PowerShell by following the instructions in "How do I read/write data from a Serial Port". After you open PowerShell, type in the following line to create a port variable and assign it to the COM port of the Bluetooth module. Note: the Arduino (attached to the same COM port as the Bluetooth module) should not be attached to the computer you paired the Bluetooth module with. Processing should be running on one computer, the one that is paired to the Bluetooth module/Arduino, and the module/Arduino should be plugged into another computer, where you run PowerShell.
Line 1:
$port= new-Object System.IO.Ports.SerialPort COM3,9600,None,8,one, hit enter
This should initialize the port variable and attach it to COM3, the port the Bluetooth module is attached to (the 8 in the command is the data-bit count). Hit enter.
Line 2:
$port.Close()
This should close the port. Hit enter.
Line 3:
$port.open()
Opens the port. Hit enter.
Line 4:
PS> $port.WriteLine("Hello world")
Writes “Hello world” to arduino serial port, which is connected to tx and rx of bluetooth, allowing the bluetooth to transmit this to processing.
It does kind of ignore most of the code that tells the arduino to print “Hello world” if key 1 is pressed, so I just deleted that part. With this test, I really only wanted to test if I could initiate contact anyways. After I closed the port with powershell, It worked:
Connection Setup¶
The main idea of my assignment was to use the ATtiny board that was previously designed and manufactured as the first node, and an Arduino Uno as the second node. With the help of a smartphone and the app designed in the application programming week, I used the HC-06 Bluetooth module to send different numbers (1 for node 1, 2 for node 2); the respective node would blink and send a message via Bluetooth saying which node it is.
As shown in the image below, the ATtiny is connected to the hardware serial port of the Arduino Uno, and the Bluetooth module to a software serial port.
Programming¶
As shown in the code of Node 2, the Bluetooth module communicates over a software serial port named serial (note that the hardware serial is Serial, with a capital ‘S’; hence the difference). When a message (in our case an address) is sent via Bluetooth, Node 2 checks whether the address is ‘2’. If it is, it corresponds to its own address, and it responds by lighting the LED for 2 seconds as well as sending the message “This is Node 2” back over Bluetooth; the message gets printed on the smartphone’s screen. However, if the address sent is not ‘2’, then Node 2 forwards the address to Node 1 (as shown in the code). When it reaches Node 1, if the address is ‘1’, Node 1 lights up the LED for 2 seconds and forwards the message “This is Node 1” to Node 2, which, as seen from the code, also sends it on to the Bluetooth module in order to be printed on the smartphone’s screen. This is achieved by using the String variable Text, which is filled in a for loop in order to combine all the characters sent by Node 1 (notice that the hardware serial is used here, which doesn’t interfere with the software serial of the Bluetooth module). The characters are converted from decimal values to characters using the typecast char().
Finally, the message gets printed via Bluetooth by using serial.print(Text) (notice the small ‘s’, as the Bluetooth uses software serial).
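The addressing logic described above can be sanity-checked without any hardware. This is a rough Python model of the two-node behaviour (all names are mine; it is not the Arduino code):

```python
def handle(node_id, address, forward):
    """Model of one node: respond if addressed, otherwise pass it along."""
    if address == node_id:
        return "This is Node " + node_id
    return forward(address)

def network(address):
    # Node 2 sees the Bluetooth traffic first and forwards unknown
    # addresses to Node 1 over the hardware serial link.
    node1 = lambda addr: handle("1", addr, lambda a: "unknown address")
    return handle("2", address, node1)
```

For example, network("1") models Node 2 forwarding the address to Node 1, which then answers.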
Codes for Node1 and Node2
Output¶
The output is as shown in the video below.
Group assignment¶
For this week the group assignment is to send a message between two projects. “Two projects” means sending a message between any combination of boards, computers, and/or mobile devices.
The group assignment is between a microcontroller board, a smartphone, and a PC. The mobile phone sends Morse code by flashing its flashlight according to a certain message; the microcontroller, with a light sensor attached, interprets the signals and displays them on the PC screen via serial.
Morse code: a fascinating method of communication, used historically as an easy way of encoding and sending text over long distances.
Morse Code Torch
Morse Code Torch is an app that allows you to encode your messages into Morse code using your phone’s flashlight (used on an iPhone here). I tried other applications, but none of them allowed me to adjust the transmission rate, or even mentioned what rate they were using. This application offers that flexibility, which comes in handy when writing the code.
The application is pretty easy to use: you just write a message and the phone’s flashlight flashes in dashes and dots corresponding to the message.
The microcontroller
The microcontroller board used is an Arduino Mega 2560. The Arduino Mega is frankly overkill for this project, as any ATtiny44-based microcontroller would easily be able to do this, but it was the microcontroller board available.
connection
A photoresistor (also known as an LDR, for Light Dependent Resistor) was used to detect the light signals. An LDR works by decreasing its resistance as more light (luminosity) falls on the component’s sensitive surface. The LDR can be connected to a microcontroller board in a voltage divider configuration, where the voltage that appears at the analog input varies depending on the amount of light hitting the LDR.
code
The code is largely based on krobro‘s code, meant for the Duinobot board, with some adjustments.
// morse code
// The value of THRESHOLD is dependent on your ambient light ... it
// may need to be adjusted for your environment.
// original code by krobro
// edited by Duaa

#define DEBUG_PRINTLN(str,data) Serial.print(F("_DEBUG: ")); Serial.print(str); Serial.println(data);
#define DEBUG_PRINTLN(str,data) // DO NOTHING

#define THRESHOLD 200
#define DOT 200
#define lightPin A1

char morse[6];
int morse_index = 0;
boolean _state;
unsigned long _millisStart;
unsigned long _millisEnd;

static char *_list[26] = {".-",   // 'a'
                          "-...", // 'b'
                          "-.-.", // 'c'
                          "-..",  // 'd'
                          ".",    // 'e'
                          "..-.", // 'f'
                          "--.",  // 'g'
                          "....", // 'h'
                          "..",   // 'i'
                          ".---", // 'j'
                          "-.-",  // 'k'
                          ".-..", // 'l'
                          "--",   // 'm'
                          "-.",   // 'n'
                          "---",  // 'o'
                          ".--.", // 'p'
                          "--.-", // 'q'
                          ".-.",  // 'r'
                          "...",  // 's'
                          "-",    // 't'
                          "..-",  // 'u'
                          "...-", // 'v'
                          ".--",  // 'w'
                          "-..-", // 'x'
                          "-.--", // 'y'
                          "--.."};// 'z'

// convert_morse
//
// convert morse code (such as "--..") to a letter ('z')
//
// code - the morse code to convert
//
// Returns - converted letter or '?' on failure
//
// Notes: inefficient linear search ... too slow?
//
char convert_morse(char *code) {
  DEBUG_PRINTLN("CONVERT: ", code);
  for (int x = 0; x < 26; x++) {
    if (strcmp(_list[x], code) == 0) {
      DEBUG_PRINTLN("GOT: ", (char)(x+'a'));
      return (char)(x + 'a');
    }
  }
  // could not decode what was read
  return '?';
}

void setup() {
  Serial.begin(115200);
  delay(2000);
  _millisStart = _millisEnd = 0;
  _state = LOW;
  Serial.println(" ");
  Serial.println("Finished setup ...");
}

void loop() {
  /* uncomment the line below to test and find the light threshold in your
   * environment. Comment it again after adjusting the threshold value. */
  // Serial.println(analogRead(A1));

  boolean state;

  // read the light sensor; sensor can be HIGH or LOW
  if (analogRead(lightPin) <= THRESHOLD) {
    state = LOW;
  } else {
    state = HIGH;
  }

  // check to see if the current state of the sensor
  // matches the last state of the sensor.
  if (state != _state) {
    _millisEnd = millis();
    if (_state == HIGH) {
      // Just finished a HIGH state and transitioned to a LOW state.
      // Did we just process a dot or a dash?
      if (_millisEnd - _millisStart < DOT + DOT/10) {
        morse[morse_index] = '.';
        Serial.print(".");
      } else {
        morse[morse_index] = '-';
        Serial.print("-");
      }
      // just finished reading another dot or dash;
      // append it to the morse string to decode
      morse_index++;
      morse[morse_index] = '\0'; // was '/0' in the original, which is not a terminator
    } else {
      // just finished a LOW state and transitioned to a HIGH state.
      // Was a character, letter or word just finished?
      if (_millisEnd - _millisStart < DOT + DOT/10) {
        // single-dot LOW state ... finished a dot or a dash
      } else if (_millisEnd - _millisStart > DOT*3 - DOT/10) {
        // 3-dot LOW state or 7-dot LOW state ... finished a letter
        char c = convert_morse(morse);
        if (c == '?') {
          Serial.println(" *** Failed to decode properly ... retrying ...");
        } else {
          Serial.println(c);
        }
        morse_index = 0;
        if (_millisEnd - _millisStart > DOT*7 - DOT/10) {
          // 7-dot LOW state ... finished a word
          Serial.print(' ');
        }
      }
    }
    // set the current state
    _state = state;
    _millisStart = _millisEnd;
  }
}
The first step is to check the light threshold for normal room lighting vs. the flashlight pointed at the LDR. This value changes from environment to environment. A quick test showed that in normal conditions the value is around 90, and with the flashlight pointed at it, around 300. So the threshold was set at 200.
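The dot/dash timing rule and the lookup table in the sketch above can be exercised without an LDR or a flashlight. Here is a hedged Python re-implementation of just the decoding logic (function names are mine):

```python
# Same letter table as the Arduino sketch, keyed by the morse pattern.
MORSE = {
    ".-": "a", "-...": "b", "-.-.": "c", "-..": "d", ".": "e",
    "..-.": "f", "--.": "g", "....": "h", "..": "i", ".---": "j",
    "-.-": "k", ".-..": "l", "--": "m", "-.": "n", "---": "o",
    ".--.": "p", "--.-": "q", ".-.": "r", "...": "s", "-": "t",
    "..-": "u", "...-": "v", ".--": "w", "-..-": "x", "-.--": "y",
    "--..": "z",
}

def classify(high_ms, dot=200):
    """Mirror of the Arduino rule: a HIGH shorter than 1.1 dots is a dot."""
    return "." if high_ms < dot + dot // 10 else "-"

def decode(code):
    """Decode space-separated morse letters, '?' on failure."""
    return "".join(MORSE.get(letter, "?") for letter in code.split())
```

With DOT = 200 ms, a 150 ms flash classifies as a dot and a 600 ms flash as a dash, matching the thresholds in the sketch.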
Results
With that, the microcontroller was ready to decode morse code. Here is a video of the microcontroller decoding “hello world” in morse code.
Hero Shot!!
I think that the fields available for extending modules are in the namespace "Telerik.Sitefinity.Web.UI.Fields". You can find the "ImageField" class there, but if you try to add it, it throws an exception saying that the "FieldDefinitionElement" attribute is missing. Also, in the control template "Telerik.Sitefinity.Resources.Templates.Fields.ImageField.ascx" there is an alert('In process of implementation.'), so I guess the field is not complete. I need to add an image and video selector for some of the existing modules, like you, and I would welcome a solution from Telerik instead of taking the hard and painful path of workarounds.
Functional testing for Web applications
Using Selenium, Windmill, and twill to test Google App Engine applications
As applications move further away from an individually hosted model into the cloud, reliability and predictability become even more important. The cloud moves many factors outside of our control, so having solid, tested code is more important than ever before.
Most developers, whether they test their code or not, have at least gotten a lecture about testing code at some point. Web developers—more so than most developers—need to deliver applications quickly, so unit testing often takes a backseat to a deadline. In some circles, it is never all right to skip unit testing any code, as a unit test tests actual components of the application and provides a way of explaining the internal workings of the code to other developers. Functionally testing your Web application is quite a different story though, and for some reason hasn't gotten as much of a buzz.
In this article, I explore several different tools that help you perform functional testing on your Web application. I use Google App Engine here, but the testing techniques will apply to any Web application. I will also argue that it is never all right to forgo functional testing, because it is so quick and so easy to perform, at least a minimal level of functional testing. In this article, I explore three functional testing tools: Windmill, Selenium, and twill. Both Windmill, and Selenium are functional Web testing frameworks that allow automation of user interface testing in a Web browser for JavaScript and Asynchronous JavaScript and XML (Ajax) applications. Twill is a lightweight Web scripting tool that deals with non-JavaScript functional testing.
Functional testing with twill
I'll start the discussion of functional testing with the lightweight command-line Web browser and scripting tool, twill, and a default Google App Engine project.
The first thing you do is establish a connection to your application. To do that, use the go command, as shown in Listing 1. Note that if you enter show, it then shows the actual output.
Listing 1. Sample show output
# twill-sh -= Welcome to twill! =- >> go localhost:8087 ==> at current page: >> show Hello World! current page:
Another interesting feature of twill is its ability to check HTTP status codes.
Listing 2. Sample twill output of http status codes.
> go ==> at current page: >> code 200 current page: >> code 400 ERROR: code is 200 != 400 current page:
As you can see from the output of the command, it only returns an error if it gets a status code that doesn't match what is expected. Twill also supports the ability to run these actions as a script. You can name the file anything you want and pass it to twill-sh. If you place those commands in a file called test_twill.script, you will see something like Listing 3.
Listing 3. More sample twill output
# twill-sh test_twill.script >> EXECUTING FILE test_twill.script AT LINE: test_twill.script:0 ==> at AT LINE: test_twill.script:1 -- 1 of 1 files SUCCEEDED.
Twill is a handy tool for automated testing of the non-JavaScript portions of your Web application, and it has more features than what I covered, such as the ability to work with variables, cookies, forms, http authentication, and more. If you would like to know about more advanced use cases, see Related topics.
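If you only need twill's go/code style of status checking inside a plain Python test, the standard library is enough. Here is a minimal sketch (not twill itself; the handler and all names are mine) that spins up a throwaway local server and checks its status code, mimicking Listings 1 and 2:

```python
import threading
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

class Hello(BaseHTTPRequestHandler):
    def do_GET(self):
        # Serve the same "Hello World!" page as the default App Engine project.
        self.send_response(200)
        self.send_header("Content-Type", "text/plain")
        self.end_headers()
        self.wfile.write(b"Hello World!")

    def log_message(self, fmt, *args):
        pass  # silence per-request logging during tests

def check_code(url, expected):
    """Mimic twill's `code` command: does the response status match?"""
    with urllib.request.urlopen(url) as resp:
        return resp.status == expected

# Port 0 asks the OS for any free port, so the test never collides.
server = HTTPServer(("127.0.0.1", 0), Hello)
threading.Thread(target=server.serve_forever, daemon=True).start()
url = "http://127.0.0.1:%d/" % server.server_port
```

As with twill, check_code(url, 200) succeeds while check_code(url, 400) reports a mismatch.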
Functional testing with Selenium
Selenium is a heavier-weight testing tool that allows cross-platform testing in the browser. Writing cross-platform JavaScript code is a regrettable cross that Web developers must bear. Writing Web applications is difficult enough, and just when you think you're done, you inevitably run into some obscure bug that is only present on one browser.
Unfortunately, a unit test does not catch this type of bug. In fact, these bugs often make a Web developer skeptical of doing any testing in the first place. They figure that testing is yet another hoop to jump through, that it gets in the way of their already tight deadline, and it doesn't even reliably work. So why bother?
Selenium (as well as other browser testing tools) is an answer to this problem. You can write functional tests that run in each browser, and then implement some form of continuous integration system that runs the functional tests upon each check-in of the source code. This allows potential bugs in the browser to quickly get caught, and has an immediate payoff.
One of the most basic things Selenium does is record browser actions so they can be replayed as a test. Looking at the sample in Figure 1, you see a window where you place a base url. From there you simply record the actions of the Web site you are testing. In this case, I am testing a Google App Engine site:, which is a demo Ajax Python interpreter. After the session has been recorded, you can then export the tests in Python and run them using Selenium.
Figure 1. Selenium IDE window
To run the tests, simply save your tests as Python code (or whatever language you choose), download and run the Selenium RC test server, and then run your tests. Listing 4 shows an example of what that test might look like.
Listing 4. Example Selenium test
from selenium import selenium import unittest, time, re class NewTest(unittest.TestCase): def setUp(self): self.verificationErrors = [] self.selenium = selenium("localhost", 4444, "*chrome", "") self.selenium.start() def test_new(self): sel = self.selenium sel.open("/") sel.click("link=source") sel.wait_for_page_to_load("30000") def tearDown(self): self.selenium.stop() self.assertEqual([], self.verificationErrors) if __name__ == "__main__": unittest.main()
You can launch Selenium RC, which acts as a proxy for testing, on multiple browsers, and then run your functional test. Listing 5 shows what the output of Selenium RC looks like.
Listing 5. Sample of Selenium RC output
# java -jar selenium-server.jar 01:18:47.909 INFO - Java: Apple Inc. 1.5.0_16-133 01:18:47.910 INFO - OS: Mac OS X 10.5.6 i386 01:18:47.915 INFO - v1.0-beta-1 [2201], [1994] 01:18:48.044 INFO - Version Jetty/5.1.x 01:18:48.045 INFO - Started HttpContext[/,/] 01:18:48.047 INFO - Started HttpContext[/selenium-server] 01:18:48.047 INFO - Started HttpContext[/selenium-server/driver] 01:18:48.055 INFO - Started SocketListener on 0.0.0.0:4444 [output suppressed for space]
I encourage you to read the full Selenium RC FAQ to understand how it interacts with multiple browsers (see Related topics). As you can see, using Selenium to automate cross-platform functional tests is quite easy, and there is support for many languages including plain HTML, Java™ code, C#, Perl, PHP, Python, and Ruby.
Windmill
Windmill is a Web testing framework that is very similar to Selenium, although there are some differences. One main difference is that Windmill is written in Python and JavaScript, and was originally developed to test the Chandler Web client, an ambitious open source competitor to Microsoft® Outlook, but has continued on as an independent open source project.
To get started using Windmill, simply run the command sudo easy_install windmill. This installs the Windmill testing framework. Next, if you enter windmill firefox, the Windmill IDE opens (see Figure 2). Then the testing page opens, as shown in Figure 3. In Figure 2 you can see that the IDE records actions much like Selenium. You can then save the test file, which looks like the output in Listing 6.
Figure 2. Windmill IDE window
Figure 3. Windmill tutorial
Listing 6. Sample test file output
# Generated by the windmill services transformer from windmill.authoring import WindmillTestClient def test_recordingSuite0(): client = WindmillTestClient(__name__) client.click(id=u'recordedClickId') client.click(id=u'textFieldOne') client.type(text=u'foo bar', id=u'textFieldOne') client.click(id=u'btnSub')
There is an option to save a test as JSON or Python, and in this case I save it as Python. Next, to actually run the test, you simply run the test file from the command line with the test option. Listing 7 shows how this looks.
Listing 7. Running the test
# windmill firefox test=windmill_test.py Started ['/Applications/Firefox.app/Contents/MacOS/firefox-bin', '-profile', '/var/folders/1K/ 1KgyCzqJHButzT6vq8vwHU+++TI/-Tmp-/tmp_ovtnN.mozrunner', ''] Server running...
You can easily adapt this example to your Google App Engine application or any other Web application, for that matter. Windmill also has great documentation that explains how to do more advanced tasks such as extending windmill with plug-ins.
Conclusion
Functional testing is essential to the process of Web development. Without functional testing, Web development becomes a guessing game that can be filled with frantic, error-prone deployments and refactoring.
So, should functional testing be mandatory for all Web developers? I would argue, yes, all Web applications should be tested, especially if they are destined for a cloud environment. Lack of functional testing for Web applications is a definite red flag, considering how easy it is to do at least a minimum of testing with Selenium, Windmill, or twill. To borrow a tag line from the author of twill, Dr. Titus Brown, "if you don't test your code, how can you say it works?".
Downloadable resources
- PDF of this content
- Sample Code For This Article (functional_testing_code.zip | 7KB)
Related topics
- Python For Unix and Linux System Administration: Use this book to develop your own set of command-line utilities with Python to tackle a wide range of problems.
- "Getting Started With Google App Engine:" This article gives you a good introduction to the Google App Engine.
- Google App Engine in Action: Use this book to create applications with a custom feel to them using Python.
- Google App Engine: Read the official documentation.
- Hello World Google App Engine: Read a more detailed article on getting a "Hello World" application running from scratch.
- Python Tutorial: Take this tutorial to learn the basic concepts and features of the Python language and system.
- Selenium RC Server: Get more information on the Selenium RC Server.
- twill: Get more information on the twill scripting language.
- Windmill- Test Automation : Find out more about the Windmill testing framework.
- An Introduction to Testing Web Applications With twill and Selenium: Use this resource to quickly start writing your own functional tests.
- developerWorks Cloud Computing space: Discover why cloud computing is important, how to get started, and where to learn more about it.
The QMovie class is a convenience class for playing movies with QImageReader. More...
#include <QMovie>
Inherits QObject.
The QMovie class is a convenience class for playing movies with QImageReader.
This enum describes the different cache modes of QMovie.
This enum describes the different states of QMovie. QMovie can be instructed to cache the frames, at the added memory cost of keeping the frames in memory for the lifetime of the QMovie.
Access functions:
See also QMovie::CacheMode.
This property holds the movie's speed.
The speed is measured in percentage of the original movie speed. The default speed is 100%. Example:
QMovie movie("racecar.gif"); movie.setSpeed(200); // 2x speed
Access functions:
Constructs a QMovie object, passing the parent object to QObject's constructor.
See also setFileName(), setDevice(), and setFormat().
This signal is emitted when the frame number has changed to frameNumber. You can call currentImage() or currentPixmap() to get a copy of the frame.
This function was introduced in Qt 4.1.
Returns the number of frames in the movie.
Certain animation formats do not support this feature, in which case 0 is returned.
Returns the scaled size of frames.
This function was introduced in Qt 4.1.
See also setScaledSize() and QImageReader::scaledSize().
If paused is true, QMovie will enter Paused state and emit stateChanged(Paused); otherwise it will enter Running state and emit stateChanged(Running).
See also paused() and state().
Sets the scaled frame size to size.
This function was introduced in Qt 4.1.
See also scaledSize() and QImageReader::setScaledSize().
This signal is emitted after QMovie::start() has been called, and QMovie has entered QMovie::Running state.
Returns the current state of QMovie.
See also MovieState and stateChanged().
This signal is emitted every time the state of the movie changes. The new state is specified by state.
See also QMovie::state().
Stops the movie. QMovie enters NotRunning state, and stops emitting updated() and resized(). If start() is called again, the movie will restart from the beginning.
If QMovie is already in the NotRunning state, this function does nothing.
See also start() and setPaused().
Returns the list of image formats supported by QMovie.
This function was introduced in Qt 4.1.
See also QImageReader::supportedImageFormats().
This signal is emitted when the rect rect in the current frame has been updated. You can call currentImage() or currentPixmap() to get a copy of the updated frame.
I have Ubuntu 12.04 (Precise Pangolin) installed, along with some packages, including PIL. Now I want to use Pillow, but it cannot be installed at the same time as PIL.
I looked at virtualenv, but there are other packages I don't want to have to install.
Is there another way to set this up without the clash?
You should install Pillow from the Git clone with (choose
/opt/pillow as you want):
python setup.py install --prefix /opt/pillow
And then it include in your code,
import sys
sys.path.insert(0, "/opt/pillow")
before doing the import of Pillow with
from PIL import Image
This will search the
/opt/pillow directory first and anything without that insert will never see Pillow.
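The reason this works is that Python searches sys.path in order, so an entry inserted at position 0 wins over every other location. A minimal, self-contained demonstration of that precedence (the module name shadow_demo is made up for illustration; it stands in for Pillow's /opt/pillow install):

```python
import os
import sys
import tempfile

# Create a throwaway directory containing a dummy module, playing the
# role of the /opt/pillow prefix from the answer above.
workdir = tempfile.mkdtemp()
with open(os.path.join(workdir, "shadow_demo.py"), "w") as f:
    f.write("WHERE = 'opt-style directory'\n")

# Position 0 means this directory is searched before site-packages,
# so this copy of the module shadows any system-wide one.
sys.path.insert(0, workdir)
import shadow_demo
```

After the insert, importing shadow_demo loads the copy from the throwaway directory, exactly as `from PIL import Image` would load the /opt/pillow copy.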
PRANG - XML graph engine - XML to Moose objects and back!
# step 1. define a common role for nodes in your XML language
package XML::Language::Node;
use Moose::Role;
sub xmlns { "" }

# step 2. define the root node(s) of your language
package XML::Language;
use Moose;
use PRANG::Graph;

sub root_element { "envy" }

has_attr 'laziness' =>
    is => "ro",
    isa => "Str",
    ;

has_element 'lust' =>
    is => "ro",
    isa => "XML::Language::Lust",
    ;

with 'PRANG::Graph', 'XML::Language::Node';

# step 3. define further elements in your schema
package XML::Language::Lust;
use Moose;
use PRANG::Graph;
use PRANG::XMLSchema::Types;

has_attr 'gluttony' =>
    is => "ro",
    isa => "PRANG::XMLSchema::byte",
    ;

has_element 'sins' =>
    is => "ro",
    isa => "ArrayRef[XML::Language::Lust|Str]",
    xml_nodeName => {
        'lust' => 'XML::Language::Lust',
        'anger' => 'Str',
    },
    ;

has_element 'greed' =>
    is => "ro",
    isa => "Bool",
    ;

with 'XML::Language::Node';

# step 4a. parse!
my $object = XML::Language->parse(<<XML);
<envy laziness="Very">
  <lust gluttony="127">
    <anger>You wouldn't like me when I'm angry</anger>
    <lust>
      <anger>You've done it now!</anger>
      <greed />
    </lust>
  </lust>
</envy>
XML

# Parsing the above would give you the same structure as this:
XML::Language->new(
    laziness => "Very",
    lust => XML::Language::Lust->new(
        gluttony => 127,
        sins => [
            "You wouldn't like me when I'm angry",
            XML::Language::Lust->new(
                sins => [ "You've done it now!" ],
                greed => 1,
            ),
        ],
    ),
);

# step 4b. emit!
$format = 1;
print $object->to_xml($format);
PRANG is an XML Graph engine, which provides post-schema validation objects (PSVO).
It is designed for implementing XML languages for which a description of the valid sets of XML documents is available - for example, a DTD, W3C XML Schema or Relax specification. With PRANG (and, like XML::Toolkit), your class structure is your XML Graph.
XML namespaces are supported, and the module tries to make many XML conventions as convenient as possible in the generated classes. This includes XML data (elements with no attributes and textnode contents), and presence elements (empty elements with no attributes which indicate something). It also supports mixed and unprocessed portions of the XML, and "pluggable" specifications.
Currently, these must be manually constructed as in the example - details on this are to be found on the PRANG::Graph::Meta::Element and PRANG::Graph::Meta::Attr perldoc. There is also a cookbook of examples - see PRANG::Cookbook.
However, eventually it should be possible to automatically process schema documents to produce a class structure (see "KNOWN LIMITATIONS").
Once the PRANG::Graph has been built, you can:
The PRANG::Marshaller takes any well-formed document parsable by XML::LibXML, and constructs a corresponding set of Moose objects.
A shortcut is available via the
parse method on the starting point of the graph (indicated by using the role 'PRANG::Graph').
You can also parse documents which have multiple start nodes, by defining a role which the concrete instances use.
e.g., for the example in the SYNOPSIS, define a role 'XML::Language::Family'; the root node will be parsed by the class with a matching
root_element (and
xmlns) value.
package XML::Language::Family;
use Moose::Role;
with 'PRANG::Graph';

package XML::Language;
use Moose;
with 'XML::Language::Family';

# later ...
my $marshaller = PRANG::Marshaller->get("XML::Language::Family");
my $object = $marshaller->parse($xml);
note the
PRANG::Marshaller API will probably go away in a future release, once the "parse" role method is made to work correctly.
A PRANG::Graph structure also has a
to_xml method, which emits XML (optionally indented).
There are some (well, one at the moment) global options which can be set via:
$PRANG::OPTION = 'value';
Setting this to true will emit all text nodes as CDATA elements rather than just text. Default is false.
Note, for parsing, CDATA and text are treated as the same.
The term XML Graph is from the paper, "XML Graphs in Program Analysis", Møller and Schwartzbach (2007).
The difference between an Graph and a Tree, is that a Graph can contain cycles, whereas a Tree cannot - there is only one correct way to follow a tree, whereas there can be many correct ways to follow a graph.
So, XML documents are considered to be trees, and the mechanisms which describe allowable forms for those trees XML graphs.
They are graphs, because they can contain cycles - cycles in an XML graph might point back to the same element (indicating an "any number of this element" condition), or point to a different element closer to the initial element (indicating an arbitrary level of nesting).
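To make the cycle idea concrete: in the SYNOPSIS, a <lust> element may contain another <lust>, so a single recursive rule handles any nesting depth. The following is a rough Python sketch of that recursion using xml.etree (this is an illustration of the graph-cycle idea, not PRANG; the helper name is mine):

```python
import xml.etree.ElementTree as ET

def to_dict(elem):
    """Recursively map an element to a plain dict. The self-recursion is
    the 'cycle' in the XML graph: one rule covers every nesting level."""
    node = dict(elem.attrib)
    for child in elem:
        # Leaves with no attributes or children become their text content;
        # everything else recurses through the same rule.
        value = to_dict(child) if len(child) or child.attrib else (child.text or "")
        node.setdefault(child.tag, []).append(value)
    return node

xml = """<envy laziness="Very">
  <lust gluttony="127">
    <anger>grr</anger>
    <lust><anger>again</anger><greed/></lust>
  </lust>
</envy>"""
tree = to_dict(ET.fromstring(xml))
```

The nested <lust> inside <lust> is handled by the same recursive call as the outer one, which is exactly what following a cycle in the graph means.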
Support for these features will be considered as tuits allow. If you can create a patch for any of these features which meets the coding standards, they are very likely to be accepted. The authors will provide guidance and/or assistance through this process, time and patience permitting.
Validating/Parsing schema documents, and transforming those to a PRANG::Graph structure, could well be a valid approach to address these issues and may be addressed by later releases and/or modules which implement those XML languages using PRANG.
This is a bit more shonky an approach, but can be very useful for ad-hoc XML conventions for which no rigid definition can be found. Currently, XML::Toolkit is the best module for this.
It's possible that at a given point in time, a graph may be followed in more than one direction, and the correct direction cannot be determined based on the currently input token. However, few if any XML languages are this indeterminate, so while many schema languages may allow this to be specified, they should (hopefully) not correspond to major standards.
Source code is available from Catalyst:
git://git.catalyst.net.nz/PRANG.git
And Github:
git://github.com/catalyst/PRANG.git
Please see the file SubmittingPatches for information on preferred submission format.
Suggested avenues for support:
Development commissioned by NZ Registry Services, and carried out by Catalyst IT -
Copyright 2009, 2010, NZ Registry Services. This module is licensed under the Artistic License v2.0, which permits relicensing under other Free Software licenses.
I am on Dynamo 1.0. Can a "Python Script" node in Dynamo be used to execute notepad.exe with a text file preloaded? And if so, how? Thanks.
Are you trying to use some Python code that is saved as TXT and then run it in Dynamo? Is that what the question is? If that's the question I suggest that you download a package called Ladybug and see how that is set up because I think @Mostapha is doing exactly that.
Ps. I don't think you want to save that code as TXT. I would rather use the proper PY extension, but it should not matter if the TXT is properly formatted.
Hi Konrad. Thanks for the reply. Ideally I would like the ability to execute notepad.exe with an already existing text file, so I could refer to it and edit that text file from within Dynamo. I know that Excel can read and write from Dynamo, but that seems like overkill for just a few paragraphs.
You need to be clearer about what you mean by "execute". Do you want to use Dynamo to launch Notepad.exe and then write some lines or read some lines from it?
Please be specific about what you want to achieve, show a minimal sample file or a desired result.
Hi Nigel, did you try using Notepad++? It's good for writing and modifying programming scripts.
Hi All. Yes, I would like Dynamo (within Revit) to launch Notepad.exe with a pre-existing text file open within Notepad so I can edit the text file. Then I would view and possibly even edit the text file. Then I would close the text file (and Notepad would exit). I would be left with Dynamo still open with the current graph (.dyn) still loaded. I see the use of this (if it's possible) as something used for reference, e.g. a list of sizes or keyboard shortcuts etc., and obviously only maintain one copy of the information (the text file). There are a lot of easier ways or workarounds, but I am curious if Dynamo would be suited. I am not needing to write the contents of the text file for further manipulation or use within Dynamo.
You can simply use notepad.exe and pass the file name. The lines below open up C:\test.txt
import os
os.system("notepad {}".format(r"C:\test.txt"))
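One caveat worth noting: os.system blocks the caller until Notepad is closed. If you want the Dynamo graph to keep running while the file is open, subprocess.Popen launches the editor asynchronously. A minimal sketch (the notepad.exe default, and using the Python interpreter's --version flag as a stand-in so the demo runs anywhere, are my illustrative assumptions, not part of the original post):

```python
import subprocess
import sys

def open_in_editor(path, editor="notepad.exe"):
    # Popen returns immediately, so the caller keeps running,
    # unlike os.system, which waits for the editor to exit.
    return subprocess.Popen([editor, path])

# Stand-in demo: "open" the interpreter's version flag instead of
# a real text file, so this sketch works on any machine.
proc = open_in_editor("--version", editor=sys.executable)
proc.wait()  # waiting here only so the demo exits cleanly
```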
Mostapha, thanks for the reply. I have copied your 2 lines into a "python script" node and when I run the (dyn) graph, the node turns orange with this error message:
Warning: IronPythonEvaluator.EvaluateIronPythonScript operation failed. Traceback (most recent call last): File "", line 1, in ImportError: No module named os
Not sure how to proceed further. Thanks.
That's because you'll need to add the path to the python library first. It might be simpler if you use the dotNet equivalent:
This is how you add the path, but this only works if python is installed in the default location.
Hi Einar, thank you. Your code with only 3 nodes above works great. It loads up Notepad with the selected file already open. My first use of it will be to maintain a list of Dynamo keyboard shortcuts. Thank you to all who contributed.
A node to take specifically formatted lists, convert them into strings and write them to a text file; then open the text file asynchronously to allow the script to continue.
py.INFORM.py
import clr
clr.AddReference("RevitAPIUI")
from Autodesk.Revit.UI import TaskDialog ##Import task dialog
##from datetime import datetime ##Formatting date time strings for files
import time ##Time as in NOW or time string format functions
import sys ##Validate this is here and available#########
sys.path.append(r'C:\Program Files (x86)\IronPython 2.7\Lib')
import subprocess ##Launching asynchronus processess like notepad
import os ##Operating system - CASE SENSITIVE
msg=IN[0] ##Data in 0
##msgbox=TaskDialog ##Task Dialog
astr="" ##STR for formatting
for i in msg: ##BIG MESSAGE LIST
sstr="" ##SUB STRING
for j in i:
for k in j:
sstr=sstr + str(k) + "\t" ##ADD TAB SEPARATORS
sstr=sstr + "\n" ##ADD CARRIAGE RETURN to substring
astr=astr + sstr + "\n" ##ADD substring to main and spacer carriage return
##Set file name using date-time in C:\TEMP####
fn="C:\\temp\\DynamoError-" + time.strftime("%Y-%m-%dt%H%M%Z") + ".txt"
f = open(fn, "w+") ##Open file for write
f.write(astr) ##Write out ASTR
subprocess.Popen('notepad {}'.format(fn)) ##Open the file in Notepad Asynchronously
##msgbox.Show("Message", astr ) ##Will show message dialog in Revit - WARNING - Synchronous - script will have to wait.
OUT=astr ##Pass formatted string out
On Sun, 4 Oct 1998 address@hidden wrote:
> > My win32 problem was as simple as the value not being memcpy'd into the
> > soc_in structure. I just figured it out though so now I need to thread
> > through why. My brain is molasses though...
> ok - then we'll see a patch soon, I hope. (That will leave the NSL_FORK
> problems that Bela's looking into).

Well, the following gets me lookups, but I am not sure it is "the right thing to do."

*** HTTCP.org   Sun Oct  4 12:38:14 1998
--- HTTCP.c     Sun Oct  4 12:06:02 1998
***************
*** 646,651 ****
--- 646,653 ----
  #endif /* !NSL_FORK */
  #ifndef _WINDOWS_NSL
      FREE(host);
+ #else
+     memcpy((void *)&soc_in->sin_addr, phost->h_addr, phost->h_length);
  #endif /* _WINDOWS_NSL */
  }

--
Thomas E. Dickey
address@hidden
In this section, you will learn how to search for a file on the server using Java.
Find File on FTP server: If you want to check the existence of any file on the server, the FTPClient class provides various methods to find it. For that we need the name of the file we want to search for.
Here are some steps to check the existence of a file -
1. List all the files in the ftp server by using method listFiles() as - FTPFile[] files = client.listFiles();
2. Take the searched file name as a String variable: String checkFile = "FtpTest.txt";
3. Iterate over all the files on the FTP server and check whether each matches the searched file name: if (fileName.equals(checkFile))
Example : This example demonstrates the above steps -
import java.io.IOException;
import org.apache.commons.net.ftp.FTPClient;
import org.apache.commons.net.ftp.FTPFile;
import org.apache.commons.net.ftp.FTPConnectionClosedException;

class FtpCheckFile {
    public static void main(String[] args) throws IOException {
        FTPClient client = new FTPClient();
        try {
            // The server address and credentials below are placeholders;
            // the connection details were lost from the original listing.
            client.connect("ftp.example.com");
            client.login("user", "password");
            System.out.println("User successfully logged in.");
            FTPFile[] files = client.listFiles();
            int flag = 0;
            String checkFile = "FtpTest.txt";
            System.out.println("Checking file existence on the server ... ");
            for (FTPFile file : files) {
                String fileName = file.getName();
                if (fileName.equals(checkFile)) {
                    flag = 1;
                    break;
                } else {
                    flag = 0;
                }
            }
            if (flag == 1) {
                System.out.println("File exists on FTP server. ");
            } else {
                System.out.println("Sorry! your file is not presented on Ftp server.");
            }
        } catch (FTPConnectionClosedException e) {
            System.out.println(e);
        } finally {
            try {
                client.disconnect();
            } catch (IOException e) {
                System.out.println(e);
            }
        }
    }
}
Output :
User successfully logged in. Checking file existence on the server ... File exists on FTP server.
On Friday 11 July 2008 23:12:21 Mathieu Desnoyers wrote:
> * Rusty Russell (rusty@rustcorp.com.au) wrote:
> > On Thursday 10 July 2008 10:30:37 Max Krasnyansky wrote:
> > > You mentioned (in private conversation) that you were going to add some
> > > logic that checks whether CPU is running user-space code and not
> > > holding any locks to avoid scheduling stop_machine thread on it.
> >
> > Will play with it again and report,
> > Rusty.
>
> Hrm, I must be missing something, but using the fact that other CPUs are
> running in userspace as a guarantee that they are not running within
> critical kernel sections seems kind of.. racy ? I'd like to have a look
> at this experimental patch : does it inhibit interrupts somehow and/or
> does it take control of userspace processes doing system calls at that
> precise moment ?

The idea was to try this ipi-to-all-cpus and fall back on the current thread
method if it doesn't work.  I suspect it will succeed in the vast majority of
cases (with CONFIG_PREEMPT, we can also let the function execute if in-kernel
but preemptible).  Something like this:

+struct ipi_data {
+	atomic_t acked;
+	atomic_t failed;
+	unsigned int cpu;
+	int fnret;
+	int (*fn)(void *data);
+	void *data;
+};
+
+static void ipi_func(void *info)
+{
+	struct ipi_data *ipi = info;
+	bool ok = false;
+
+	printk("get_irq_regs(%i) = %p\n", smp_processor_id(), get_irq_regs());
+	goto fail;
+
+	if (user_mode(get_irq_regs()))
+		ok = true;
+	else {
+#ifdef CONFIG_PREEMPT
+		/* We're in an interrupt, ok, but were we preemptible
+		 * before that? */
+		if ((hardirq_count() >> HARDIRQ_SHIFT) == 1) {
+			int prev = preempt_count() & ~HARDIRQ_MASK;
+			if ((prev & ~PREEMPT_ACTIVE) == PREEMPT_INATOMIC_BASE)
+				ok = true;
+		}
+#endif
+	}
+
+fail:
+	if (!ok) {
+		/* Mark our failure before acking. */
+		atomic_inc(&ipi->failed);
+		wmb();
+	}
+
+	if (smp_processor_id() != ipi->cpu) {
+		/* Wait for cpu to call function (last to ack). */
+		atomic_inc(&ipi->acked);
+		while (atomic_read(&ipi->acked) != num_online_cpus())
+			cpu_relax();
+	} else {
+		while (atomic_read(&ipi->acked) != num_online_cpus() - 1)
+			cpu_relax();
+		/* Must read acked before failed. */
+		rmb();
+
+		/* Call function if noone failed. */
+		if (atomic_read(&ipi->failed) == 0)
+			ipi->fnret = ipi->fn(ipi->data);
+		atomic_inc(&ipi->acked);
+	}
+}
+
+static bool try_ipi_stop(int (*fn)(void *), void *data, unsigned int cpu,
+			 int *ret)
+{
+	struct ipi_data ipi;
+
+	/* If they don't care which cpu fn runs on, just pick one. */
+	if (cpu == NR_CPUS)
+		ipi.cpu = any_online_cpu(cpu_online_map);
+	else
+		ipi.cpu = cpu;
+
+	atomic_set(&ipi.acked, 0);
+	atomic_set(&ipi.failed, 0);
+	ipi.fn = fn;
+	ipi.data = data;
+	ipi.fnret = 0;
+
+	smp_call_function(ipi_func, &ipi, 0, 1);
+
+	printk("stop_machine: ipi acked %u failed %u\n",
+	       atomic_read(&ipi.acked), atomic_read(&ipi.failed));
+	*ret = ipi.fnret;
+	return (atomic_read(&ipi.failed) == 0);
+}

Hope that clarifies!
Rusty.
Wouldn’t it be nice to have a search algorithm to locate those things you need in your day-to-day life? You want to be able to quickly find car keys, books, pens, chargers, specific contacts from our phone records. That is what search algorithms do in a computer. In this article we look at Python search algorithms that retrieve information stored within some data structure or a specific search space.
Searching is the technique of selecting specific data from a collection of data based on some condition. It is the process of finding a particular item in a collection. It can also be described as:
“Given a list of values, a function that compares two values and a desired value, find the position of the desired value in the list.”
In this post, we’ll focus on two main search algorithms:
- Linear search – checks the items in sequence until the desired item is found
- Binary search – requires a sorted sequence in the list, and checks for the value in the middle of the list, repeatedly discarding the half of the list that contains values that are definitely either all larger or all smaller than the desired value.
Linear Search in Python
Linear or sequential search is the simplest solution. It simply iterates over the list, one item at a time, until the specific item is found or all items have been examined.
In Python, a target item can be found in a sequence using the in operator:
if key in theArray:
    print("The key is in the array.")
else:
    print("The key is not in the array.")
To determine if an item is in the array, the search begins with the value in the first element. If the first element does not contain the target value, the next element in sequential order is compared to the item. This process is repeated until the item is found. If the item is not in the array, we still have to iterate over the list comparing every item in the array to the target value. This is because it cannot be determined that the item is not in the sequence until the entire array has been traversed.
def Linear_Search(arr, target):
    # iterate to the end of the list
    for i in range(len(arr)):
        # match found, return True
        if arr[i] == target:
            return True
    return False
Searching a Sorted Sequence using Linear Search
If we know the list is ordered, we only have to check until we have found the element or reached an element greater than it.
def ordered_seq_search(arr, target):
    # Start at position 0
    pos = 0
    # found becomes True if the item is in the list
    found = False
    # Stop marker
    stopped = False
    # go until end of list
    while pos < len(arr) and not found and not stopped:
        # If match
        if arr[pos] == target:
            found = True
        else:
            # Check if element is greater
            if arr[pos] > target:
                stopped = True
            # Otherwise move on
            else:
                pos = pos + 1
    return found
Finding the Smallest Item in a List
Instead of searching for a specific value in an unsorted sequence, suppose we wanted to search for the smallest value, which is equivalent to applying Python’s min() function to the sequence.
In the Python search algorithm, a linear search is performed as before. But this time we must keep track of the smallest value found for each iteration through the loop, as shown in the sample code:
def smallestItem(arr):
    # Assume the first item is the smallest value.
    smallest = arr[0]
    # loop through the rest of the list, tracking any smaller value
    for i in range(1, len(arr)):
        if arr[i] < smallest:
            smallest = arr[i]
    return smallest  # return the smallest value
Binary Search in Python
Binary Search follows a divide-and-conquer methodology. It is faster than linear search but requires that the array be sorted before the algorithm is executed.
How it works:
We first check the MIDDLE element in the list. If . . .
- . . . it is the value we want, we can stop.
- . . . it is HIGHER than the value we want, we repeat the search process with the portion of the list BEFORE the middle element.
- . . . it is LOWER than the value we want, we repeat the search process with the portion of the list AFTER the middle element.
The binary Python search algorithm can be written either recursively or iteratively. Recursion is generally slower in Python because it requires the allocation of new stack frames.
Iterative Implementation of Binary Search
def Binary_Search(arr, target):
    '''Binary Search Algorithm implemented Iteratively'''
    # First and last index values
    left = 0
    right = len(arr) - 1
    while left <= right:
        mid = (left + right) // 2
        # Match found
        if arr[mid] == target:
            return True
        # Set new endpoints up or down depending on comparison
        else:
            if target > arr[mid]:
                # search the upper half
                left = mid + 1
            else:
                # search the lower half
                right = mid - 1
    return False
Recursive Implementation of Binary Search
def rec_bsearch(arr, target):
    '''Binary Search Algorithm implemented Recursively'''
    # Base Case!
    if len(arr) == 0:
        return False
    # Recursive Case
    else:
        mid = len(arr) // 2
        # If match found
        if arr[mid] == target:
            return True
        else:
            # Call again on the first half
            if target < arr[mid]:
                return rec_bsearch(arr[:mid], target)
            # Or call on the second half
            else:
                return rec_bsearch(arr[mid+1:], target)
Run-Time Analysis
- Sequential (Linear) Search
The time complexity of linear search is O(n). This means that the time taken to execute increases with the number of items in our input list.
Linear search is not often used in practice. That’s because the same efficiency can be achieved by using inbuilt methods or existing operators. And it is not as fast or efficient as other search algorithms.
When we need to find the first occurrence of an item in an unsorted collection, linear search is a good approach. Because unlike most other search algorithms, it does not require that a collection be sorted before searching begins.
- Binary Search Algorithm
We can only pick one possibility per iteration. The input size gets divided by two, in each iteration. This makes the time complexity of binary search O(log n), which is more efficient than the linear search.
One drawback of binary search is that if there are multiple occurrences of an element in the array, it does not return the index of the first element, but rather the index of the element closest to the middle.
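If the first occurrence matters and the list is sorted, Python's standard-library bisect module performs exactly this leftmost-match binary search, avoiding the drawback above. A small sketch:

```python
import bisect

def binary_search_leftmost(arr, target):
    """Return the index of the first occurrence of target, or -1."""
    i = bisect.bisect_left(arr, target)  # leftmost insertion point
    if i < len(arr) and arr[i] == target:
        return i
    return -1

print(binary_search_leftmost([1, 2, 2, 2, 3], 2))  # → 1
print(binary_search_leftmost([1, 3, 5], 2))        # → -1
```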
Conclusion
Different search problems in computer science call for different search algorithms. It depends on the nature of the search space of a problem domain, either with discrete or continuous values. Usually, the appropriate search algorithm depends on the data structure being searched, and may also require one to have prior knowledge about the data. Nevertheless, the applications of search algorithms are quite broad: from simple information retrieval to SEO optimization. Thank you for reading and keep exploring more use cases.
The full code can be found here.
Unsuccessful append to an empty NumPy array
Solution 1
numpy.append is pretty different from list.append in python. I know that's thrown off a few programmers new to numpy.
numpy.append is more like concatenate, it makes a new array and fills it with the values from the old array and the new value(s) to be appended. For example:
import numpy

old = numpy.array([1, 2, 3, 4])
new = numpy.append(old, 5)
print old  # [1, 2, 3, 4]
print new  # [1, 2, 3, 4, 5]
new = numpy.append(new, [6, 7])
print new  # [1, 2, 3, 4, 5, 6, 7]
I think you might be able to achieve your goal by doing something like:
result = numpy.zeros((10,))
result[0:2] = [1, 2]

# Or

result = numpy.zeros((10, 2))
result[0, :] = [1, 2]
Update:
If you need to create a numpy array using loop, and you don't know ahead of time what the final size of the array will be, you can do something like:
import random
import numpy as np

a = np.array([0., 1.])
b = np.array([2., 3.])
temp = []
while True:
    rnd = random.randint(0, 100)
    if rnd > 50:
        temp.append(a)
    else:
        temp.append(b)
    if rnd == 0:
        break
result = np.array(temp)
In my example result will be an (N, 2) array, where N is the number of times the loop ran, but obviously you can adjust it to your needs.
new update
The error you're seeing has nothing to do with types, it has to do with the shape of the numpy arrays you're trying to concatenate. If you do np.append(a, b) the shapes of a and b need to match. If you append a (2, n) and an (n,) you'll get a (3, n) array. Your code is trying to append a (1, 0) to a (2,). Those shapes don't match so you get an error.
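To make the shape rules concrete, here is a small sketch of my own (assuming NumPy is installed) showing how np.append behaves with and without an axis argument:

```python
import numpy as np

a = np.array([[1, 2, 3], [4, 5, 6]])   # shape (2, 3)
b = np.array([[7, 8, 9]])              # shape (1, 3)

# Without an axis, np.append flattens both inputs into a 1-D result.
flat = np.append(a, b)
print(flat.shape)      # (9,)

# With axis=0 the row shapes must match, giving a (3, 3) array.
stacked = np.append(a, b, axis=0)
print(stacked.shape)   # (3, 3)
```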
Solution 2
I might understand the question incorrectly, but if you want to declare an array of a certain shape but with nothing inside, the following might be helpful:
Initialise empty array:
>>> a = np.zeros((0,3))  # or np.empty((0,3)) or np.array([]).reshape(0,3)
>>> a
array([], shape=(0, 3), dtype=float64)
Now you can use this array to append rows of similar shape to it. Remember that a numpy array has a fixed size, so a new array is created for each iteration:
>>> for i in range(3):
...     a = np.vstack([a, [i,i,i]])
...
>>> a
array([[ 0.,  0.,  0.],
       [ 1.,  1.,  1.],
       [ 2.,  2.,  2.]])
np.vstack and np.hstack are the most common methods for combining numpy arrays, but coming from Matlab I prefer np.r_ and np.c_:
Concatenate 1d:
>>> a = np.zeros(0)
>>> for i in range(3):
...     a = np.r_[a, [i, i, i]]
...
>>> a
array([ 0.,  0.,  0.,  1.,  1.,  1.,  2.,  2.,  2.])
Concatenate rows:
>>> a = np.zeros((0,3))
>>> for i in range(3):
...     a = np.r_[a, [[i,i,i]]]
...
>>> a
array([[ 0.,  0.,  0.],
       [ 1.,  1.,  1.],
       [ 2.,  2.,  2.]])
Concatenate columns:
>>> a = np.zeros((3,0))
>>> for i in range(3):
...     a = np.c_[a, [[i],[i],[i]]]
...
>>> a
array([[ 0.,  1.,  2.],
       [ 0.,  1.,  2.],
       [ 0.,  1.,  2.]])
Solution 3
This error arises from the fact that you are trying to define an object of shape (0,) as an object of shape (2,). If you append what you want without forcing it to be equal to result[0] there is no issue:
b = np.append([result[0]], [1,2])
But when you define result[0] = b you are equating objects of different shapes, and you can not do this. What are you trying to do?
Solution 4
Here's the result of running your code in IPython. Note that result is a (2,0) array: 2 rows, 0 columns, 0 elements. The append produces a (2,) array. result[0] is a (0,) array. Your error message has to do with trying to assign that 2-item array into a size-0 slot. Since result is dtype=float64, only scalars can be assigned to its elements.
In [65]: result=np.asarray([np.asarray([]),np.asarray([])])

In [66]: result
Out[66]: array([], shape=(2, 0), dtype=float64)

In [67]: result[0]
Out[67]: array([], dtype=float64)

In [68]: np.append(result[0],[1,2])
Out[68]: array([ 1.,  2.])
np.array is not a Python list. All elements of an array are the same type (as specified by the dtype). Notice also that result is not an array of arrays.
result could also have been built as

ll = [[],[]]
result = np.array(ll)

while

ll[0] = [1,2]  # ll = [[1,2],[]]

the same is not true for result. np.zeros((2,0)) also produces your result.
Actually there's another quirk to result:

result[0] = 1

does not change the values of result. It accepts the assignment, but since it has 0 columns, there is no place to put the 1. This assignment would work if result was created as np.zeros((2,1)). But that still can't accept a list.

But if result has 2 columns, then you can assign a 2-element list to one of its rows:

result = np.zeros((2,2))
result[0]  # == [0,0]
result[0] = [1,2]

What exactly do you want result to look like after the append operation?
Solution 5
numpy.append always copies the array before appending the new values. Your code is equivalent to the following:
import numpy as np

result = np.zeros((2,0))
new_result = np.append([result[0]], [1,2])
result[0] = new_result  # ERROR: result[0] has shape (0,), new_result has shape (2,)
Perhaps you mean to do this?
import numpy as np

result = np.zeros((2,0))
result = np.append([result[0]], [1,2])
- Cupitor over 2 years
I am trying to fill an empty (not np.empty!) array with values using append, but I am getting an error:
My code is as follows:
import numpy as np

result = np.asarray([np.asarray([]), np.asarray([])])
result[0] = np.append([result[0]], [1, 2])
And I am getting:
ValueError: could not broadcast input array from shape (2) into shape (0)
FilterReader is an abstract class that defines a
null filter; it reads characters from a specified
Reader and returns them with no modification. In
other words, FilterReader defines no-op
implementations of all the Reader methods. A
subclass must override at least the two read( )
methods to perform whatever sort of filtering is necessary. Some
subclasses may override other methods as well. Example 3-6 shows RemoveHTMLReader,
which is a custom subclass of FilterReader that
reads HTML text from a stream and filters out all of the HTML tags
from the text it returns.
In the example, we implement the HTML
tag filtration in the three-argument version of read(
), and then implement the no-argument version in terms of
that more complicated version. The example includes an inner
Test class with a main( )
method that shows how you might use the
RemoveHTMLReader class.
Note that we could also define a RemoveHTMLWriter
class by performing the same filtration in a
FilterWriter subclass. Or, to filter a byte stream
instead of a character stream, we could subclass
FilterInputStream and
FilterOutputStream.
RemoveHTMLReader is only one example of a filter
stream. Other possibilities include streams that count the number of
characters or bytes processed, convert characters to uppercase,
extract URLs, perform search-and-replace operations, convert
Unix-style LF line terminators to Windows-style CRLF line
terminators, and so on.
package je3.io;
import java.io.*;
/**
* A simple FilterReader that strips HTML tags (or anything between
* pairs of angle brackets) out of a stream of characters.
**/
public class RemoveHTMLReader extends FilterReader {
/** A trivial constructor. Just initialize our superclass */
public RemoveHTMLReader(Reader in) { super(in); }
boolean intag = false; // Used to remember whether we are "inside" a tag
/**
* This is the implementation of the no-op read( ) method of FilterReader.
* It calls in.read( ) to get a buffer full of characters, then strips
* out the HTML tags. (in is a protected field of the superclass).
**/
public int read(char[ ] buf, int from, int len) throws IOException {
int numchars = 0; // how many characters have been read
// Loop, because we might read a bunch of characters, then strip them
// all out, leaving us with zero characters to return.
while (numchars == 0) {
numchars = in.read(buf, from, len); // Read characters
if (numchars == -1) return -1; // Check for EOF and handle it.
// Loop through the characters we read, stripping out HTML tags.
// Characters not in tags are copied over previous tags
int last = from; // Index of last non-HTML char
for(int i = from; i < from + numchars; i++) {
if (!intag) { // If not in an HTML tag
if (buf[i] == '<') intag = true; // check for tag start
else buf[last++] = buf[i]; // and copy the character
}
else if (buf[i] == '>') intag = false; // check for end of tag
}
numchars = last - from; // Figure out how many characters remain
} // And if it is more than zero characters
return numchars; // Then return that number.
}
/**
* This is another no-op read( ) method we have to implement. We
* implement it in terms of the method above. Our superclass implements
* the remaining read( ) methods in terms of these two.
**/
public int read( ) throws IOException {
char[ ] buf = new char[1];
int result = read(buf, 0, 1);
if (result == -1) return -1;
else return (int)buf[0];
}
/** This class defines a main( ) method to test the RemoveHTMLReader */
public static class Test {
/** The test program: read a text file, strip HTML, print to console */
public static void main(String[ ] args) {
try {
if (args.length != 1)
throw new IllegalArgumentException("Wrong number of args");
// Create a stream to read from the file and strip tags from it
BufferedReader in = new BufferedReader(
new RemoveHTMLReader(new FileReader(args[0])));
// Read line by line, printing lines to the console
String line;
while((line = in.readLine( )) != null)
System.out.println(line);
in.close( ); // Close the stream.
}
catch(Exception e) {
System.err.println(e);
System.err.println("Usage: java RemoveHTMLReader$Test" +
" <filename>");
}
}
}
}
I am trying to 'broadcast' a date column from df1 to df2.
In df1 I have the names of all the users and their basic information.
In df2 I have a list of purchases made by the users.
Assuming I have a much bigger dataset (the above created for sample) how can I add just(!) the df1['DoB'] column to df2?
I have tried both concat() and merge() but neither seems to work (the failing lines are included in the full code below).
The only way it seems to work is only if I merge both df1 and df2 together and then just delete the columns I don't need. But if I have tens of unwanted columns, it is going to be very problematic.
The full code (including the lines that throw an error):
import pandas as pd
df1 = pd.DataFrame(columns=['Name','Age','DoB','HomeTown'])
df1['Name'] = ['John', 'Jack', 'Wendy','Paul']
df1['Age'] = [25,23,30,31]
df1['DoB'] = pd.to_datetime(['04-01-2012', '03-02-1991', '04-10-1986', '06-03-1985'], dayfirst=True)
df1['HomeTown'] = ['London', 'Brighton', 'Manchester', 'Jersey']
df2 = pd.DataFrame(columns=['Name','Purchase'])
df2['Name'] = ['John','Wendy','John','Jack','Wendy','Jack','John','John']
df2['Purchase'] = ['fridge','coffee','washingmachine','tickets','iPhone','stove','notebook','laptop']
df2 = df2.concat(df1) # error
df2 = df2.merge(df1['DoB'], on='Name', how='left') #error
df2 = df2.merge(df1, on='Name', how='left')
del df2['Age'], df2['HomeTown']
df2 #that's how i want it to look like
I think you need merge with the subset df1[['Name','DoB']] - the Name column is needed for matching:

print (df1[['Name','DoB']])
    Name        DoB
0   John 2012-01-04
1   Jack 1991-02-03
2  Wendy 1986-10-04
3   Paul 1985-03-06

df2 = df2.merge(df1[['Name','DoB']], on='Name', how='left')
Another solution with map by Series s:

s = df1.set_index('Name')['DoB']
print (s)
Name
John    2012-01-04
Jack    1991-02-03
Wendy   1986-10-04
Paul    1985-03-06
Name: DoB, dtype: datetime64[ns]

df2['DoB'] = df2.Name.map(s)
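Putting the accepted merge approach together as a runnable sketch (using a trimmed version of the question's sample data; assumes pandas is installed):

```python
import pandas as pd

df1 = pd.DataFrame({
    "Name": ["John", "Jack", "Wendy", "Paul"],
    "DoB": pd.to_datetime(
        ["04-01-2012", "03-02-1991", "04-10-1986", "06-03-1985"],
        dayfirst=True),
})
df2 = pd.DataFrame({
    "Name": ["John", "Wendy", "Jack"],
    "Purchase": ["fridge", "coffee", "tickets"],
})

# Merge only the columns we need; "Name" must be included for matching.
df2 = df2.merge(df1[["Name", "DoB"]], on="Name", how="left")
print(df2)
```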
Hoss:
> The fundemental issue is really wether Lucy, as a project, is "alive".
There is one extremely active participant, and as this thread attests, there
is both a small community and demand for the product. Futhermore, the work
that is being done to move Lucy forward is benefitting Java Lucene.
In my view, the amount of raw activity dedicated to advancing Lucy clearly
exceeds the threshold that would justify a mothballing. It's not abandoned.
Things are moving more slowly than they would if Balmain were actively
contributing, but *plenty* of important work is being accomplished.
Nevertheless, I understand why the project has an appearance of dormancy --
please see my reply to Jukka for an explanation.
It is my belief that the most useful thing we can do for Lucy right now is to
release an API-stable library that implements some extensibility enhancements
that further Lucy's high-level mission of empowering the users of its
bindings, and study how those enhancements perform in the real world. To
reach this goal as quickly as possible, I have performed a torso transplant on
KinoSearch -- arguably to KinoSearch's *detriment* as a CPAN module, since the
massive code churn has introduced bugs (e.g. the Highlighter broke) and
siphoned dev time away from other priorities. It's true that "KS isn't Lucy",
but my top priority is Lucy, not KS.
Additionally, rather than foster discussion within Lucy forums, I went abroad,
and without my voice as a catalyst, the forums stagnated. I calculated that
contributing to other lucene.apache.org forums would suffice. This was
an error.
To remedy the situation, the main thing we need to figure out is how to move
the extensibility prototyping work under the ASF umbrella. If those commits
had been flowing into the Lucy repository, we wouldn't be having this
conversation.
Incubating KinoSearch is one possible approach, but the point is to advance
Lucy, not bless KinoSearch. Is that really the right way to go?
> If you believe that the best long term strategy towards making Lucy into a
> solid project is to first focus on KinoSearch, then I have faith in your
> judgement -- but from my perspective, that seems like a strong argument in
> favor of archiving Lucy at this time and reviving it at a future date when
> you feel the time is right to bring he apprpriate code from KS into the
> apache fold via software grant.
The point of focusing on something named "KinoSearch" in the short term is to
test an API-stable release which implements Lucy extensibility enhancements
without spoiling the "Lucy" namespace. The API stability is important because
it allows people to migrate on their own schedule from "KinoSearch" to "Lucy"
and gives them the confidence that their elaborate extensions won't suddenly
get blasted by a "Lucy 2.0" core update that breaks backwards compatibility.
If sane versioning was supported by all of Lucy's target platforms, this
two-step maneuver wouldn't be necessary: we could release a short-lived Lucy
1.0, then follow it up with a solid Lucy 2.0. Unfortunately, that's not the
case, so I think it makes sense to hijack KS and use it as a vehicle.
Marvin Humphrey
GitHub, the most popular host for open source software, offers a feature that lets us view repositories by language. In this post, I want to dissect some of the most popular Swift repositories as of June 5th, 2015. So, let's kick it off with the most starred Swift repository, Alamofire.
Alamofire
Alamofire is an “Elegant HTTP Networking in Swift” library, written by Matt Thompson, a well known Objective-C developer responsible for the AFNetworking library. The library takes the built-in iOS networking features and abstracts them away into a simpler, more Swift-y API.
Take for example the case of a simple GET request that returns JSON:
Alamofire.request(.GET, "")
         .responseString { (_, _, string, _) in
             println(string)
         }
It’s the most popular library for Swift to date, so you should probably be using it, right?
Well, maybe… The library makes some common tasks simpler and less verbose, but if you don't know the basics of networking in Swift (or Objective-C), it's probably best to get a good understanding of the existing APIs first. After understanding what's going on under the hood, you can make a more informed decision about whether or not you need the extra layer of abstraction. Alamofire is a big framework, and networking is a huge topic, so I don't want to get too far into the details of this library. Suffice it to say, if you are working with complex networking requests involving lots of back and forth with a web server, and/or a complicated authentication process, using Alamofire might reduce some of the repetitive coding tasks. If you are new to iOS development, I would recommend just sticking to the APIs provided by Apple for now.
SwiftyJSON
SwiftyJSON is "The better way to deal with JSON data in Swift", according to its GitHub entry. This framework was one of the first I saw when Swift was released; that, combined with the fact that JSON parsing is such a common problem, is how it became a top repository, helping to deal with the messiness of Apple's built-in JSON parser. In particular, the static typing of Swift and optional syntax led to a lot of verbose JSON parsing code, guessing and checking each step of the way for each key and checking every cast. The truth is, though, using this library is VERY similar to just using optional chaining and Swift's normal casting syntax. There is not much benefit here, and from what I've seen in production, SwiftyJSON has some performance problems. As a result I'm not sure I would recommend using it right now, except in prototype apps, or during the learning phase.
Take a look at the example they give as the standard approach to parsing JSON, which they describe as “not good”:
let JSONObject: AnyObject? = NSJSONSerialization.JSONObjectWithData(data, options: nil, error: nil)
if let statusesArray = JSONObject as? [AnyObject],
   let status = statusesArray[0] as? [String: AnyObject],
   let user = status["user"] as? [String: AnyObject],
   let username = user["name"] as? String {
    // Finally we got the username
}
They then present their alternative version of the syntax:
let json = JSON(data: dataFromNetworking)
if let userName = json[0]["user"]["name"].string {
    // Now you got your value
}
There are a few issues here, the first of which is that the simplifications they are showing partially just take advantage of language features that would actually work with the regular parser. Second, it seems like their example actually would not work.
Based on the example code shown above, the example JSON they are parsing looks something like this:
{ "statuses": [ { "user": { "name": "Bob" } } ] }
One issue with this sample right off the bat is that they cast the initial value of the JSON to an array, which suggests the root element is an array. (Strictly speaking, a top-level array is valid JSON, but an API response like this one has a key/value object at the root, or equivalently in Swift, a Dictionary of type [String: AnyObject].) Additionally, it's good practice to actually check whether the JSON parsing succeeded or not.
Once we start going through and fixing all the issues with the sample code, assuming we want to explicitly cast everything as they have shown, we end up with something like this:
if let JSONObject = NSJSONSerialization.JSONObjectWithData(data, options: nil, error: nil) as? [String: AnyObject],
   statusesArray = JSONObject["statuses"] as? [[String: AnyObject]] {
    let status = statusesArray[0] as [String: AnyObject]
    if let user = status["user"] as? [String: AnyObject],
       let username = user["name"] as? String {
        println("username: \(username)")
    }
} else {
    println("Failed to parse JSON, handle this problem")
}
Now, this is in fact pretty verbose, and could be reduced quite a bit, but let's stop and think about what we're doing here. In this example, we are trying to download a list of statuses which are owned by users, and they have names. In an actual Swift application, I would expect there to be a model representing these objects. Maybe something like this:
struct User {
    let name: String
}

struct Status {
    let user: User
}
Assume we are using SwiftyJSON for a moment, how would we add these records to the model of our app? Maybe with a bit of code like this…
struct User {
    let name: String
}

struct Status {
    let user: User
}

let parsedJson = JSON(data: data)
for (key, val) in parsedJson["statuses"] {
    if let username = val["user"]["name"].string {
        let owner = User(name: username)
        let newStatus = Status(user: owner)
    }
}
This works relatively well, assuming we are just creating objects from a JSON feed rather than synchronizing them. But what if there is a server error and the JSON comes back invalid? For example if there is a server error which changes the JSON to present an “error” key, and it no longer includes “statuses”, this loop simply would not be executed. Failing silently is better than crashing, but it would be nice to check for issues and try again, or adjust something in the app.
Since we need to check for the presence of statuses, and this for loop doesn’t actually do that, we need to check the count of statuses first, which means we need to cast it to an array, and *then* check the count…
if parsedJson["statuses"].count < 1 {
    println("Oops! An error occurred")
}
And that's that! Right?
Well, no...
If the key isn't defined, this count property evaluates to 0, which could just mean there are no new statuses to see. The count really should not be zero, it should be nil... but SwiftyJSON is telling us it's 0. This seems like exactly the kind of thing I really *don't* want a JSON parser to be doing. They really seem to dislike the optional syntax in Swift, and instead reinvented it with these typed properties. Why not just stick with convention?
Our final code might look something like this:
struct User {
    let name: String
}

struct Status {
    let user: User
}

let parsedJson = JSON(data: data)
for (key, val) in parsedJson["statuses"] {
    if let username = val["user"]["name"].string {
        let owner = User(name: username)
        let newStatus = Status(user: owner)
    }
}

if parsedJson["statuses"].count < 1 {
    println("Oops! An error occurred")
}

if let err = parsedJson["error"].string {
    println(err)
}
Our code is starting to grow, and this doesn't cover a ton of things we would need in a real-world application, such as updating the model, including more properties, checking for equality, enforcing uniqueness, cascading relationship changes, and a host of other things. Core Data can handle much of this, and it's common practice to implement models as Core Data models, but that still creates a situation where we have to custom implement all kinds of methods for converting the entire model object (such as Status) *back* into JSON to update the server.
In the Objective-C world there is Mantle, a great library for handling such things. Before that there was RestKit. RestKit however made some ...interesting... design decisions a few years ago in a big update, and hasn't ever really recovered since then. Unfortunately I haven't found a good solution for Swift just yet, and trying to work with Mantle proves to be problematic in its current form, unless you implement all your models in Obj-C, something I'm not sure we all want to do at this stage.
These aren't all of the problems with SwiftyJSON, but ironically it breaks a lot of Swift conventions in dealing with optional values. SwiftyJSON is really a terrible name; it is very much not Swifty at all. However, the syntax is a little easier on the eyes. Personally, I don't use the library in my projects.
Spring
Spring is "A library to simplify iOS animations in Swift." How does it do this? Let's take a look.
To try out some sample code, I threw together this quick demo UIViewController that adds a blue square to the screen and animates it in. Give it a try yourself; it's pretty nifty:
import UIKit
import Spring

class ViewController: UIViewController {
    var square = SpringView(frame: CGRectMake(0, 0, 200, 200))

    override func viewDidLoad() {
        super.viewDidLoad()
        square.center = self.view.center
        square.backgroundColor = UIColor.blueColor()
        square.animation = "squeezeDown"
        square.animate()
        self.view.addSubview(square)
    }
}
The SpringView seems to basically just be a UIView subclass with the animations added in. I don't know if I really like the idea of having to use their UIView, but I suppose most of the time I just use the basic UIView, and even if I didn't, I could just subclass SpringView instead.
Spring sports quite a few animation types, set as a string. The square.animation = "squeezeDown" here is what's determining the animation to play. The library goes beyond this, and in fact allows simple animations to be created in storyboards. So in theory you could put Spring in your Xcode project, and then pass it off to a designer to set up some nifty animations using this library. Very interesting idea, I would like to hear from someone who has tried to do exactly this.
Quick
Quick is "The Swift (and Objective-C) testing framework."
Really? It's THE testing framework? Let's take a look at how Quick works as opposed to XCTest, or expecta.
In XCTest, you might define an assertion that you're testing against like this:
class ContrivedExampleTests: XCTestCase {
    func testName() {
        let name = "jameson"
        XCTAssertEqual(name, "jameson", "Name should be \"jameson\"")
    }
}
This is okay; it makes the test basically confirm that name is equal to "jameson". It's simple enough, but there is a common trend/desire among developers to instead express test cases in terms of desired behavior, rather than specifically implementing what the desired behavior causes. Those may sound like the same thing, but take a look at how Quick (due to its use of the library Nimble) expresses the same thing:
import Quick
import Nimble

class AQuickTest: QuickSpec {
    override func spec() {
        describe("the user") {
            it("has the name 'Jameson'") {
                let name = "Jameson"
                expect(name).to(equal("Jameson"))
            }
        }
    }
}
More than anything else, this framework encourages behavioral tests, which is why this example includes more information about our expectations.
Quick also eases some of the pain of asynchronous testing. In XCTest I personally tend to use XCTestAsync, although Xcode 6 does introduce a way to do this using XCTestExpectation. The basic way that works is you can create an expectation object, and then fulfill it when the async operation is complete. It's not a bad approach.
import Quick
import Nimble

@objc class AsyncExample {
    var longTaskIsDone = false
    var timer: NSTimer?

    func startLongTask() {
        timer = NSTimer.scheduledTimerWithTimeInterval(2, target: self, selector: "taskIsDone", userInfo: nil, repeats: false)
    }

    func taskIsDone() {
        println("task done")
        longTaskIsDone = true
    }
}

class AQuickTest: QuickSpec {
    override func spec() {
        describe("the user") {
            it("has the name 'Jameson'") {
                let name = "Jameson"
                expect(name).to(equal("Jameson"))
            }
        }

        describe("Async Example") {
            describe("its long task") {
                it("should finish in 5 seconds") {
                    let asyncExample = AsyncExample()
                    asyncExample.startLongTask()
                    expect(asyncExample.longTaskIsDone).toEventually(beTruthy(), timeout: 3, pollInterval: 0.4)
                }
            }
        }
    }
}
In this example we just create an NSTimer that fires in 2 seconds, as a simulated async operation. Then in the Async Example test, we can use the .toEventually() method to wait around and keep checking the asyncExample.longTaskIsDone property. This is slightly cleaner than using expectations, because with this method we don't need to change our code to make sure the test is notified of this variable changing. Having an ongoing timer keep checking is great (just be careful not to have it calling methods with side effects!).
Overall Quick seems pretty interesting; the approach is sure to appeal to those in professional environments, or working with an Agile team where specs change fast.
That's it for this time, if you would like to see any of these libraries covered in greater detail be sure to let me know. You can find me on Twitter.
http://jamesonquave.com/blog/open-source-swift-a-look-at-the-top-swift-repositories/
I use bass.dll to play music files from
I try to use the function called BASS_ChannelSetPosition. I use ctypes to access the library. Below is the relevant part of the documentation (it comes with bass as a help file).
I need somebody to tell me whether the mode (BASS_POS_BYTE, BASS_MUSIC_POSRESET, etc.) is a function or a constant, and how I set it as a flag in the function. Below is the actual erroneous code:
def onSetPosition(self, handle, position, mode):
    # BOOL BASS_ChannelSetPosition(DWORD handle, QWORD pos, DWORD mode); - older version
    bass.BASS_ChannelSetPosition.argtypes = [ct.c_uint, ct.c_int64, ct.c_char_p]
    bass.BASS_ChannelSetPosition.restype = BOOL
    successful2 = bass.BASS_ChannelSetPosition(handle, position, mode)
    error = bass.BASS_ErrorGetCode()   # don't worry, this is for debugging purposes only
    successful = (successful2, error)  # don't worry, this is for debugging purposes only
    return successful                  # don't worry, this is for debugging purposes only
Sets the playback position of a sample, MOD music, or stream.
BOOL BASS_ChannelSetPosition(
DWORD handle,
QWORD pos,
DWORD mode
);
Parameters
handle The channel handle... a HCHANNEL, HSTREAM or HMUSIC.
pos The position, in units determined by the mode.
mode How to set the position. One of the following, with optional flags.
BASS_POS_BYTE The position is in bytes, which will be rounded down to the nearest sample boundary.
BASS_POS_MUSIC_ORDER The position is in orders and rows... use MAKELONG(order,row). (HMUSIC only)
BASS_MUSIC_POSRESET Flag: Stop all notes. This flag is applied automatically if it has been set on the channel, eg. via BASS_ChannelFlags. (HMUSIC)
BASS_MUSIC_POSRESETEX Flag: Stop all notes and reset bpm/etc. This flag is applied automatically if it has been set on the channel, eg. via BASS_ChannelFlags. (HMUSIC)
other modes & flags may be supported by add-ons, see the documentation.
Return value
If successful, then TRUE is returned, else FALSE is returned. Use BASS_ErrorGetCode to get the error code.
Error codes
BASS_ERROR_HANDLE handle is not a valid channel.
BASS_ERROR_NOTFILE The stream is not a file stream.
BASS_ERROR_POSITION The requested position is invalid, eg. it is beyond the end or the download has not yet reached it.
BASS_ERROR_NOTAVAIL The requested mode is not available. Invalid flags are ignored and do not result in this error.
BASS_ERROR_UNKNOWN Some other mystery problem!
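For illustration, the mode names in the documentation above are integer constants (#defines in bass.h), not functions, and since mode is a DWORD they are combined with bitwise OR and passed as plain ints. A minimal sketch of the idea follows; the numeric values here are assumptions for illustration only and must be checked against your own copy of bass.h, and the actual DLL call is left commented out since it needs bass.dll:

```python
import ctypes as ct

# The "mode" argument is NOT a function call: these names are plain integer
# constants defined as C macros in bass.h. ctypes cannot read C macros, so
# they have to be redeclared in Python. ASSUMED values -- verify in bass.h!
BASS_POS_BYTE = 0             # assumed, check bass.h
BASS_MUSIC_POSRESET = 0x8000  # assumed, check bass.h

def make_mode(base, *flags):
    """Combine a position mode with optional flags using bitwise OR."""
    mode = base
    for f in flags:
        mode |= f
    return mode

# mode is a DWORD, so it should be declared as an unsigned int,
# not c_char_p as in the question's code:
#
# bass.BASS_ChannelSetPosition.argtypes = [ct.c_uint32, ct.c_uint64, ct.c_uint32]
# bass.BASS_ChannelSetPosition.restype = ct.c_bool
# ok = bass.BASS_ChannelSetPosition(handle, position,
#                                   make_mode(BASS_POS_BYTE, BASS_MUSIC_POSRESET))

print(make_mode(BASS_POS_BYTE, BASS_MUSIC_POSRESET))  # 32768
```

The point is only the mechanics: constants are ints, flags are ORed together, and the combined int is what gets passed as the DWORD mode.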
https://www.daniweb.com/programming/software-development/threads/170228/is-this-a-function-or-constant
Handling date-time in code becomes a little tricky when a project is used internationally, because then an additional term comes in: the timezone. A timezone is a property of a location that needs to be considered when comparing its time with the time in another location. For example, there are two villages, A and B. One day Ram from village A calls his friend Shyam in village B at 8:00 am to wish him "good morning". But Shyam receives Ram's call at 6:00 pm on the same day and replies "good evening". That means village A's timezone is 10 hours behind village B's. So we need some reference timezone against which all other timezones can be declared. This is where UTC (Coordinated Universal Time) comes into play. UTC is the reference timezone by which all other timezones are declared. For example, the Indian timezone is 5 hours and 30 minutes ahead of UTC, which is denoted as UTC+05:30. In date-time libraries these timezones are identified by names such as 'Asia/Kolkata', which is Indian Standard Time. I will be talking about working with date-time in Python in this blog. In FOSSASIA's Open Event project, since it is an event management system, handling date-time with the timezone is one of the important tasks.
Here is the relevant code:
>>> import datetime
>>> import pytz
>>> now = datetime.datetime.now()
datetime.datetime.now() returns a naive datetime according to the time setting of the machine on which the code is running. A naive datetime doesn't contain any info about the timezone; it just contains number values for year, month, hours, etc. So just by looking at a naive datetime we cannot actually pin down the time it refers to. An aware datetime, by contrast, contains timezone info.
>>> now
datetime.datetime(2017, 5, 12, 21, 46, 16, 909983)
>>> now.isoformat()
'2017-05-12T21:46:16.909983'
>>> aware_now = pytz.timezone('Asia/Kolkata').localize(now)
>>> aware_now
datetime.datetime(2017, 5, 12, 21, 46, 16, 909983, tzinfo=<DstTzInfo 'Asia/Kolkata' IST+5:30:00 STD>)
pytz provides a timezone object, which takes a string argument for the timezone and has a localize method that adds timezone info to a datetime object. Hence the aware datetime now has timezone info too. Now if we print the time:
>>> aware_now.isoformat()
'2017-05-12T21:46:16.909983+05:30'
We get an extra '+05:30' string at the end, which gives the timezone info. +05:30 means the timezone is 5 hours and 30 minutes ahead of the UTC timezone. Comparison between datetimes can be made naive-with-naive and aware-with-aware. If we try to compare a naive with an aware one:
>>> now < aware_now
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
TypeError: can't compare offset-naive and offset-aware datetimes
>>> now2 = datetime.datetime.now()
>>> now2
datetime.datetime(2017, 5, 15, 9, 44, 25, 990666)
>>> now < now2
True
>>> aware_now.tzinfo
<DstTzInfo 'Asia/Kolkata' IST+5:30:00 STD>
tzinfo carries the timezone info of the datetime object. We can make an aware datetime naive again by replacing its tzinfo with None.
>>> unaware_now = aware_now.replace(tzinfo=None)
>>> unaware_now.isoformat()
'2017-05-12T21:46:16.909983'
Formatting a datetime is mostly done with these two methods: one takes a string format in which the result is required, and the other returns the datetime in ISO format.
>>> now.strftime('%Y-%m-%dT%H:%M:%S%z')
'2017-05-12T21:46:16'
>>> aware_now.strftime('%Y-%m-%dT%H:%M:%S%z')
'2017-05-12T21:46:16+0530'
>>> now.time().isoformat()
'21:46:16.909983'
>>> now.date().isoformat()
'2017-05-12'
>>> now_in_brazil_east = datetime.datetime.now(pytz.timezone('Brazil/East'))
>>> now_in_brazil_east.isoformat()
'2017-05-15T06:49:51.311012-03:00'
We can pass a timezone argument to the now method to get the current time in the passed timezone. But care must be taken, as this will use the time setting of the machine on which the code is running to calculate the time in the supplied timezone.
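The same offset arithmetic can be sketched with only the standard library, which avoids a pytz dependency for the demonstration. The fixed-offset zones below are illustrative stand-ins for the real tz database entries and know nothing about DST:

```python
from datetime import datetime, timedelta, timezone

# Fixed-offset zones built with the standard library (no pytz needed).
IST = timezone(timedelta(hours=5, minutes=30), "IST")  # UTC+05:30
BRT = timezone(timedelta(hours=-3), "BRT")             # UTC-03:00

# An aware datetime: 21:46:16 IST on 2017-05-12.
aware = datetime(2017, 5, 12, 21, 46, 16, tzinfo=IST)
print(aware.isoformat())       # 2017-05-12T21:46:16+05:30

# astimezone() re-expresses the same instant in another zone.
in_brazil = aware.astimezone(BRT)
print(in_brazil.isoformat())   # 2017-05-12T13:16:16-03:00

# Both represent the same instant, so they compare equal.
print(aware == in_brazil)      # True
```

This also shows why aware-with-aware comparison is safe: the offsets are part of the values being compared, so two representations of the same instant are equal.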
Application
In FOSSASIA's Open Event Project, the date-time is taken from the user along with the timezone, as in the example shown below.
This is part of the event creation page, where the user has to provide a date-time along with the timezone choice. At the back-end, the date-time and timezone are stored separately in the database. The event model looks like:
class Event(db.Model):
    """Event object table"""
    ...
    start_time = db.Column(db.DateTime, nullable=False)
    end_time = db.Column(db.DateTime, nullable=False)
    timezone = db.Column(db.String, nullable=False, default="UTC")
    ...
The stored date-time info cannot be compared with the real time directly, since the timezones of both times need to be considered along with the date-time values during comparison. For instance, there is one case in the project code where tickets are filtered based on the start time:
def get_sales_open_tickets(event_id, event_timezone='UTC'):
    tickets = Ticket.query.filter(Ticket.event_id == event_id).filter(
        Ticket.sales_start <= datetime.datetime.now(pytz.timezone(event_timezone)).replace(tzinfo=None)).filter(
        Ticket.sales_end >= datetime.datetime.now(pytz.timezone(event_timezone)).replace(tzinfo=None))
    ...
In this case, the current time is first found using the timezone that is stored as a separate field in the database. Since comparison cannot be done between aware and naive date-times, once the current date-time is found in the user's timezone, it is made naive again using the replace method, and hence can be compared with the naive date-time already stored.
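The replace(tzinfo=None) pattern used in that ticket filter can be sketched in isolation. Here the sales window values and the fixed offset are made-up stand-ins for the database fields and the pytz zone:

```python
from datetime import datetime, timedelta, timezone

# Stand-in for pytz.timezone('Asia/Kolkata') from the blog's example.
event_tz = timezone(timedelta(hours=5, minutes=30))

# Naive values, as they would come out of the database columns.
sales_start = datetime(2017, 5, 1, 0, 0)
sales_end = datetime(2017, 5, 31, 23, 59)

def sales_open(now_utc):
    # Express "now" in the event's zone, then strip tzinfo so the result
    # is naive and comparable with the naive values from the database.
    local_now = now_utc.astimezone(event_tz).replace(tzinfo=None)
    return sales_start <= local_now <= sales_end

# 2017-05-15 12:00 UTC is 17:30 local time: inside the window.
print(sales_open(datetime(2017, 5, 15, 12, 0, tzinfo=timezone.utc)))  # True
# 2017-06-01 12:00 UTC is past the end of the window.
print(sales_open(datetime(2017, 6, 1, 12, 0, tzinfo=timezone.utc)))   # False
```

The key step is that the stripping happens only after converting into the event's own zone, so both sides of the comparison describe wall-clock time in the same place.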
https://blog.fossasia.org/handling-date-time-in-the-open-event-project/
import
By sea or air
Whether it's full container loads, the
smallest of shipments or over sized
project cargo your dedicated Air Sea
Global Logistics representative will
personally research, quote in detail
and manage your job... every step
of the way. read more
export
Anywhere in the world
Cargo from your door, customs
cleared and delivered anywhere
(yes anywhere), with ease and peace
of mind. Your Air Sea Global Logistics
representative is the only contact you
need to make this happen... every
step of the way. read more
tracking
Transparency of cargo
Wherever it is in the world, your
dedicated Air Sea Global Logistics
representative will keep you
informed on the progress and
delivery of your cargo...
every step of the way.
read more
customs & quarantine
Imports
Air Sea Global Logistics can
expedite delivery of your goods
by clearing them prior to arrival
whenever possible. Our experienced
Customs Brokers will save you
unexpected delays...every
step of the way. read more
transport
Door to door
Air Sea Global Logistics has
all your transportation needs
covered, from wharf cartage,
interstate or local transport from
the airport to your door. We're here
to depend on at the best rates...
every step of the way. read more
warehousing
Storage solutions
Whether it's pick and pack storage
facilities, long-term, short-term or
in-transit, Air Sea Global Logistics
will provide the answer to suit your
specific needs... every step of the
way. read more
http://airseaglobal.com.au/
Subject: [Boost-users] Boost BCP namespace renaming : dispelling the FUD ?
From: BernardH (gmane.comp.lib.boost.user_at_[hidden])
Date: 2011-07-19 08:43:05
Hi,
When implementing a library, one might want to neither require nor prevent the
library users to also use any version of Boost.
I believed that BCP used with namespace renaming would allow the use of
(namespace-renamed) Boost in the implementation of such a library without *any*
concern about conflicts at link time and/or run-time errors if the library user
chooses to also use another Boost version.
However, I have been reading that Boost brings some symbols that cannot be
renamed because they are out of any namespace, with "C" linkage[*].
This brings the following questions :
1) can a list of all such names be found?
2) To be *completely safe*, must I :
2.1) only avoid using those symbols in my library
2.2) avoid the boost libraries that provide such symbols
2.3) avoid boost libraries that depend on such symbols (how am I to find out
which ?)
2.4) avoid boost libraries that depend on the boost libraries providing such
symbols (easier to do than 2.3 with bcp, I believe).
2.5) Despair ! (i.e. forgo boost usage under such constraints).
Boost is a great set of libraries, I think that it is a pity that a legitimate
concern (not imposing a specific Boost version on users) would prevent its use
for implementing libraries : please, help me dispel the Fear, Uncertainty and
Doubt over this use case !
Best Regards,
Bern
https://lists.boost.org/boost-users/2011/07/69470.php
I need to write a Java applet as a small airline reservation system. The system has only one airplane with 10 seats: 1 to 5 for first class and 6 to 10 for economy. The applet has two buttons and a text field for the boarding pass. If the user clicks the First Class button, a currently available first class seat is printed in the boarding pass. If the user clicks the Economy button, a currently available economy seat is printed in the boarding pass. If the user's selected section has no available seat, the program should ask if the user is OK with the other section's seats. If the user clicks OK, print the boarding pass of the assigned seat. If the user clicks Cancel, print "Next flight leaves in 3 hours".
When I run my applet and click First Class or Economy, all I get in the outputField is "Next flight leaves in 3 hours." It doesn't go to seats 1 through 5 for first class or 6 through 10 for economy.
Also, as for the part where my program should ask if the user is OK with the other section's seats (if the user clicked OK, print the boarding pass of the assigned seat; if the user clicked Cancel, print "Next flight leaves in 3 hours"), I know it's something like:
Code Java:
choice = Integer.parseInt(JOptionPane.showInputDialog("Are you OK with First Class? (click OK or Cancel)"));
if (choice == (not sure what i put here??))
else
    JOptionPane.showMessageDialog("Next flight leaves in 3 hours.");
here is my code
Code Java:
import javax.swing.*;
import java.awt.*;
import java.awt.event.*;

public class Assign7 extends JApplet implements ActionListener {
    JTextField outputField;
    JButton firstClassButton, economyButton;
    int section, seats[], firstclass;
    int economy, people, choice;

    public void init() {
        Container container = getContentPane();
        container.setLayout(new FlowLayout());

        firstClassButton = new JButton("First Class");
        firstClassButton.addActionListener(this);
        container.add(firstClassButton);

        economyButton = new JButton("Economy");
        economyButton.addActionListener(this);
        container.add(economyButton);

        outputField = new JTextField(17);
        outputField.setEditable(false);
        container.add(outputField);

        seats = new int[11];
        firstclass = 1;
        economy = 6;
    }

    public void actionPerformed(ActionEvent actionEvent) {
        if (actionEvent.getSource() == firstClassButton && people <= 10) {
            if (firstclass <= 5 && seats[firstclass] == 0) {
                outputField.setText("Boarding Pass: Seat " + firstclass + " FirstClass");
                seats[firstclass++] = 1;
                people++;
            } else if (firstclass > 0 && economy <= 6) {
                outputField.setText("FirstClass is full. Economy?");
            } else
                outputField.setText("Next flight leaves in 3 hours.");
        } else if (actionEvent.getSource() == economyButton && people <= 10) {
            if (economy >= 5 && seats[economy] == 0) {
                outputField.setText("Boarding Pass: Seat " + economy + " Economy");
                seats[economy++] = 1;
                people++;
            } else if (economy > 0 && firstclass <= 6) {
                outputField.setText("Economy is full. FirstClass?");
            } else
                outputField.setText("Next flight leaves in 3 hours.");
        }
    }
}
http://www.javaprogrammingforums.com/%20whats-wrong-my-code/8389-problem-choice-airline-reservation-printingthethread.html
Agenda
See also: IRC log
whatever
yeah - but zakim wasn't on the channel yet
<oedipus> ah - i should stick to my own systems' problems
<markbirbeck> Hello all. I am sending my apologies, because if I don't get my changes to the RDFa draft finished in the next couple of hours, I won't be able to do anything with them until late on tomorrow.
<markbirbeck> I'll stay on IM though, in case there is anything I can help with.
<scribe> scribe: ShaneM
<Roland> Agenda:
don't have a quorum to resolve to publish it. Roland suggests that we poll via e-mail.
no one on the call has a problem taking it to last call.
RESOLUTION: Those of us on the call are happy.
<scribe> ACTION: Roland will send an email poll out to get us to last call on CURIEs. [recorded in]
Yam is not here, but he has sent a report to Steven. We need to get Steven to deal with it.
the role attribute is on everything, so it is already on object.
One reason alessio wants this is to help improve content negotiation in objects.
Rich: are they asking that we extend our
collection of XHTML values?
... It's important that we remember our existing values are about document sections.
... The values listed here are sort of a random selection. If we are going to look at this we should do a comprehensive study.
Gregory: Alessio is really asking if this is something he should investigate further
Rich: is this really something that the XHTML 2 Working Group should be defining
Roland: Not a big fan of object. Seems a bit of
a bucket.
... might be useful to learn from the work that HTML 5 people are doing.
Rich: would need to spend a lot more time on this to do it justice.
There are no outstanding comments.
<oedipus> rich:;statetype=-1;upostype=-1;changetype=-1;restype=-1
There are 10 open issues
<oedipus>
<oedipus> RESOLUTION: allow role to be chameleon ()
Issue 8017 - what should a user agent do when looking at multiple role values on an element
<oedipus> "allow role to be chameleon" is the only item noted as resolved in f2f minutes
Rich: browser vendors are going to use the
first one they recognize for navigation...
... There might be an extension mechanism for this in ARIA in the future. Role is also used in middle ware, so it is not about the user agent all the time.
something like this: The role attribute specification does not require any behavior by user agents. So the interpretation of values by user agents is outside the scope of this specification.
Issue 8019 - do not use CURIEs for role values
scoped values are an extension mechanism, and we need some mechanism for ready extension of roles. If CURIEs don't work out, we will have to consider other mechanisms, and full URIs are worthy of consideration.
<scribe> ACTION: Send formal reply to 8019 [recorded in]
ensure the submitter knows that the CURIE spec is separate and will be referenced by role in its next draft.
<oedipus> "I wonder in which namespace that non-qualified role values are interpreted." "we'll have hard time finding which one's the namespace or which one's the Class identifier"
issue 8020: shane will deal with
issue 8021: editorial
issue 8023: al gilman's comments
Issues 1 and 2 are dealt with via chameleon namespacing.
Issue 3 has been closed through negotiation.
<scribe> ACTION: Gregory to respond to issue 8023 formally so we have closure. [recorded in]
issue 8024:
There is an issue about CURIEs and how they behave when there are errors. That should be split out into a separate issue.
<scribe> ACTION: Shane to split 8024 into two issues. [recorded in]
CURIE spec does not deal with the issue of what happens when CURIEs are broken.
Editorial comments in the issue are valid, and will be addressed.
<oedipus> quote from PF telecon meeting 26 November 2007 (): AlGilman: "AG: tell XHTML2 WG not to worry about title as predefined role, PF will do it"
Issue 8025:
Issue 8026:
removing CURIE defintiion from role.... sure.
comment 2 - overcome by events
<oedipus> take-away comment: "
comment 3 - define a schema datatype for URIorCURIE. will do.
roland: we should include the schema implementation; possibly as non-normative for now
<scribe> ACTION: Shane to add a non-normative schema implementation to CURIE spec [recorded in]
issue 4 - correct, the definitions of CURIE were inconsistent and the one in rdfa-syntax is the right one. Overcome by events.
issue 5: the default is not in the xhtml vocab space for CURIEs generically, just with regard to the role attribute when used in the XHTML namespace.
issue 8027: agreed to allow chameleon
issue 8028: tag comment about curies. We have said the comment is not in scope because it is about CURIEs, not about how @role uses CURIEs.
This is scribe.perl Revision: 1.128 of Date: 2007/02/23 21:38:13 Check for newer version at Guessing input format: RRSAgent_Text_Format (score 1.00) Succeeded: s/alession/alessio/ Succeeded: s/soemthing/something/ Succeeded: s/8042/8024/ Found Scribe: ShaneM Inferring ScribeNick: ShaneM Default Present: +1.763.767.aaaa, Gregory_Rosmaita, [IBM] Present: +1.763.767.aaaa Gregory_Rosmaita [IBM] WARNING: Replacing previous Regrets list. (Old list: yam, steven, alessio) Use 'Regrets+ ... ' if you meant to add people without replacing the list, such as: <dbooth> Regrets+ Yam, Steven Regrets: Yam Steven Agenda: Got date from IRC log name: 12 Dec 2007 Guessing minutes URL: People with action items: an email formal gregory out poll reply roland send shane will WARNING: Input appears to use implicit continuation lines. You may need the "-implicitContinuations" option.[End of scribe.perl diagnostic output]
http://www.w3.org/2007/12/12-xhtml-minutes.html
I have searched a lot but have not been able to find a solution. There are also some questions about this on Stack Overflow, but I could not find a satisfactory answer, so I am asking it again.
I have a class as follows in Java. I know how to use threads in Java.
// please do not consider syntax if there is a printing mistake,
// as I am typing code just to show the concept in my mind
public class myclass {
    private List<String> mylist = new ArrayList<String>();

    public void addString(String str) {
        // code to add string in list
    }

    public void deleteString(String str) { // or passing an index to delete
        // code to delete string in list
    }
}
Now I want to do these two operations simultaneously. For that I have created two thread classes: one performs the addString() logic in run() and the other performs the deleteString() logic. I am passing mylist in the constructor of each thread, but how can I get the modified object back after performing the addition and deletion on mylist?

Before, I was thinking that "if I am passing mylist in the constructor of the thread, it passes the reference of the mylist object to the thread, and the thread's operations on it change the mylist object". But it is not like that, as the changes are not reflected in the mylist object. Can anyone elaborate on this?

What is the best way to achieve this?

The requirement is that while one thread is inserting an element at the end, another thread should simultaneously be able to delete an element at some other index, say the 2nd.
EDIT
I have done it as follows (thanks to Enno Shioji):
public class myClass {
    private List<String> mylist = Collections.synchronizedList(new ArrayList<String>());

    public myClass() {
        mylist.add("abc");
        mylist.add("def");
        mylist.add("ghi");
        mylist.add("jkl");
    }

    public void addString(String str) {
        mylist.add(str);
    }

    public void displayValues() {
        for (int i = 0; i < mylist.size(); i++) {
            System.out.println("value is " + mylist.get(i) + " at " + i);
        }
    }

    public void deleteString(int i) {
        mylist.remove(i);
    }
}

class addThread {
    public static void main(String a[]) {
        final myClass mine = new myClass();
        Thread t1 = new Thread() {
            @Override
            public void run() {
                mine.displayValues();
                mine.addString("aaa");
                mine.displayValues();
            }
        };
        Thread t2 = new Thread() {
            public void run() {
                mine.displayValues();
                mine.deleteString(1);
                mine.displayValues();
            }
        };
        t1.start();
        t2.start();
    }
}
Is there any other way to do so?
Use a synchronized list; it would be thread-safe:

Collections.synchronizedList(yourPlainList)
###
Threads and object instance are different concepts. If you want to share data among threads, you need to access a single object instance from two threads. In this case, you should do something like this.
public class MyClass {
    private final List<String> mylist = new ArrayList<String>();

    public synchronized void addString(String str) {
        // code to add string in list
    }

    public synchronized void deleteString(String str) { // or passing an index to delete
        // code to delete string in list
    }
}
and then
final MyClass mine = new MyClass();
Thread t1 = new Thread() {
    public void run() {
        mine.addString("aaa");
    }
};
Thread t2 = new Thread() {
    public void run() {
        mine.deleteString("bbb");
    }
};
t1.start();
t2.start();
Note how you are referring to the same object instance (
mine) from both threads. Also note that I added the
synchronized keyword to make
MyClass thread-safe. This forces all operations to be done sequentially rather than truly “simultaneously”. If you want true simultaneous operations on the collection, you will need to use concurrent data structures like a Skip List and get rid of
synchronized keyword.
https://exceptionshub.com/java-access-an-object-in-different-threads.html
By Leon Yin. Last updated 2017-06-11.
View this notebook in NBViewer or Github
This two part module will show you how to request data from USASpending.gov, store it as a tab-separated value (tsv), and perform some temporal and spatial analysis.
I hope these two modules are clear enough to be used by journalists, lawyers, and other folks who wish to use data to audit government contractors.
Please view part 2 on NBViewer or Github
This notebook describes how to download annual records from the USASpending.gov website for a specific contractor.

In this example we download all records for the Corrections Corporation of America (CCA) and all its subsidiaries.
Let's start by getting all the Python packages we used in this module first.
%%sh
pip install -r requirements.txt
%matplotlib inline
import os
from zipfile import ZipFile
from multiprocessing import Pool

import requests
from io import BytesIO
from itertools import repeat
import datetime

import pandas as pd
latest_update = '20170515'
next_latest = '20170115'
year = 2017
dep = 'All'
url = ('{UP_MONTH}/'
       'tsv/{{YEAR}}_{DEP}_Contracts_Full_{UP_DATE}.tsv.zip'.format(
           UP_MONTH=latest_update[:-2], DEP=dep, UP_DATE=latest_update))
url
'{YEAR}_All_Contracts_Full_20170515.tsv.zip'
# for files that were not updated by the date above...
url_legacy = ('{UP_MONTH}/'
              'tsv/{{YEAR}}_{DEP}_Contracts_Full_{UP_DATE}.tsv.zip'.format(
                  UP_MONTH=next_latest[:-2], DEP=dep, UP_DATE=next_latest))
data_in = 'data_in/spending'
# these are the years we are interested in:
start = 2000
end = 2017
years = [y for y in range(start, end + 1)]
# These are the aliases for each company. Matching is case insensitive,
# but each alias needs to be enclosed in single or double quotes!
companies = ['Corrections Corporation of America', 'CoreCivic', 'TransCor']
def load_and_sift(year, regex):
    '''Downloads a zipped tsv file of contract records into a requests
    object. Expands the zipfile and reads each file, chunkwise, into
    Pandas dataframes. The dataframe (df) is filtered by the companies'
    RegEx expression.

    Args:
        year (int): The fiscal year of records to load.
        regex (string): A regex expression of company name(s).

    Returns:
        df: a Pandas Dataframe containing records for the given year.
    '''
    print(year)
    r = requests.get(url.format(YEAR=year))
    last_update = datetime.datetime.strptime(latest_update, '%Y%m%d')
    if r.status_code == 404:
        # if the url doesn't work, use the legacy url.
        r = requests.get(url_legacy.format(YEAR=year))
        last_update = datetime.datetime.strptime(next_latest, '%Y%m%d')
    if r.status_code == 200:
        # make sure the download was successful.
        # the downloaded stream is a zip archive
        zipfile = ZipFile(BytesIO(r.content))
        df_final = pd.DataFrame()
        # for each file in the zip archive
        for f in zipfile.namelist():
            # process the file in dataframe chunks!
            for df in pd.read_csv(zipfile.open(f), sep='\t',
                                  chunksize=100000, low_memory=False):
                # filter the dataframe chunk for active vendors
                # and relevant company names.
                df = df[(~df['vendorname'].isnull()) &
                        (df['vendorname'].str.contains(regex, case=False))]
                # some date tags...
                df['lastupdate'] = last_update
                df['contract_year'] = year
                df['filename'] = f
                df['search_terms'] = regex
                df_final = df_final.append(df, ignore_index=True)
        return df_final
    else:
        raise RuntimeError('bad request')
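The chunk filter above hinges on `str.contains` with a regex of alternated company names. A minimal, self-contained illustration with toy data (the vendor names below are made up for demonstration):

```python
import pandas as pd

# toy stand-in for one chunk of the contracts file
chunk = pd.DataFrame({"vendorname": [
    "CORRECTIONS CORPORATION OF AMERICA", None, "ACME WIDGETS INC"]})

regex = "Corrections Corporation of America|CoreCivic|TransCor"

# na=False treats missing vendor names as non-matches, which
# plays the same role as the explicit null check in the function above
mask = chunk["vendorname"].str.contains(regex, case=False, na=False)
print(chunk[mask])
```

Only the first row survives the filter; the `None` row and the unrelated vendor are dropped.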
This next step might take a while.
It's expedited using pool, which parallelizes the task.
It is equivalent to:
df_list = []
for year, co in zip(years, repeat('|'.join(companies))):
    df_list.append(load_and_sift(year, co))
df = pd.concat(df_list, ignore_index=True)
with Pool() as pool:
    df_list = pool.starmap(load_and_sift,
                           zip(years, repeat('|'.join(companies))))
df = pd.concat(df_list, ignore_index=True)
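The starmap call needs one argument tuple per worker invocation; `zip` with `itertools.repeat` pairs every year with the same shared regex. A toy illustration of just that pairing (the values here are assumptions, not the real range):

```python
from itertools import repeat

years = [2015, 2016, 2017]          # toy values
regex = "CCA|CoreCivic|TransCor"

# each year gets paired with the same search pattern
arg_pairs = list(zip(years, repeat(regex)))
print(arg_pairs)
```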
We can take a peek at 5 random records here:
df.sample(5)
5 rows × 229 columns
There are a lot of columns! So it might be hard to know what you're after.
len(df.columns)
229
Save the data to a gzipped tab-separated value document (tsv):
outfile = data_in + '_' + companies[0].replace(' ', '_').lower() + '.tsv.gz'
df.to_csv(outfile, sep='\t', compression='gzip', index=False)
print("Data saved to {}".format(outfile))
Data saved to data_in/spending_corrections_corporation_of_america.tsv.gz
https://nbviewer.jupyter.org/github/yinleon/us-spending/blob/master/0_get_data.ipynb?flush=True
Exporting XML
Functions for Exporting XML
Export
You can export XML data from the Wolfram Language using the standard Export function.
The first argument of the function specifies the file to which the data should be exported. The second argument specifies the data to be exported. For exporting XML data, this can be a symbolic XML expression or any other Wolfram Language expression. You can also specify an optional third argument to control the form of the output. For exporting XML data, the relevant file formats are "XML", "ExpressionML", "MathML", and "SVG".
You can control details of the export process using options for Export.
ExportString
You can convert Wolfram Language expressions into XML strings using ExportString.
For exporting as XML, the relevant formats are "XML", "ExpressionML", "MathML", and "SVG".
You can control details of the export process using options for ExportString.
Export Options
Introduction
Standard options for Export or ExportString can be used for greater control over the export process.
Options for exporting XML data:
"Annotations"
This option controls which annotations are added to the output XML. The value is a list specifying which annotation types to include.

"Entities"

This option controls how special characters are exported; the "HTML" setting supports output of named character entities.
You can also specify a list as the value of this option. For example, if you want to export both HTML and MathML entities, use "Entities"->{"HTML","MathML"}. If neither the "HTML" nor "MathML" setting is used, all characters are still output correctly in XML. However, they may be numeric entities or encoded in UTF-8.
If you use your own list of character replacement rules, you are also responsible for including some basic escaping required by XML; for example, the characters < and & must always be escaped as &lt; and &amp;.
The XML produced in this case is not meaningful because there is no namespace declaration of the form xmlns:prefix = "". This is desirable when you are exporting the XML as a fragment to be enclosed in an outer piece of XML, for which the namespace has a binding.
https://reference.wolfram.com/language/XML/tutorial/ExportingXML.html
If you work with big data sets, you probably remember the “aha” moment along your Python journey when you discovered the Pandas library. Pandas is a game-changer for data science and analytics, particularly if you came to Python because you were searching for something more powerful than Excel and VBA.
So what is it about Pandas that has data scientists, analysts, and engineers like me raving? Well, the Pandas documentation says that it uses:
“fast, flexible, and expressive data structures designed to make working with “relational” or “labeled” data both easy and intuitive.”
Fast, flexible, easy, and intuitive? That sounds great! If your job involves building complicated data models, you don’t want to spend half of your development hours waiting for modules to churn through big data sets. You want to dedicate your time and brainpower to interpreting your data, rather than painstakingly fumbling around with less powerful tools.
But I Heard That Pandas Is Slow…
When I first started using Pandas, I was advised that, while it was a great tool for dissecting data, Pandas was too slow to use as a statistical modeling tool. Starting out, this proved true. I spent more than a few minutes twiddling my thumbs, waiting for Pandas to churn through data.
But then I learned that Pandas is built on top of the NumPy array structure, and so many of its operations are carried out in C, either via NumPy or through Pandas’ own library of Python extension modules that are written in Cython and compiled to C. So, shouldn’t Pandas be fast too?
It absolutely should be, if you use it the way it was intended!
The paradox is that what may otherwise “look like” Pythonic code can be suboptimal in Pandas as far as efficiency is concerned. Like NumPy, Pandas is designed for vectorized operations that operate on entire columns or datasets in one sweep. Thinking about each “cell” or row individually should generally be a last resort, not a first.
This Tutorial
To be clear, this is not a guide about how to over-optimize your Pandas code. Pandas is already built to run quickly if used correctly. Also, there’s a big difference between optimization and writing clean code.
This is a guide to using Pandas Pythonically to get the most out of its powerful and easy-to-use built-in features. Additionally, you will learn a couple of practical time-saving tips, so you won’t be twiddling those thumbs every time you work with your data.
In this tutorial, you’ll cover the following:
- Advantages of using datetime data with time series
- The most efficient route to doing batch calculations
- Saving time by storing data with HDFStore
To demonstrate these topics, I’ll take an example from my day job that looks at a time series of electricity consumption. After loading the data, you’ll successively progress through more efficient ways to get to the end result. One adage that holds true for most of Pandas is that there is more than one way to get from A to B. This doesn’t mean, however, that all of the available options will scale equally well to larger, more demanding datasets.
Assuming that you already know how to do some basic data selection in Pandas, let’s get started.
The Task at Hand
The goal of this example will be to apply time-of-use energy tariffs to find the total cost of energy consumption for one year. That is, at different hours of the day, the price for electricity varies, so the task is to multiply the electricity consumed for each hour by the correct price for the hour in which it was consumed.
Let’s read our data from a CSV file that has two columns: one for date plus time and one for electrical energy consumed in kilowatt hours (kWh):
The rows contain the electricity used in each hour, so there are 365 x 24 = 8760 rows for the whole year. Each row indicates the usage for the "hour starting" at the time, so 1/1/13 0:00 indicates the usage for the first hour of January 1st.
Saving Time With Datetime Data
The first thing you need to do is to read your data from the CSV file with one of Pandas’ I/O functions:
>>> import pandas as pd
>>> pd.__version__
'0.23.1'

# Make sure that `demand_profile.csv` is in your
# current working directory.
>>> df = pd.read_csv('demand_profile.csv')
>>> df.head()
     date_time  energy_kwh
0  1/1/13 0:00       0.586
1  1/1/13 1:00       0.580
2  1/1/13 2:00       0.572
3  1/1/13 3:00       0.596
4  1/1/13 4:00       0.592
This looks okay at first glance, but there’s a small issue. Pandas and NumPy have a concept of
dtypes (data types). If no arguments are specified,
date_time will take on an
object dtype:
>>> df.dtypes
date_time      object
energy_kwh    float64
dtype: object

>>> type(df.iat[0, 0])
str
This is not ideal.
object is a container for not just
str, but any column that can’t neatly fit into one data type. It would be arduous and inefficient to work with dates as strings. (It would also be memory-inefficient.)
For working with time series data, you’ll want the
date_time column to be formatted as an array of datetime objects. (Pandas calls this a
Timestamp.) Pandas makes each step here rather simple:
>>> df['date_time'] = pd.to_datetime(df['date_time'])
>>> df['date_time'].dtype
datetime64[ns]
(Note that you could alternatively use a Pandas
PeriodIndex in this case.)
You now have a DataFrame called
df that looks much like our CSV file. It has two columns and a numerical index for referencing the rows.
>>> df.head()
            date_time  energy_kwh
0 2013-01-01 00:00:00       0.586
1 2013-01-01 01:00:00       0.580
2 2013-01-01 02:00:00       0.572
3 2013-01-01 03:00:00       0.596
4 2013-01-01 04:00:00       0.592
The code above is simple and easy, but how fast is it? Let's put it to the test using a timing decorator, which I have unoriginally called
@timeit. This decorator largely mimics
timeit.repeat() from Python’s standard library, but it allows you to return the result of the function itself and print its average runtime from multiple trials. (Python’s
timeit.repeat() returns the timing results, not the function result.)
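The article doesn't show @timeit itself. Here is a minimal sketch consistent with the behavior described above; the exact original implementation is not reproduced, so treat the names and the message format as assumptions:

```python
import functools
import time

def timeit(repeat=3, number=10):
    """Roughly mimic timeit.repeat(), but return the wrapped
    function's own result and print an average runtime."""
    def decorator(func):
        @functools.wraps(func)
        def wrapper(*args, **kwargs):
            runtimes = []
            for _ in range(repeat):              # outer loop: independent trials
                t0 = time.perf_counter()
                for _ in range(number):          # inner loop: calls per trial
                    result = func(*args, **kwargs)
                runtimes.append((time.perf_counter() - t0) / number)
            best = min(runtimes)
            print(f"Best of {repeat} trials with {number} function calls per trial:")
            print(f"Function `{func.__name__}` ran in average of {best:.3f} seconds.")
            return result                        # unlike timeit.repeat(), keep the result
        return wrapper
    return decorator
```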
Creating a function and placing the
@timeit decorator directly above it will mean that every time the function is called, it will be timed. The decorator runs an outer loop and an inner loop:
>>> @timeit(repeat=3, number=10)
... def convert(df, column_name):
...     return pd.to_datetime(df[column_name])

>>> # Read in again so that we have `object` dtype to start
>>> df['date_time'] = convert(df, 'date_time')
Best of 3 trials with 10 function calls per trial:
Function `convert` ran in average of 1.610 seconds.
The result? 1.6 seconds for 8760 rows of data. “Great,” you might say, “that’s no time at all.” But what if you encounter larger data sets—say, one year of electricity use at one-minute intervals? That’s 60 times more data, so you’ll end up waiting around one and a half minutes. That’s starting to sound less tolerable.
In actuality, I recently analyzed 10 years of hourly electricity data from 330 sites. Do you think I waited 88 minutes to convert datetimes? Absolutely not!
How can you speed this up? As a general rule, Pandas will be far quicker the less it has to interpret your data. In this case, you will see huge speed improvements just by telling Pandas what your time and date data looks like, using the format parameter. You can do this by using the
strftime codes found here and entering them like this:
>>> @timeit(repeat=3, number=100)
... def convert_with_format(df, column_name):
...     return pd.to_datetime(df[column_name],
...                           format='%d/%m/%y %H:%M')
Best of 3 trials with 100 function calls per trial:
Function `convert_with_format` ran in average of 0.032 seconds.
The new result? 0.032 seconds, which is 50 times faster! So you’ve just saved about 86 minutes of processing time for my 330 sites. Not a bad improvement!
One finer detail is that the datetimes in the CSV are not in ISO 8601 format: you’d need
YYYY-MM-DD HH:MM. If you don’t specify a format, Pandas will use the
dateutil package to convert each string to a date.
Conversely, if the raw datetime data is already in ISO 8601 format, Pandas can immediately take a fast route to parsing the dates. This is one reason why being explicit about the format is so beneficial here. Another option is to pass
infer_datetime_format=True parameter, but it generally pays to be explicit.
Note: Pandas’
read_csv() also allows you to parse dates as a part of the file I/O step. See the
parse_dates,
infer_datetime_format, and
date_parser parameters.
Simple Looping Over Pandas Data
Now that your dates and times are in a convenient format, you are ready to get down to the business of calculating your electricity costs. Remember that cost varies by hour, so you will need to conditionally apply a cost factor to each hour of the day. In this example, the time-of-use costs will be defined as follows:

Tariff Type | Cents per kWh | Time Range
Peak        | 28            | 17:00 to 24:00
Shoulder    | 20            | 7:00 to 17:00
Off-Peak    | 12            | 0:00 to 7:00
If the price were a flat 28 cents per kWh for every hour of the day, most people familiar with Pandas would know that this calculation could be achieved in one line:
>>> df['cost_cents'] = df['energy_kwh'] * 28
This will result in the creation of a new column with the cost of electricity for that hour:
            date_time  energy_kwh  cost_cents
0 2013-01-01 00:00:00       0.586      16.408
1 2013-01-01 01:00:00       0.580      16.240
2 2013-01-01 02:00:00       0.572      16.016
3 2013-01-01 03:00:00       0.596      16.688
4 2013-01-01 04:00:00       0.592      16.576
# ...
But our cost calculation is conditional on the time of day. This is where you will see a lot of people using Pandas the way it was not intended: by writing a loop to do the conditional calculation.
For the rest of this tutorial, you’ll start from a less-than-ideal baseline solution and work up to a Pythonic solution that fully leverages Pandas.
But what is Pythonic in the case of Pandas? The irony is that it is those who are experienced in other (less user-friendly) coding languages such as C++ or Java that are particularly susceptible to this because they instinctively “think in loops.”
Let’s look at a loop approach that is not Pythonic and that many people take when they are unaware of how Pandas is designed to be used. We will use
@timeit again to see how fast this approach is.
First, let’s create a function to apply the appropriate tariff to a given hour:
def apply_tariff(kwh, hour):
    """Calculates cost of electricity for given hour."""
    if 0 <= hour < 7:
        rate = 12
    elif 7 <= hour < 17:
        rate = 20
    elif 17 <= hour < 24:
        rate = 28
    else:
        raise ValueError(f'Invalid hour: {hour}')
    return rate * kwh
Here’s the loop that isn’t Pythonic, in all its glory:
>>> # NOTE: Don't do this!
>>> @timeit(repeat=3, number=100)
... def apply_tariff_loop(df):
...     """Calculate costs in loop.  Modifies `df` inplace."""
...     energy_cost_list = []
...     for i in range(len(df)):
...         # Get electricity used and hour of day
...         energy_used = df.iloc[i]['energy_kwh']
...         hour = df.iloc[i]['date_time'].hour
...         energy_cost = apply_tariff(energy_used, hour)
...         energy_cost_list.append(energy_cost)
...     df['cost_cents'] = energy_cost_list
...
>>> apply_tariff_loop(df)
Best of 3 trials with 100 function calls per trial:
Function `apply_tariff_loop` ran in average of 3.152 seconds.
For people who picked up Pandas after having written “pure Python” for some time prior, this design might seem natural: you have a typical “for each x, conditional on y, do z.”
However, this loop is clunky. You can consider the above to be an “antipattern” in Pandas for several reasons. Firstly, it needs to initialize a list in which the outputs will be recorded.
Secondly, it uses the opaque object
range(0, len(df)) to loop over, and then after applying
apply_tariff(), it has to append the result to a list that is used to make the new DataFrame column. It also does what is called chained indexing with
df.iloc[i]['date_time'], which often leads to unintended results.
But the biggest issue with this approach is the time cost of the calculations. On my machine, this loop took over 3 seconds for 8760 rows of data. Next, you’ll look at some improved solutions for iteration over Pandas structures.
Looping with .itertuples() and .iterrows()
What other approaches can you take? Well, Pandas has actually made the
for i in range(len(df)) syntax redundant by introducing the
DataFrame.itertuples() and
DataFrame.iterrows() methods. These are both generator methods that
yield one row at a time.
.itertuples() yields a
namedtuple for each row, with the row’s index value as the first element of the tuple. A
namedtuple is a data structure from Python's
collections module that behaves like a Python tuple but has fields accessible by attribute lookup.
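A quick sketch of what .itertuples() hands you (toy data, not the article's CSV):

```python
import pandas as pd

df_demo = pd.DataFrame({"energy_kwh": [0.586, 0.580]})

rows = []
for row in df_demo.itertuples():
    # the index comes first, as the Index field;
    # column values are accessible by attribute lookup
    rows.append((row.Index, row.energy_kwh))
print(rows)
```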
.iterrows() yields pairs (tuples) of (index,
Series) for each row in the DataFrame.
While
.itertuples() tends to be a bit faster, let’s stay in Pandas and use
.iterrows() in this example, because some readers might not have run across
nametuple. Let’s see what this achieves:
>>> @timeit(repeat=3, number=100)
... def apply_tariff_iterrows(df):
...     energy_cost_list = []
...     for index, row in df.iterrows():
...         # Get electricity used and hour of day
...         energy_used = row['energy_kwh']
...         hour = row['date_time'].hour
...         # Append cost list
...         energy_cost = apply_tariff(energy_used, hour)
...         energy_cost_list.append(energy_cost)
...     df['cost_cents'] = energy_cost_list
...
>>> apply_tariff_iterrows(df)
Best of 3 trials with 100 function calls per trial:
Function `apply_tariff_iterrows` ran in average of 0.713 seconds.
Some marginal gains have been made. The syntax is more explicit, and there is less clutter in your row value references, so it's more readable. In terms of time gains, it is almost five times quicker!
However, there is more room for improvement. You’re still using some form of a Python for-loop, meaning that each and every function call is done in Python when it could ideally be done in a faster language built into Pandas’ internal architecture.
Pandas' .apply()
You can further improve this operation using the
.apply() method instead of
.iterrows(). Pandas’
.apply() method takes functions (callables) and applies them along an axis of a DataFrame (all rows, or all columns). In this example, a lambda function will help you pass the two columns of data into
apply_tariff():
>>> @timeit(repeat=3, number=100)
... def apply_tariff_withapply(df):
...     df['cost_cents'] = df.apply(
...         lambda row: apply_tariff(
...             kwh=row['energy_kwh'],
...             hour=row['date_time'].hour),
...         axis=1)
...
>>> apply_tariff_withapply(df)
Best of 3 trials with 100 function calls per trial:
Function `apply_tariff_withapply` ran in average of 0.272 seconds.
The syntactic advantages of
.apply() are clear, with a significant reduction in the number of lines and very readable, explicit code. In this case, the time taken was roughly half that of the
.iterrows() method.
However, this is not yet “blazingly fast.” One reason is that
.apply() will try internally to loop over Cython iterators. But in this case, the
lambda that you passed isn’t something that can be handled in Cython, so it’s called in Python, which is consequently not all that fast.
If you were to use
.apply() for my 10 years of hourly data for 330 sites, you’d be looking at around 15 minutes of processing time. If this calculation were intended to be a small part of a larger model, you’d really want to speed things up. That’s where vectorized operations come in handy.
Selecting Data With .isin()
Earlier, you saw that if there were a single electricity price, you could apply that price across all the electricity consumption data in one line of code (
df['energy_kwh'] * 28). This particular operation was an example of a vectorized operation, and it is the fastest way to do things in Pandas.
But how can you apply condition calculations as vectorized operations in Pandas? One trick is to select and group parts of the DataFrame based on your conditions and then apply a vectorized operation to each selected group.
In this next example, you will see how to select rows with Pandas’
.isin() method and then apply the appropriate tariff in a vectorized operation. Before you do this, it will make things a little more convenient if you set the
date_time column as the DataFrame’s index:
df.set_index('date_time', inplace=True)

@timeit(repeat=3, number=100)
def apply_tariff_isin(df):
    # Define hour range Boolean arrays
    peak_hours = df.index.hour.isin(range(17, 24))
    shoulder_hours = df.index.hour.isin(range(7, 17))
    off_peak_hours = df.index.hour.isin(range(0, 7))

    # Apply tariffs to hour ranges
    df.loc[peak_hours, 'cost_cents'] = df.loc[peak_hours, 'energy_kwh'] * 28
    df.loc[shoulder_hours, 'cost_cents'] = df.loc[shoulder_hours, 'energy_kwh'] * 20
    df.loc[off_peak_hours, 'cost_cents'] = df.loc[off_peak_hours, 'energy_kwh'] * 12
Let’s see how this compares:
>>> apply_tariff_isin(df)
Best of 3 trials with 100 function calls per trial:
Function `apply_tariff_isin` ran in average of 0.010 seconds.
To understand what’s happening in this code, you need to know that the
.isin() method is returning an array of Boolean values that looks like this:
[False, False, False, ..., True, True, True]
These values identify which DataFrame indices (datetimes) fall within the hour range specified. Then, when you pass these Boolean arrays to the DataFrame’s
.loc indexer, you get a slice of the DataFrame that only includes rows that match those hours. After that, it is simply a matter of multiplying the slice by the appropriate tariff, which is a speedy vectorized operation.
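A tiny stand-alone check of that Boolean-array behavior, using a plain Series of the 24 hours of a day rather than the full datetime index:

```python
import pandas as pd

hours = pd.Series(range(24))
# True exactly for the peak window, hours 17 through 23
peak = hours.isin(range(17, 24))
print(peak.tolist())
```

Passing such a mask to .loc keeps only the rows where the mask is True.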
How does this compare to our looping operations above? Firstly, you may notice that you no longer need
apply_tariff(), because all the conditional logic is applied in the selection of the rows. So there is a huge reduction in the lines of code you have to write and in the Python code that is called.
What about the processing time? 315 times faster than the loop that wasn’t Pythonic, around 71 times faster than
.iterrows() and 27 times faster than
.apply(). Now you are moving at the kind of speed you need to get through big data sets nice and quickly.
Can We Do Better?
In
apply_tariff_isin(), we are still admittedly doing some “manual work” by calling
df.loc and
df.index.hour.isin() three times each. You could argue that this solution isn’t scalable if we had a more granular range of time slots. (A different rate for each hour would require 24
.isin() calls.) Luckily, you can do things even more programmatically with Pandas’
pd.cut() function in this case:
@timeit(repeat=3, number=100)
def apply_tariff_cut(df):
    cents_per_kwh = pd.cut(x=df.index.hour,
                           bins=[0, 7, 17, 24],
                           include_lowest=True,
                           labels=[12, 20, 28]).astype(int)
    df['cost_cents'] = cents_per_kwh * df['energy_kwh']
Let’s take a second to see what’s going on here.
pd.cut() is applying an array of labels (our costs) according to which bin each hour belongs in. Note that the
include_lowest parameter indicates whether the first interval should be left-inclusive or not. (You want to include
time=0 in a group.)
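To see exactly which rate each hour receives, here is a small check on a handful of hours (toy inputs):

```python
import pandas as pd

hours = [0, 6, 7, 16, 17, 23]
rates = pd.cut(x=hours, bins=[0, 7, 17, 24],
               include_lowest=True, labels=[12, 20, 28]).astype(int)
print(list(rates))
```

Note that pd.cut's intervals are right-inclusive, so the boundary hours 7 and 17 land in the lower bin here, which is a subtle difference from the half-open ranges in apply_tariff() above.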
This is a fully vectorized way to get to your intended result, and it comes out on top in terms of timing:
>>> apply_tariff_cut(df)
Best of 3 trials with 100 function calls per trial:
Function `apply_tariff_cut` ran in average of 0.003 seconds.
So far, you've built up from taking potentially over an hour to under a second to process the full 330-site dataset. Not bad! There is one last option, though, which is to use NumPy functions to manipulate the underlying NumPy arrays for each DataFrame, and then to integrate the results back into Pandas data structures.
Don’t Forget NumPy!
One point that should not be forgotten when you are using Pandas is that Pandas Series and DataFrames are designed on top of the NumPy library. This gives you even more computation flexibility, because Pandas works seamlessly with NumPy arrays and operations.
In this next case you’ll use NumPy’s
digitize() function. It is similar to Pandas’
cut() in that the data will be binned, but this time it will be represented by an array of indexes representing which bin each hour belongs to. These indexes are then applied to a prices array:
import numpy as np

@timeit(repeat=3, number=100)
def apply_tariff_digitize(df):
    prices = np.array([12, 20, 28])
    bins = np.digitize(df.index.hour.values, bins=[7, 17, 24])
    df['cost_cents'] = prices[bins] * df['energy_kwh'].values
Like the
cut() function, this syntax is wonderfully concise and easy to read. But how does it compare in speed? Let’s see:
>>> apply_tariff_digitize(df)
Best of 3 trials with 100 function calls per trial:
Function `apply_tariff_digitize` ran in average of 0.002 seconds.
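To make digitize's bin indices concrete, a toy check on a few hours (sample values chosen here for illustration):

```python
import numpy as np

hours = np.array([0, 6, 7, 16, 17, 23])
# index 0 for hours below 7, 1 for [7, 17), 2 for [17, 24)
bin_idx = np.digitize(hours, bins=[7, 17, 24])
prices = np.array([12, 20, 28])
print(prices[bin_idx].tolist())
```

Unlike pd.cut above, digitize's default edges are left-inclusive, so the boundary hours 7 and 17 land in the upper bin, matching apply_tariff().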
At this point, there’s still a performance improvement, but it’s becoming more marginal in nature. This is probably a good time to call it a day on hacking away at code improvement and think about the bigger picture.
With Pandas, it can help to maintain “hierarchy,” if you will, of preferred options for doing batch calculations like you’ve done here. These will usually rank from fastest to slowest (and most to least flexible):
1. Use vectorized operations: Pandas methods and functions with no for-loops.
2. Use the .apply() method with a callable.
3. Use .itertuples(): iterate over DataFrame rows as namedtuples from Python's collections module.
4. Use .iterrows(): iterate over DataFrame rows as (index, pd.Series) pairs. While a Pandas Series is a flexible data structure, it can be costly to construct each row into a Series and then access it.
5. Use "element-by-element" for loops, updating each cell or row one at a time with df.loc or df.iloc. (Or, .at/.iat for fast scalar access.)
Don’t Take My Word For It: The order of precedence above is a suggestion straight from a core Pandas developer.
Here's the "order of precedence" above at work, with the average runtime of each function you've built here:

apply_tariff_loop:      3.152 s
apply_tariff_iterrows:  0.713 s
apply_tariff_withapply: 0.272 s
apply_tariff_isin:      0.010 s
apply_tariff_cut:       0.003 s
apply_tariff_digitize:  0.002 s
Prevent Reprocessing with HDFStore
Now that you have looked at quick data processes in Pandas, let’s explore how to avoid reprocessing time altogether with HDFStore, which was recently integrated into Pandas.
Often when you are building a complex data model, it is convenient to do some pre-processing of your data. For example, if you had 10 years of minute-frequency electricity consumption data, simply converting the date and time to datetime might take 20 minutes, even if you specify the format parameter. You really only want to have to do this once, not every time you run your model, for testing or analysis.
A very useful thing you can do here is pre-process and then store your data in its processed form to be used when needed. But how can you store data in the right format without having to reprocess it again? If you were to save as CSV, you would simply lose your datetime objects and have to re-process it when accessing again.
Pandas has a built-in solution for this which uses HDF5, a high-performance storage format designed specifically for storing tabular arrays of data. Pandas' HDFStore class allows you to store your DataFrame in an HDF5 file so that it can be accessed efficiently, while still retaining column types and other metadata. It is a dictionary-like class, so you can read and write just as you would for a Python dict object.
Here’s how you would go about storing your pre-processed electricity consumption DataFrame,
df, in an HDF5 file:
# Create storage object with filename `processed_data`
data_store = pd.HDFStore('processed_data.h5')

# Put DataFrame into the object setting the key as 'preprocessed_df'
data_store['preprocessed_df'] = df
data_store.close()
Now you can shut your computer down and take a break knowing that you can come back and your processed data will be waiting for you when you need it. No reprocessing required. Here’s how you would access your data from the HDF5 file, with data types preserved:
# Access data store
data_store = pd.HDFStore('processed_data.h5')

# Retrieve data using key
preprocessed_df = data_store['preprocessed_df']
data_store.close()
A data store can house multiple tables, with the name of each as a key.
Just a note about using the HDFStore in Pandas: you will need to have PyTables >= 3.0.0 installed, so after you have installed Pandas, make sure to update PyTables like this:
pip install --upgrade tables
Conclusions

Here are a few rules of thumb that you can apply next time you're working with large data sets in Pandas:
Try to use vectorized operations where possible rather than approaching problems with the "for x in df..." mentality. If your code is home to a lot of for-loops, it might be better suited to working with native Python data structures, because Pandas otherwise comes with a lot of overhead.
If you have more complex operations where vectorization is simply impossible or too difficult to work out efficiently, use the .apply() method.
If you do have to loop over your array (which does happen), use .iterrows() or .itertuples() to improve speed and syntax.
Pandas has a lot of optionality, and there are almost always several ways to get from A to B. Be mindful of this, compare how different routes perform, and choose the one that works best in the context of your project.
Once you’ve got a data cleaning script built, avoid reprocessing by storing your intermediate results with HDFStore.
Integrating NumPy into Pandas operations can often improve speed and simplify syntax.
Source: https://realpython.com/fast-flexible-pandas/
For the one shot upload usecase I tend to lean towards /v3/<plugin>/<content_type>/upload/ and for content_types that require special treatment we can define separate endpoints. If talking about modulemd or modulemd_defaults, it could be /v3/rpm/modules/upload/.

--------
Regards,
Ina Panova
Senior Software Engineer | Pulp | Red Hat Inc.

"Do not go where the path may lead, go instead where there is no path and leave a trail."

On Wed, Jul 31, 2019 at 1:04 PM Tatiana Tereshchenko <ttereshc at redhat.com> wrote:

> If the goal is to make endpoints unified across all actions, then I think we can only do
> POST /pulp/api/v3//plugin/action/ types=[]
>
> Having plugin/content_type/upload would be nice, however I'm not sure if it covers enough use cases.
> E.g. For pulp_rpm, it makes sense for packages or advisories to have a dedicated endpoint each, however it doesn't make much sense for modulemd or modulemd_defaults, because usually they are in the same file and uploaded in bulk (maybe a separate endpoint is needed for this case).
>
> For the copy case, it's common to copy more than one type, I think, so probably 'plugin/copy/ types=[]' makes more sense.
>
> It would be great to hear from more people and other plugins.
>
> On Mon, Jul 29, 2019 at 5:46 PM Pavel Picka <ppicka at redhat.com> wrote:
>
>> +1 for discussing this to keep some standard, as I have already opened PRs for rpm modulemd[-defaults].
>> I like the idea of /upload at the end.
>> But I also think it can work without it, as it will differ by POST/GET methods.
>>
>> On Mon, Jul 29, 2019 at 4:49 PM Dana Walker <dawalker at redhat.com> wrote:
>>
>>> Just to provide an added data point, I'll be merging the one-shot PR for pulp_python soon and it currently uses /api/v3/python/upload/
>>>
>>> I wanted to keep it simple as well, and so would be happy to change it for consistency based on whatever we decide.
>>>
>>> --Dana
>>>
>>> Dana Walker
>>> She / Her / Hers
>>> Software Engineer, Pulp Project
>>> Red Hat
>>> dawalker at redhat.com
>>>
>>> On Mon, Jul 29, 2019 at 10:42 AM Ina Panova <ipanova at redhat.com> wrote:
>>>
>>>> Hi all,
>>>> As of today, plugins have the freedom to define whichever endpoints they want (to some extent).
>>>> This leads to the question - shall we namespace one-shot upload and copy endpoints for some consistency?
>>>>
>>>> POST /api/v3/content/rpm/packages/upload/
>>>> POST /api/v3/content/rpm/packages/copy/
>>>>
>>>> or
>>>>
>>>> POST /api/v3/content/rpm/upload/ type=package
>>>> POST /api/v3/content/rpm/copy/ type=[package, modulemd]
>>>>
>>>> I wanted to bring this up before it diverges a lot. For the record, I have checked only the RPM plugin; I am not aware of the state of the other plugins.
>>>> Right now we have an active endpoint for one-shot upload of an rpm package:
>>>> POST /api/v3/content/rpm/upload/
>>>>
>>>> And there is a PR for one-shot upload of modulemd-defaults:
>>>> POST /api/v3/content/rpm/modulemd-defaults/
>>>>
>>>> For rpm copy we have POST /api/v3/content/rpm/copy/ types=[]
>>>>
>>>> We are starting some work on docker recursive copy, so it would be helpful to reach some agreement before going further down that path.
>>>>
>>>> Thanks
>>>
>> --
>> Pavel Picka
>> Red Hat
>> _______________________________________________
>> Pulp-dev mailing list
>> Pulp-dev at redhat.com
Source: https://listman.redhat.com/archives/pulp-dev/2019-August/003349.html
Save/export files
openFrameworks will save to your bin/data folder unless you specify another filepath.
If you want to save many files, each file will need to have its own unique file name. A quick way of doing this is to use the current timestamp, because it is never the same. So instead of naming it "myFile.xml", which will overwrite itself every time you save, you can use "myFile_" + ofGetTimestampString() + ".xml" to give each file its own name.
You can save a file anywhere in your application, but you may want to trigger it at a specific moment. For example, you might want your file to save every time you press a specific key.
void ofApp::keyPressed(int key){
    if(key == 's'){
        // save your file in here!!
    }
}
Or you might want to call it everytime you exit your application.
void ofApp::exit(){
    // save your file in here!!
}
Note that exit() will fire automatically when you close your app or press Esc, but not if you stop the app from the IDE.
A Text File
in the header file (.h)
ofFile myTextFile;
in the implementation file (.cpp)
To create the file in setup.
myTextFile.open("text.txt",ofFile::WriteOnly);
or if you want to append to an existing txt file.
myTextFile.open("text.txt",ofFile::Append);
To add text.
myTextFile << "some text";
This will automatically save, so there is no need to call a save function.
XML Settings
in the header file (.h)
Include the XML addon at the top:
#include "ofxXmlSettings.h"
Initialize your variable:
ofxXmlSettings XML;
in the implementation file (.cpp)
add something to your XML:
XML.setValue("settings:number", 11);
save it!
XML.saveFile("mySettings.xml");
For more info refer to the examples/utils/xmlExample.
An Image
in the header file (.h)
ofImage img;
in the implementation file (.cpp)
Make an image. There are many ways of creating an image! Including grabbing from a camera, creating it pixel by pixel, grabbing from an FBO. I am only showing one option, which is to draw to the screen and then grab that as an image.
// in draw
ofSetColor(255, 130, 0);
ofFill();
ofDrawCircle(100, 100, 50);

// in keyPressed
img.grabScreen(0, 0, 300, 300);
Then trigger a save in your location of choice. Perhaps in the keyPressed or exit functions. You can save as either a .png or a .jpg.
img.save("myPic.jpg");
Optionally you can specify the quality at which you wish to save by adding an additional parameter.
img.save("myPic.jpg",OF_IMAGE_QUALITY_LOW);
the default is OF_IMAGE_QUALITY_BEST and all the options are:
OF_IMAGE_QUALITY_BEST, OF_IMAGE_QUALITY_HIGH, OF_IMAGE_QUALITY_MEDIUM, OF_IMAGE_QUALITY_LOW, OF_IMAGE_QUALITY_WORST
Here is what the output should look like in this case.
For more info refer to the examples/graphics/imageSaverExample.
Source: http://openframeworks.cc/learning/01_basics/how_to_save_a_file/
wcsncat man page
Prolog
This manual page is part of the POSIX Programmer's Manual. The Linux implementation of this interface may differ (consult the corresponding Linux manual page for details of Linux behavior), or the interface may not be implemented on Linux.
wcsncat — concatenate a wide-character string with part of another
Synopsis
#include <wchar.h>
wchar_t *wcsncat(wchar_t *restrict ws1, const wchar_t *restrict ws2, size_t n);

Description

The functionality described on this reference page is aligned with the ISO C standard. Any conflict between the requirements described here and the ISO C standard is unintentional. This volume of POSIX.1-2008 defers to the ISO C standard.

The wcsncat() function shall append not more than n wide-character codes (a null wide-character code and wide-character codes that follow it are not appended) from the array pointed to by ws2 to the end of the wide-character string pointed to by ws1. The initial wide-character code of ws2 shall overwrite the null wide-character code at the end of ws1. A terminating null wide-character code shall always be appended to the result. If copying takes place between objects that overlap, the behavior is undefined.
Return Value
The wcsncat() function shall return ws1; no return value is reserved to indicate an error.
Errors
No errors are defined.
The following sections are informative.
Examples
None.
Application Usage
None.
Rationale
None.
Future Directions
None.
See Also
wcscat() .
Source: https://www.mankier.com/3p/wcsncat
Concurrency.
Consider a program to simulate a forest, in which there are big crazy banana-loving monkeys surrounding a big banana tree. Since these monkeys love banana like crazy, they jump at the banana tree all at once until there are no bananas left.
Naive implementation
/* Tree.java */
public class Tree{
    private int bananas;

    public Tree(int b){
        this.bananas = b;
    }

    public void grows(){
        this.bananas += 1;
    }

    public boolean drops(){
        this.bananas -= 1;
        return true;
    }
}

/* Monkey.java */
public class Monkey implements Runnable{
    Tree tree;

    public Monkey(Tree tree){
        this.tree = tree;
    }

    public void run(){
        try{
            for(int i = 0; i <= 10; i++){
                Thread.sleep(10);
                this.tree.drops();
            }
        } catch(InterruptedException e){
            // ...
        }
    }
}
Looks good? No. Say there are 2 monkeys trying to grasp the fruit. The line this.bananas -= 1; does not happen in a single CPU clock cycle, but rather consists of smaller steps: read-update-write. These smaller steps are interleaved so that monkeys can proceed concurrently (but not necessarily simultaneously). One possible scenario, commonly referred to as a race condition:
Monkey A reads  : 5 (bananas)
Monkey B reads  : 5
Monkey A updates: 5 - 1 = 4
Monkey A writes : 4
Monkey B updates: 5 - 1 = 4
Monkey B writes : 4
In the end, there are 4 bananas left instead of 3, as there should be.
Atomic access
The above problem arises because updating a variable in Java is not atomic, but spans multiple CPU clock cycles. So the simplest solution would be to make this update operation atomic, i.e. make it happen all at once.
For writes to primitive type variables, Java provides the volatile keyword, which ensures every write happens-before subsequent reads of the same memory area (see docs). Note, though, that volatile only makes the individual read and write atomic and visible; a compound update like bananas -= 1 is still a read-update-write sequence, so the atomic classes mentioned below are needed to make it truly atomic. The above Tree class can be modified as:
/* Tree.java */
public class Tree{
    private volatile int bananas;
Besides, Java 1.5 and above come with the java.util.concurrent.atomic package, which provides atomic update operations, for reference types as well (docs).
Problem solved? No (again). The situation we are modelling is not that simple. The banana tree represents a kind of limited resource: when the number of bananas reaches zero, the monkeys stop taking them. Let's add a check in the drops function:
/* inside Tree class */
public boolean drops(){
    if(this.bananas > 0){
        this.bananas -= 1;
        return true;
    }
    return false;
}
Unfortunately, adding that check makes the drops operation no longer atomic. Again, we see a three-step read-check-write operation, which is prone to the same race condition as the read-update-write operation above, just at a higher logical level. In order to solve this problem, we will group these read-check-write steps into one single logical block using one of the most commonly used approaches to dealing with concurrency: serialization.
Serialization
Imagine that there is a big locked-gate to the banana tree. Monkey who wants to get banana must acquire the lock. Since there is only one lock, only one monkey can get to the tree at a time and he can do the whole of read-check-write operation without any other monkeys interfering in between.
Traditionally, serialization is done in Java via a mechanism called the intrinsic lock, using the synchronized keyword. The Monkey class's run method can be rewritten as:
public void run(){
    try{
        for(int i = 0; i <= 10; i++){
            Thread.sleep(10);
            // only one thread can execute the synchronized block at a time.
            // threads that can't acquire the lock are blocked until the lock is released.
            synchronized(this.tree){
                this.tree.drops();
            }
        }
    } catch(InterruptedException e){
        // ...
    }
}
In this example, this.tree is used as the intrinsic lock. We can use any other object as the lock, but we must make sure that all threads synchronize on the same lock.
Or, alternatively, we can make the drops method of the Tree class a synchronized method (which is synonymous with a synchronized block using this as the lock object):
/* inside Tree class */
public synchronized boolean drops(){
    if(this.bananas > 0){
        this.bananas -= 1;
        return true;
    }
    return false;
}
The latter appears to be cleaner, since it synchronizes right at the exposed public method level. Otherwise it falls to the caller of the drops method to make sure that the call is synchronized. If, sometime later, a Bird class comes along and fails to call drops within a synchronized block on the same lock (i.e. the tree object), the race condition still happens.
However, in some cases the former approach would be preferred. If we modify the problem statement a little, so that instead of just having Monkey and Tree we have Monkey, Tree and Nature, then Tree is the passive data source whereas Monkey and Nature each actively reduce or grow the number of bananas. Now we have a traditional consumer-producer problem. In the end, the consumer and producer will need to collaborate using a synchronization mechanism, which makes synchronizing the drops and grows methods themselves redundant and inefficient. That's probably the reason why Java's Hashtable is obsolete in favor of HashMap (docs).
Mechanisms like synchronized, Object.wait and Object.notify are considered low-level and hard to get right. They can give rise to a host of possible problems like deadlock, livelock and so on. Therefore, Java 1.5 added better high-level support for serializing concurrency. Find more about it here.
Serialization is not ideal because it creates an efficiency problem: acquiring a lock is not free. Executing threads may be left busy waiting for the lock to be released. Moreover, context-switching between threads results in overhead.

The following sections discuss ideas for reducing the overhead of serialization by keeping the synchronized logic minimal.
Use single thread
When it comes to modifying state data, allow only one thread to do it.
// modify the Monkey class as below
static ExecutorService executor = Executors.newSingleThreadExecutor();
...
private void drop(){
    executor.submit(
        new Callable<Void>(){
            public Void call() throws InterruptedException {
                tree.drops();
                return null; // Callable<Void> must still return a value
            }
        }
    );
}
Astute readers may notice that the single-thread executor service must use some underlying synchronized queue to store tasks waiting to be executed. That should involve intrinsic locks as discussed in the section above. So what exactly is different? The difference here is the granularity of the synchronized section. In this example, executing threads are blocked only for the period of inserting tasks into the executor service's queue, not for the whole computation period.
In frameworks such as Akka, shared state is often handled by Actor class instances, which are single-threaded by nature.
// re-write Tree class using an Akka actor
public class Tree extends UntypedActor{
    public static enum Msg {
        DROP, GROW;
    }
    ...
    public void onReceive(Object msg){
        if(msg == Msg.DROP){
            drops();
        } else if(msg == Msg.GROW){
            grows();
        } else{
            // ...
        }
    }
}

// Monkey class gets injected with only the ActorRef of the Tree actor, not the Tree object
public class Monkey implements Runnable{
    public Monkey(ActorRef treeRef){
        this.treeRef = treeRef;
    }

    public void run(){
        try{
            for(int i = 0; i <= 10; i++){
                Thread.sleep(10);
                // tell creates a record in the ActorSystem message queue.
                this.treeRef.tell(Tree.Msg.DROP);
            }
        } catch(InterruptedException e){
            // ...
        }
    }
}
Optimistic lock
Say the monkey takes a really long time to eat a banana. The overhead of serializing the entire read-update-write section would then be unjustifiable, since the update step is time-consuming.

The idea is: let each monkey read and update independently of the others. Just before writing the banana value back to the shared state, check whether the value it read before updating has been modified since. If it has, fail the write and retry.
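The retry idea can be sketched with java.util.concurrent.atomic.AtomicInteger's compareAndSet() (the OptimisticTree class here is my own illustration, not from the post):

```java
import java.util.concurrent.atomic.AtomicInteger;

// Optimistic-locking version of the Tree: each monkey computes the new
// value independently, then compareAndSet() writes it back only if nobody
// else changed the count in the meantime.
public class OptimisticTree {
    private final AtomicInteger bananas;

    public OptimisticTree(int b) {
        this.bananas = new AtomicInteger(b);
    }

    public boolean drops() {
        while (true) {
            int current = bananas.get();          // read
            if (current <= 0) {
                return false;                     // check: no bananas left
            }
            // write succeeds only if `bananas` still equals `current`;
            // otherwise another monkey got in first, so retry.
            if (bananas.compareAndSet(current, current - 1)) {
                return true;
            }
        }
    }

    public int count() {
        return bananas.get();
    }

    public static void main(String[] args) throws InterruptedException {
        OptimisticTree tree = new OptimisticTree(30);
        Runnable monkey = () -> {
            for (int i = 0; i < 10; i++) {
                tree.drops();
            }
        };
        Thread a = new Thread(monkey), b = new Thread(monkey);
        a.start(); b.start();
        a.join(); b.join();
        System.out.println(tree.count()); // 30 - 20 = 10, no lost updates
    }
}
```

No thread ever blocks holding a lock; a losing thread simply loops and retries, which works well when conflicts are rare.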
(Source code in this post can be found here )
Source: https://vuamitom.github.io/2016/01/09/concurrency-in-java-context.html
#include <wx/ribbon/bar.h>
Top-level control in a ribbon user interface.
Serves as a tabbed container for wxRibbonPage - a ribbon user interface typically has a ribbon bar, which contains one or more wxRibbonPages, which in turn each contain one or more wxRibbonPanels, which in turn contain controls.
While a wxRibbonBar has tabs similar to a wxNotebook, it does not follow the same API for adding pages. Containers like wxNotebook can contain any type of window as a page, hence the normal procedure is to create the sub-window and then call wxBookCtrlBase::AddPage(). As wxRibbonBar can only have wxRibbonPage as children (and a wxRibbonPage can only have a wxRibbonBar as parent), when a page is created, it is automatically added to the bar - there is no AddPage equivalent to call.
After all pages have been created, and all controls and panels placed on those pages, Realize() must be called.
This class supports the following styles:
The following event handler macros redirect the events to member function handlers 'func' with prototypes like:
Event macros for events emitted by this class:
Construct a ribbon bar with the given parameters.
Destructor.
Highlight the specified tab.
Highlighted tabs have a colour between that of the active tab and a tab over which the mouse is hovering. This can be used to make a tab (usually temporarily) more noticeable to the user.
Indicates whether the panel area of the ribbon bar is shown.
Delete all pages from the ribbon bar.
Create a ribbon bar in two-step ribbon bar construction.
Should only be called when the default constructor is used, and arguments have the same meaning as in the full constructor.
Delete a single page from this ribbon bar.
The user must call wxRibbonBar::Realize() after one (or more) calls to this function.
Dismiss the expanded panel of the currently active page.
Calls and returns the value from wxRibbonPage::DismissExpandedPanel() for the currently active page, or false if there is no active page.
Get the index of the active page.
In the rare case of no page being active, -1 is returned.
Returns the current display mode of the panel area.
Get a page by index.
NULL will be returned if the given index is out of range.
Get the number of pages in this bar.
Returns the number for a given ribbon bar page.
The number can be used in other ribbon bar calls.
Hides the tab for a given page.
Equivalent to
ShowPage(page, false).
Hides the panel area of the ribbon bar.
This method behaves like ShowPanels() with false argument.
Indicates whether a tab is currently highlighted.
Indicates whether the tab for the given page is shown to the user or not.
By default all page tabs are shown.
Perform initial layout and size calculations of the bar and its children.
This must be called after all of the bar's children have been created (and their children created, etc.) - if it is not, then windows may not be laid out or sized correctly.
Also calls wxRibbonPage::Realize() on each child page.
Reimplemented from wxRibbonControl.
Changes a tab to not be highlighted.
Set the active page by index, without triggering any events.
Set the active page, without triggering any events.
Set the art provider to be used be the ribbon bar.
Also sets the art provider on all current wxRibbonPage children, and any wxRibbonPage children added in the future.
Note that unlike most other ribbon controls, the ribbon bar creates a default art provider when initialised, so an explicit call to SetArtProvider() is not required if the default art provider is sufficient. Also, unlike other ribbon controls, the ribbon bar takes ownership of the given pointer, and will delete it when the art provider is changed or the bar is destroyed. If this behaviour is not desired, then clone the art provider before setting it.
Reimplemented from wxRibbonControl.
Set the margin widths (in pixels) on the left and right sides of the tab bar region of the ribbon bar.
These margins will be painted with the tab background, but tabs and scroll buttons will never be painted in the margins.
The left margin could be used for rendering something equivalent to the "Office Button", though this is not currently implemented. The right margin could be used for rendering a help button, and/or MDI buttons, but again, this is not currently implemented.
Show or hide the tab for a given page.
After showing or hiding a tab, you need to call wxRibbonBar::Realize(). If you hide the tab for the currently active page (GetActivePage) then you should call SetActivePage to activate a different page.
Shows or hide the panel area of the ribbon bar according to the given display mode.
Shows or hides the panel area of the ribbon bar.
If the panel area is hidden, then only the tab of the ribbon bar will be shown. This is useful for giving the user more screen space to work with when he/she doesn't need to see the ribbon's options.
If the panel is currently shown, this method pins it, use the other overload of this method to specify the exact panel display mode to avoid it.
Source: https://docs.wxwidgets.org/3.1.5/classwx_ribbon_bar.html
Re: Is the memory map of a process different when executed in GDB?
- From: Chris McCulloh <list@xxxxxxxxxxxxxxxxx>
- Date: Tue, 23 Sep 2008 16:07:54 -0400

> which way? Where can I read info about this?
It's hard to say exactly what's going on without seeing the example code you're trying to exploit. But let me give you some basic thoughts..
I assume you are putting the address of the string "/bin/sh" somewhere in the environment and then attempting a basic ret-to-libc with a call to system(). So your buffer probably looks something like this:
[ x bytes of junk to overflow buffer ][ address of system() ][ address of exit() ][ address of "/bin/sh" ]
The environment a program receives when being invoked by gdb is a bit different than that which it receives when being invoked from the shell. As an example, try compiling and running this simple program which shows the address of an environment variable:
#include <stdio.h>
#include <stdlib.h>

int main(int argc, char **argv){
    printf("%s :: 0x%08x\n", argv[1], (unsigned int) getenv(argv[1]));
    return 0;
}
Run that both from a shell and from gdb and you will see the addresses are different. Don't hold me to this, but I believe on Linux you will find that your target environment variable address will be 0x20 bytes lower when called from gdb than from a shell. But you should do your own experimenting to see how it works.
Remember also that the length of your program name (argv[0]) affects the memory layout. So if you compile the above code as './getenv' and your vulnerable program's name is './vulnerableproggy', then the address will be different. For every character longer the vulnerable program's name is, the memory address you are examining will be two bytes *lower* (on Linux IA-32, at least). This is because the name of the program, as part of argv, is passed to the program. This behavior also differs across platforms.
Also try manually looking through the process memory space in gdb so you can see what kind of items are in there. If the address returned by the above call to getenv() is 0xbffffd10, for example, try looking at memory as strings starting at 0xbffff900 in gdb using
x/20s 0xbffff900
Hope this helps.
-c
- References:
- Is the memory map of a process different when executed in GDB?
- From: Florencio Cano
Source: http://www.derkeiler.com/Mailing-Lists/securityfocus/vuln-dev/2008-09/msg00002.html
03 November 2011 18:08 [Source: ICIS news]
HOUSTON (ICIS)--Chemical railcar traffic on US railroads rose by 1.6% year on year for the week ended on 29 October, according to data released by a rail industry group on Thursday.
There were 30,047 chemical railcar loadings last week, compared with 29,584 in the same week in 2010, the Association of American Railroads (AAR) said.
In the previous week, ended 22 October, US chemical railcar traffic also rose year on year, while overall carloads rose by 5.2% to 307,900.
For the month of October, overall carloads were up 1.7% year on year to 1,215,627.
“While there is clearly room for improvement, October rail traffic appears to indicate that we are still in a slowly growing economy,” said the AAR’s John Gray.
“Things can change quickly, of course, and the growth rates are certainly not as robust as we would like to see, but we at least appear to be headed in the right direction,” Gray said in commenting on the October data.
In the year to date through 29 October, US chemical railcar loadings were up by 4.0% year on year to 1,291,702 carloads.
Source: http://www.icis.com/Articles/2011/11/03/9505293/us-weekly-chemical-railcar-traffic-rises-1.6-year-on-year.html
The following code causes an assertion failure:
import ctypes
class BadStruct(ctypes.Structure):
_fields_ = []
_anonymous_ = ['foo']
foo = None
This is because MakeAnonFields() (in Modules/_ctypes/stgdict.c) goes over the names specified in _anonymous_ and looks each name up in the class by calling PyObject_GetAttr().

In case an attribute of such a name is found (i.e. PyObject_GetAttr() succeeded), MakeAnonFields() assumes that the attribute was created by MakeFields(), so it asserts that the type of the attribute is PyCField_Type.

However, PyObject_GetAttr() would also succeed when the name refers to a normal attribute specified by the user that isn't listed in _fields_, as in the code above. In such a case, the type of the attribute is not PyCField_Type, and so the assertion fails.
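For contrast, the supported pattern is for each name in _anonymous_ to refer to a structure-typed entry that is listed in _fields_. A minimal sketch (the Inner/Outer names are my own illustration):

```python
import ctypes

class Inner(ctypes.Structure):
    _fields_ = [("x", ctypes.c_int)]

class Outer(ctypes.Structure):
    _anonymous_ = ("inner",)        # must name a field defined in _fields_
    _fields_ = [("inner", Inner)]

o = Outer()
o.x = 5                             # 'x' is reachable through the anonymous member
assert o.inner.x == 5
```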
Source: https://bugs.python.org/msg302331
Re: runtime stack
- From: Joshua Flanagan <josh@xxxxxxxxxx>
- Date: Sat, 23 Apr 2005 00:36:44 -0500
There is no difference. They will both produce identical IL code.
This isn't related to the stack. The stack would be relevant if you were concerned about the number of variables you are using within a method.
If you want to see for yourself that they are exactly the same (and learn how to figure these things out), copy the following text to a file (test.cs):
using System;

public class StackTest {
    public void FirstTry(string varone){
        if(varone != null)
            if(varone != "")
                dosomething();
    }

    public void SecondTry(string varone){
        if(varone != null && varone != "")
            dosomething();
    }

    public void dosomething(){
        Console.WriteLine("hello");
    }
}
Compile from the command line: csc /t:library /out:test.dll test.cs
Load the library in the IL Disassembler: ildasm test.dll
Double click on FirstTry. Double click on SecondTry. Compare the contents of the 2 windows that popped up. The instructions are the same.
Joshua Flanagan .
- Follow-Ups:
- Re: runtime stack
- From: Chance Hopkins
- References:
- runtime stack
- From: Chance Hopkins
Source: http://www.tech-archive.net/Archive/DotNet/microsoft.public.dotnet.languages.csharp/2005-04/msg04962.html
SharePoint 2010 Logical Design: Top 10 Considerations
SharePoint 2010 is a popular tool for creating intranets, customer portals, Internet-facing websites, Business Intelligence dashboards, and even your enterprise social computing strategy. When it comes to your SharePoint 2010 rollout, you’ll want to come up with a solid logical design, which will serve as the “blueprint” for your SharePoint deployment. The logical design consists of answering questions such as:
- What should we have in place before starting a logical design?
- How many web applications do we need?
- How many site collections and sites do we need? (And how do we choose between them?)
- Which service applications should be enabled?
- What server roles do we need?
- What is a managed path? An alternate access mapping?
In order to properly answer those questions and more, you’ll want to consider some key items up front before you ever touch your server. Here’s a collection of “top 10” things to consider when completing your SharePoint 2010 logical design.
Invest in a Solid Information Architecture
Before you start creating a logical design for your SharePoint deployment, you’ll need to have something on which base it. Information architecture (IA) is the process for defining how information will be delivered to (and accessed by) users. The IA describes how information managed in SharePoint will be organized and how users will navigate through the environment. When developing your IA, you’ll probably focus on three key things:
- Overall site map: How your site is structured; home page and sub-site and pages; how your users will likely navigate through the portal.
- Page layout: The position of images, controls, web parts and other content on each page. You can use page mockups using a tool such as Balsamiq Mockups to help you here.
- Metadata: The navigation and tagging structure of the content within the portal. Metadata shows up in many places in SharePoint, including navigation, authoritative and social tagging, the term store (where SharePoint stores the official set of tags), and in search.
Invest in a Governance Plan
No matter how much planning you put into your information architecture and logical design, you’ll likely find that over time your content gets stale and the site structure becomes less relevant. In order to ensure the longevity of your logical design, invest in a governance strategy. The SharePoint 2010 governance planning white paper is a good place to start.
Understand the Logical Containment Hierarchy
The logical containment hierarchy in SharePoint 2010 is simply the way information is managed and stored within SharePoint. You should understand this and educate your designers and power users. The following is an overview of the key elements:
- Farm: A collection of one or more servers acting as a single SharePoint logical group. A farm can contain one or more web applications.
- Web application: A URL (or set of URLs) that serve as the basis for authentication and for managing zero or more site collections.
- Site Collection: Consists of a top-level site and its subsites. It is a logical unit for administration: There are settings that can only be configured at the site collection level. Each web application can host many site collections.
- Site: Consists of a data repository, visual elements, administration, and almost every other core element of the functionality and experience for the user. Visually, a site is represented as one or more web pages. A site contains zero or more lists and libraries.
- List (or library): A data repository that can hold columns of data and/or documents. The objects stored in a list are called items. Visually, a list is represented by views or a web part. It is analogous to a database table or Excel worksheet.
- Item (or document): The fundamental data objects that are stored within a list. An example of an item might be a contact, a task, or a document. Items are analogous to rows within a database.
- Service application: Provides a set of capabilities to one or more web applications. Don’t turn on all service applications; plan for and use only the ones you really need.
Also make sure that your designers and users understand the trade-offs between things like sites and site collections, sites and pages, and collaboration versus publishing sites. The SharePoint 2010 Usage White Paper is a great resource for this.
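The containment hierarchy above maps naturally onto a nested object model. The sketch below is purely illustrative (these are not SharePoint's actual APIs; all class names are my own), but it can help designers reason about what contains what:

```java
import java.util.ArrayList;
import java.util.List;

// Illustrative containment model: Farm > WebApplication > SiteCollection > Site > SPList > item
public class Containment {
    static class Farm { List<WebApplication> webApps = new ArrayList<>(); }
    static class WebApplication { String url; List<SiteCollection> siteCollections = new ArrayList<>(); }
    static class SiteCollection { Site topLevelSite; }          // admin boundary
    static class Site { List<SPList> lists = new ArrayList<>(); }
    static class SPList { List<String> items = new ArrayList<>(); }

    public static void main(String[] args) {
        Farm farm = new Farm();
        WebApplication intranet = new WebApplication();
        intranet.url = "http://intranet.example.com";            // made-up hostname
        farm.webApps.add(intranet);
        SiteCollection sc = new SiteCollection();
        sc.topLevelSite = new Site();
        intranet.siteCollections.add(sc);
        SPList tasks = new SPList();
        tasks.items.add("Plan logical design");
        sc.topLevelSite.lists.add(tasks);
        System.out.println(farm.webApps.size() + " web app, "
            + sc.topLevelSite.lists.get(0).items.size() + " item");
    }
}
```

Walking the model top-down mirrors the administrative scoping: anything you configure at a given level applies to everything nested beneath it.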
Consider the Web Application Trade-Offs
Let’s say you’re creating an intranet for your organization. You could put everything on a single web application such as intranet.<yourcompany>.com. But you might be better off with several web applications. Here are some examples:
- intranet.<yourcompany>.com (for the intranet)
- www.<yourcompany>.com (for the Internet)
- board.<yourcompany>.com (for the board of directors)
- my.<yourcompany>.com (for the My Sites and Profiles)
- team.<yourcompany>.com (for collaboration sites)
- extranet.<yourcompany>.com (for your partners and customers)
So… why would you break your deployment into multiple web applications? There are several reasons:
- Security: You might want to separate content based on security. For example, you might need to ensure that your partners and customers never see your internal content. You also might want to make sure that your employees can’t ever get to the board of directors site. You’ll notice that many of our example sites (intranet, www, board, and extranet) are separated based on this one requirement. You can then enforce permissions through the use of policies: for example, you can create a policy to explicitly deny access to the employees group on the board site. Policies for a web application are enforced no matter what is configured on individual sites that live within the web application.
- Performance: For large deployments, it enables you to optimize performance by putting similar applications together in the same content database. In our example, ‘my’, ‘team’, and ‘intranet’ are on separate web applications. We did that because My Sites create a large number of sites that are pretty small, whereas the intranet site and team sites will typically have a smaller number of large sites.
- Management: Finally, you might want to atomically manage various elements of the web application. When you create separate web applications, you can implement different limits for general application settings like the Recycle Bin. You can also use separate application pools, which means that you can change memory allocation and process recycling separately.
Choose Your Authentication
SharePoint 2010 provides a wide range of authentication options, including a choice to specify claims-mode or classic-mode. In short, you should use claims-based authentication for any new SharePoint 2010 deployment. Why? Because if you choose claims-based, you can use any of the supported authentication types for your web applications. If you don’t, you’ll just be limiting your options later, and it’s no extra work, even if you’re only using Windows authentication. The only consideration might be for integration with other systems that may not support claims.
On the other hand, if you’re upgrading from SharePoint 2007, you can keep classic-mode authentication. However, I recommend considering building a new farm using claims-based authentication and then simply attaching your content databases in order to upgrade. Be sure to read the Plan authentication methods TechNet article for more details.
Design Managed Paths
A managed path lets you define a URL namespace that SharePoint uses to address a site collection. For example, if you have a web application called intranet.<yourcompany>.com, you might want a site collection to exist at intranet.<yourcompany>.com/knowledge. In order to do so, you’ll need to add an explicit inclusion for “knowledge” under that web application. In another example, if you want a general path that all new site collections should go under (let’s say a site called ‘newsite’ under a path called ‘team’ that sits at intranet.<yourcompany>.com/team/newsite), you’d need to create a wildcard inclusion called ‘team’; this indicates that all child URLs of the path are site collections. Out of the box, SharePoint has a wildcard path at ‘sites’.
Define Alternate Access Mappings
Alternate Access Mappings (AAMs) enable your websites to function properly under several URLs at once. For example, you might want your users to get to the same content via a short internal URL as well as a fully qualified URL such as intranet.<yourcompany>.com or an externally-accessible URL such as www.<yourcompany>.com. The AAMs enable your URLs to be recognized by SharePoint and for SharePoint to correct the URLs when presented to users for things such as search results links.
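Conceptually, an AAM is a lookup table that maps each incoming public URL back to the web application it addresses, so SharePoint can rewrite links correctly for each zone. A toy sketch of that idea (the hostnames and the table itself are invented for illustration, not SharePoint's implementation):

```java
import java.util.HashMap;
import java.util.Map;

// Toy alternate-access-mapping lookup: several incoming host URLs
// resolve to one internal web application URL.
public class AamDemo {
    public static void main(String[] args) {
        Map<String, String> aam = new HashMap<>();
        aam.put("http://portal", "http://portal");                // internal zone
        aam.put("http://portal.example.com", "http://portal");    // intranet zone
        aam.put("https://extranet.example.com", "http://portal"); // extranet zone
        // Whatever URL the request arrived on, we can find the canonical target.
        System.out.println(aam.get("https://extranet.example.com"));
    }
}
```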
Choose Logical Server Roles
A SharePoint farm typically consists of three logical server roles. The web server role responds to user requests for web pages. The application server role provides back-end services to the farm (like search crawling, search queries, Excel Services, managed metadata service, etc.). The database server role provides database services via SQL Server and does not need to use the SharePoint bits directly. You could combine all these roles in one physical server or separate the logical roles onto many physical servers.
Consider the Site Collection Trade-Offs
After you determine your information architecture, web application URLs, etc., you’ll need to plan how the content will be divided into site collections; each site collection is stored in a single content database. A portal site is often implemented as a site collection with the top-level web site as the home page of the portal.
In general, I recommend that you put each of the following types of sites into separate site collections right from the start, even if you use a single web application for everything. This will help you manage site collections and content databases better in the long run.
- Intranet portal sites
- Extranet sites
- Team sites related to a portal site or Internet site
- My Sites (by default, each My Site is a site collection)
- Internet sites (staging)
- Internet sites (production)
- Lines of business within a conglomerate
- Document Center sites
- Records Center sites
So for example, if you were to deploy a company intranet, a corporate Internet-facing site, and a records management repository, you’d want to create three site collections from the beginning. This enables you to manage the site collections individually, provide separate content databases, and more easily accommodate growth over time. The downside of multiple site collections is that there are some features that do not work across site collections. This is important because a large deployment of SharePoint will dictate multiple site collections. The following is a sampling of the features that do not work across site collections:
- Content types: How common documents, forms, and other content are normalized in your organization. (Note: In SharePoint 2010, there is the notion of Enterprise content types that can be syndicated across site collections.)
- The Content by Query web part: This web part aggregates information from across sites but does not work across site collections.
- Information management and retention policies: Records management policies are set at the site collection level, forcing organizations to deploy the same policy multiple times for large enterprises. This can be addressed by using content type syndication, noted earlier.
- Quotas: You should absolutely define quotas so that users are used to limited storage from day one. They are also configured at the site collection level, which means that you will need to configure quotas separately at each site collection.
Define Profile Properties and Managed Terms
If you are going to provide People Search via user profiles, you must decide which profile properties in the directory service or business system map to the profile properties in SharePoint Server. Before you configure any properties, you should create a list of all user profile properties that you want to capture. I recommend that you have a business analyst gather a comprehensive list of properties up front. A design session works well for this purpose; you can determine which properties are important for your business collaboration and social computing scenarios. This session should include questions such as:
- What core contact information should we include for users? Where is the definitive location for each of those properties?
- Should we include office location or business unit?
- Do we let users update their own information? If so, which properties do we show?
- Do we need to write back to the source location? If so, how does that impact our security plan?
This planning session is important, because it will tie into your governance policies and your policy for Personally Identifiable Information (PII). Any new property that is added to the user profile property list will require a full synchronization with the corresponding directory source. Ideally, changes to the user profile property list will be infrequent and strictly controlled through change management.
In addition, if you are going to provide tagging services to users, you’ll want to decide on a centrally managed set of terms. In SharePoint, this service is provided by the managed metadata service application, which provides a tool that enables the term store manager to:
- Create a new term set and add, change, or remove terms
- Create a hierarchy for terms and identify which terms in the hierarchy can be used to assign tags to content and which terms are just used for grouping purposes
- Define synonyms for terms
- Import terms from a CSV file
- “Promote” social tags into an official term set
Summary
SharePoint 2010 logical design is about putting the right mix of key architectural elements in place in order to achieve your solution. Make sure you focus on up-front planning and understand the trade-offs before jumping into a design.
For additional information, be sure to check out these two articles on TechNet: SharePoint 2010 Logical Architecture Components and the SharePoint 2010 Design Sample.
https://www.informit.com/articles/article.aspx?p=1657659
Domain Driven Design with Web API revisited Part 12: testing the load testing database context
September 14, 2015 Leave a comment
Introduction
In the previous post we created our data store using EF code-first techniques. We also tested our overall database context by running a number of test queries to view and insert a couple of database objects.
We’re done with the data store but we’re not done with our load test context yet. Recall that the load test bounded context has its own well-defined database context, which is somewhat reduced compared to the full context. We need to test how it works.
Load test database context
In the previous post we inserted a C# console application called DemoDatabaseTester to our demo solution in order to test the overall web suite database context. In order to test the load testing context you can add project references to the following:
- WebSuiteDemo.Loadtesting.Domain
- WebSuiteDemo.Loadtesting.Repository.EF
- WebSuiteDDD.SharedKernel
Add a new class to the tester app called LoadtestingContextService. Let’s first test whether we can populate the Agent domain objects from the database using the LoadTestingContext object. Remember that the Agent domain is not the same as the Agent database object. The database variant has a couple more properties which we don’t care about in the load testing bounded context.
Insert the following method into LoadtestingContextService.cs:
public void TestLoadtestingContext()
{
    LoadTestingContext loadTestingContext = new LoadTestingContext();
    List<Agent> domainAgents = loadTestingContext.Agents.ToList();
    foreach (Agent agent in domainAgents)
    {
        Debug.WriteLine(string.Format("Id: {0}, city: {1}, country: {2}", agent.Id, agent.Location.City, agent.Location.Country));
    }
}
Make sure that Agent refers to the domain agent, i.e. you’ll need the following using statements:
using WebSuiteDemo.Loadtesting.Domain;
using WebSuiteDemo.Loadtesting.Repository.EF;
Let’s call this from Main in Program.cs. You can either delete or comment out the code we produced in the previous post:
LoadtestingContextService domainService = new LoadtestingContextService();
domainService.TestLoadtestingContext();
Run the code and… …you should get an exception saying that the entity has no key defined. Hmm, what could that be? We do have an ID defined in the EntityBase class:
private readonly IdType _id;

public EntityBase(IdType id)
{
    _id = id;
}

public IdType Id
{
    get { return _id; }
}
What’s wrong with that? It turns out that there are multiple problems with mapping the database object ID with the domain ID. We called the ID in the database object Agent “Id”, whereas it’s called _id in the above case. Another problem is that EntityFramework needs a setter so that EF can set its ID. It’s enough to set the access level to private for that purpose. Locate EntityBase.cs and replace the above code to this:
public IdType Id { get; private set; }

public EntityBase(IdType id)
{
    Id = id;
}
Hang on a minute
We mentioned before in this series and the original one on DDD that domain objects should not concern themselves with data store related operations. This idea is called persistence ignorance (PI). However, you’ll realise that we have just modified the domain to accommodate a technology layer requirement. The change is not dramatic and does not really affect the accessibility rules set out by DDD, but still, it’s a data store related modification. EntityFramework will use reflection to populate the properties of the entity. As “Id” doesn’t match “_id” we got the exception about the undefined ID field for an entity.
Note that EntityFramework probably has a built-in mapping mechanism to declare that “Id” should be converted to “_id” but I’m not sure how to do that or if it’s at all possible. However, this is not a course on EF and I don’t want to bloat the series more than it’s necessary so we’ll just go with the simplest solution presented above. The domain model is still fine as it is.
Let’s go on
OK, let’s rerun the code in DemoDatabaseTester, and… …what now? We’ve got another exception of type System.Reflection.TargetInvocationException:
“Exception has been thrown by the target of an invocation.”
The inner exception states the following:
“The class ‘WebSuiteDemo.Loadtesting.Domain.Location’ has no parameterless constructor.”
There we go again, that’s a requirement by EntityFramework. Each entity must have a parameterless constructor otherwise the mapping will fail. This is yet another case where theory, i.e. Persistence Ignorance, is beaten by practice. If we cannot hold onto 100% PI then we should at least go for, say, 95% and minimise the “casualties”. We certainly don’t want to have a public parameterless constructor for every domain. On the other hand EF requires one. It turns out that it’s enough to have a private constructor which is an acceptable solution to our dilemma.
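To see why an ORM needs a parameterless constructor at all, consider how it materializes an entity: it instantiates the type via reflection and then sets its members directly, without going through your public constructors. Here is a minimal sketch of that mechanism in plain Java (the class and field names are illustrative, and this is not EF's actual code):

```java
import java.lang.reflect.Constructor;
import java.lang.reflect.Field;

public class ReflectionDemo {
    // A domain-style class with only a private parameterless constructor,
    // mimicking what we added to Agent and Location.
    static class Location {
        private String city;
        private Location() { }              // invisible to normal callers
        String getCity() { return city; }
    }

    public static void main(String[] args) throws Exception {
        // An ORM-like materializer bypasses the private constructor...
        Constructor<Location> ctor = Location.class.getDeclaredConstructor();
        ctor.setAccessible(true);
        Location loc = ctor.newInstance();
        // ...then populates fields via reflection, roughly as EF sets properties.
        Field f = Location.class.getDeclaredField("city");
        f.setAccessible(true);
        f.set(loc, "Seattle");
        System.out.println(loc.getCity());
    }
}
```

Because the constructor is never called through normal language rules, making it private keeps the domain's public surface clean while still satisfying the materializer.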
Let’s test this on our Agent object. Locate Agent.cs in the WebSuiteDemo.Loadtesting.Domain namespace and add the following constructor:
private Agent() : base(Guid.NewGuid()) { }
As Agent has a property of type Location, it will also need a parameterless constructor. Open Location.cs in the same namespace and add the following constructor:
private Location(){}
Run the same test code in LoadtestingContextService.cs. It should succeed this time and you should see the 3 agents printed in the Debug window. I got the following results:
Id: 751ec485-437d-4bae-9ff1-1923203a87b1, city: Seattle, country: USA
Id: 23b83ac5-c29f-420f-bc9a-48906b243693, city: Frankfurt, country: Germany
Id: 3e953948-8d0a-46df-8ffb-9beef9991a9b, city: Tokyo, country: Japan
I’ve double-checked the entries in the database to see whether the IDs are correct:
Everything looks fine.
We’ll need to insert a private parameterless constructor to all our domain entities and value objects. Insert one in each of the following classes:
- Customer
- Description
- Engineer
- Loadtest
- LoadtestParameters
- LoadtestType
- Project
- Scenario
Morale
We have definitely not succeeded in adhering to 100% PI in our domain classes. We had to give in to the pressure and let a technology layer permeate into our domain classes. The morale of the above exercise is that sometimes practice wins over theory. In theory PI is a desirable characteristic of a domain object. However, the selected data store mechanism forces us to bend the rules somewhat.
Are these additions and modifications to the domain objects acceptable? I think so. They are at least not harmful. No external caller will be able to misuse a private constructor very easily. At least it won’t be obvious to an outside class that a certain domain object has a parameterless constructor. I’m not entirely happy with the changes, but I’ll have to live with it. I tried hard to fit EF into the picture since I know that it is the ORM choice #1 for many .NET developers. If you have another way to solve the PI dilemma with EF then you’re welcome to describe it in the comment section.
Testing the concrete Timetable repository
We can also test the 3 methods that we currently have in TimetableRepository.cs. Here’s how we can retrieve the load tests for a time period:
ITimetableRepository timetableRepo = new TimetableRepository();
IList<Loadtest> loadtests = timetableRepo.GetLoadtestsForTimePeriod(DateTime.UtcNow.AddDays(-10), DateTime.UtcNow.AddDays(10));
We can then construct a valid Timetable object:
Timetable tt = new Timetable(loadtests);
Next we can build a new Loadtest domain object:
Loadtest newLoadtest = new Loadtest(Guid.NewGuid(),"));
…or build one to be updated, i.e. where the first parameter is an existing ID:
Loadtest newLoadtest = new Loadtest(Guid.Parse("8c928a5e-d038-44f3-a8ff-70f64a651155"),"));
I copied all these GUID values from the data store. In your case the Agent, Project, Customer etc. IDs will be different. Go ahead and copy those values from your database.
Next we add the load test to a collection:
List<Loadtest> allChanges = new List<Loadtest>() { newLoadtest };
…then call AddOrUpdateLoadtests of Timetable and print the operation summary:
AddOrUpdateLoadtestsValidationResult res = tt.AddOrUpdateLoadtests(allChanges);
Debug.WriteLine(res.OperationResultSummary);
We can then call the repository AddOrUpdateLoadtests method:
timetableRepo.AddOrUpdateLoadtests(res);
It’s best if you set breakpoints in your code when testing so that you can follow exactly what happens.
We can also test the deletion:
timetableRepo.DeleteById(Guid.Parse("4e880392-5497-4c9e-a3de-38f66348fe8e"));
Again, I copied the GUID of a valid Loadtest entry from my database.
We’ll reuse most of what we presented here later on when we’re ready to build the application service layer of the demo.
In the next post we'll look at view models.
View the list of posts on Architecture and Patterns here.
https://dotnetcodr.com/2015/09/14/domain-driven-design-with-web-api-revisited-part-12-testing-the-load-testing-database-context/
In this section you will learn about Fibonacci numbers in Java. The Fibonacci sequence is the series of numbers 0, 1, 1, 2, 3, 5, 8, 13, 21, ... The first two values in the sequence are 0 and 1; every subsequent value is the sum of the two values preceding it. Here is the code for the Fibonacci program:
import java.util.Scanner;

public class FibonacciExample {
    public static void main(String args[]) {
        int a = 0, b = 1, c;
        System.out.println("");
        System.out.println("Enter the number");
        Scanner sin = new Scanner(System.in);
        int n = sin.nextInt();
        System.out.println("Fibonacci numbers are :");
        System.out.print(a);
        System.out.print(" " + b);
        for (int i = 0; i < n; i++) {
            c = a + b;
            System.out.print(" " + c);
            a = b;
            b = c;
        }
    }
}
Description: The code above reads a number from the user, which determines how many times the loop runs. The first two values, 0 and 1, are printed directly; each remaining value is computed inside the loop as the sum of the two preceding terms and printed with the print() method.
Output: When you compile and execute the program, the output will show the Fibonacci series.
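The same logic can also be packaged as a method that returns the sequence instead of printing it, which makes it easier to test (the class and method names below are my own):

```java
import java.util.ArrayList;
import java.util.List;

public class Fib {
    // Returns the first n Fibonacci numbers: 0, 1, 1, 2, 3, ...
    public static List<Integer> firstN(int n) {
        List<Integer> out = new ArrayList<>();
        int a = 0, b = 1;
        for (int i = 0; i < n; i++) {
            out.add(a);
            int c = a + b;   // next term is the sum of the previous two
            a = b;
            b = c;
        }
        return out;
    }

    public static void main(String[] args) {
        System.out.println(firstN(8)); // [0, 1, 1, 2, 3, 5, 8, 13]
    }
}
```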
http://roseindia.net/java/beginners/java-Fibonacci.shtml
Industrializing Web Page Construction
December 1st, 1997 by Pieter Hintjens in
When I started building my company's web site about a year ago, I looked for a good visual web editor and, having found one, quickly produced some nice web pages. A week later, I had thrown the web editor away and was working on a tool to solve some of the major difficulties I had found. In this article I'll look at the result—a free HTML preprocessor written in Perl—that makes mass production of web pages a feasible and economical task.
htmlpp was one of the first Perl programs I wrote, and I've not regretted the choice of language. Perl allows me to add functions to the program as fast as I can think of them. The consequence is that htmlpp is a very rich tool, making the task of maintaining a web site with thousands of pages easy.
There are at least a dozen free HTML preprocessors available today; I know of three with the name htmlpp. Something is driving people to write these programs, but what? Some 95% of the web pages I produce are on-line documentation, and I dislike building these by hand. Each page needs a standard header, footer and appearance. When I change my mind, it takes a lot of mouse clicks to go through each web page again, and a lot of care to make sure that every page conforms to my preferred style.
Thus, I started htmlpp with the idea: “take a large text file and break it into smaller web pages, adding pretty headers and footers, building the table of contents, cross-references and hyperlinks.” It would also be nice to define symbols like $(version) and place them into the text. How about conditional blocks so that I can generate frame and non-frame web pages from the same document, a way to share definitions between projects, a for loop to build structured text, access to environment variables and Perl macros, some more hot coffee and a raisin bagel?
htmlpp uses the term “document” to refer to the text files it inputs. This is a “hello world” document:
.echo Hello, World.
Here's something more involved:
.define new-year 0101
.if "&date("mm-dd")" eq "$(new-year)"
.   echo Happy New Year!
.else
.   echo Hello, World.
.endif

If you've used C or C++, htmlpp looks very much like the C preprocessor. You get commands like .define, .include and .if that work in a similar fashion to the C preprocessor equivalents. For instance, the .if command works at "compile time", i.e., when you build the HTML pages, not when they are displayed by the browser. Some other htmlpp commands were borrowed from the Unix shells.
Note how I define a symbol, new-year, and then use it in the document as $(new-year). htmlpp provides many variations on this theme; for example, the $(*...) form creates a hyperlink:
.define lj
$(*lj="Linux Journal") is the magazine of the Linux community.
To define a counter which runs from 0 upwards:
.define counter++ 0

A realistic htmlpp script uses the .page command to create HTML pages. Listing 11 shows the template file supplied by htmlpp for your new projects.
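The heart of htmlpp's symbol mechanism, replacing each $(name) reference with the value a .define gave it, can be sketched in a few lines. The code below is my own illustration in Java, not htmlpp's actual Perl implementation:

```java
import java.util.HashMap;
import java.util.Map;
import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class SymbolExpand {
    // Expand $(name) references using a symbol table, as .define would populate it.
    static String expand(String line, Map<String, String> symbols) {
        Pattern p = Pattern.compile("\\$\\(([A-Za-z0-9_+-]+)\\)");
        Matcher m = p.matcher(line);
        StringBuffer out = new StringBuffer();
        while (m.find()) {
            // Unknown symbols expand to the empty string in this sketch.
            String value = symbols.getOrDefault(m.group(1), "");
            m.appendReplacement(out, Matcher.quoteReplacement(value));
        }
        m.appendTail(out);
        return out.toString();
    }

    public static void main(String[] args) {
        Map<String, String> symbols = new HashMap<>();
        symbols.put("version", "1.0");
        System.out.println(expand("htmlpp $(version)", symbols));
    }
}
```

A real preprocessor would layer the $(*...) hyperlink form, counters, and multi-pass resolution on top of this basic substitution loop.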
Each HTML page gets a header and a footer. htmlpp lets you construct very complex headers and footers. This footer, taken from the htmlpp documentation, builds hyperlinks to the first, previous, next and last pages in the document, plus an index that lets the user jump to any page in the document.
.block footer
<HR><P>
| $(*FIRST_PAGE=<<) | $(*PREV_PAGE=<)
| $(*NEXT_PAGE=>) | $(*LAST_PAGE=>>)
.build index
<P><A HREF="/index.htm">
<IMG SRC="im0096c.gif" WIDTH=96 HEIGHT=36></A>
Designed by <.HREF "/html/pieter.htm" "Pieter Hintjens">
© 1997 iMatix
</BODY></HTML>
.endblock
The .build index command builds the index by making a list of all the pages in the document. With an .if command, we can show the current page in relationship to the other pages. This is how I define the index:
.block index_open
<BR>
.block index_entry
.if "$(INDEX_PAGE)" eq "$(PAGE)"
| <.EM $(INDEX_TITLE)>
.else
| $(*INDEX_PAGE="$(INDEX_TITLE)")
.endif
.endblock

This code is beginning to get a bit complex, but the results are well worth the effort. The symbols in capital letters (e.g., $(PAGE), the file name for the current HTML page) are supplied by htmlpp. Some of these symbols, such as $(NEXT_PAGE), require that htmlpp go over the document several times. In fact, htmlpp will run through the document three or more times, until all cross references have been resolved. This multi-pass approach can be a little slow, but it is powerful enough to handle the footer block shown above.
The .build toc command builds a table of contents, a vital part of any large document. htmlpp comes with a small file, contents.def, that does this job. To build the table of contents, you do the following:
.include contents.def
The contents.def file first defines three blocks (toc_open, toc_entry and toc_close) and then does a .build toc:
.block toc_open
<MENU>
.block toc_entry
<LI><A HREF="$(TOC_HREF)">$(TOC_TITLE)</A></LI>
.block toc_close
</MENU>
.end
<P>
.build toc
<HR>

htmlpp uses such predefined blocks for headers, footers, indexes, table of contents and other constructions. You can define your own blocks in order to pull standard chunks of HTML text into your pages. You can also use .include commands, but this practice can lead to the creation of many small files.
The key to unlocking htmlpp's real power is learning a little Perl. When you use the .if command, for instance, you use Perl. So, I can write something like this:
.if $ENV{"RELEASE"} eq "test"
It's also possible to run Perl programs and pipe the output into your HTML pages or to extend htmlpp's syntax with your own functions. Finally, since htmlpp comes with source code under the GNU General Purpose License, you can change the tool in any way you wish.
At the other extreme, you can use htmlpp in “guru mode” to turn a simple text file into structured HTML pages. All you need to do is mark the section headers. htmlpp inserts a table of contents, breaks the document into pages, adds headers and footers, detects numbered and bulleted lists, paragraphs, tables and so on. This is a quick and lazy way to produce useful HTML pages without tagging every paragraph.
To use htmlpp, you have to be happy writing HTML by hand (unless you work in guru mode). In return, you get an economical way to maintain large web sites without losing any control over the quality of your work.
To install and use htmlpp, you need Perl version 4 or 5. Download htmlpp from and unpack the .zip file. The package comes with HTML pages describing how to install and use it. If you have questions, comments or suggestions, don't hesitate to send me e-mail.
http://www.linuxjournal.com/article/2448
Welcome!
Hello, Hypster,:Hypster 29 October 2011, at 20:22
adopt a nOOb
Hey Hypster, I just read your forum. I'd recommend you ask a user if they will adopt you..to make it easier for you to understand the incomprehensible stuff that goes on here. I'd be happy to adopt you, though it wouldn't hurt to check out various users and their style of writing and administering and leaving insane/hilarious messages on each other's talk pages, to find the right person for you, or...I guess you could forget about being adopted and just start writing stuff. Have fun here. --ShabiDOO 02:03, October 30, 2011 (UTC)
Hello Hypster
I've been circulating back here more than in the past half year, so good to "meet" you. Saw your "How interesting" edit on the bomber's forum. The main reason I thought he was a fake was because he didn't seem like a guy who could think through and organize anything, let alone carry 25 molotov cocktails under his arm at 6 o'clock in the morning. All work and no play, is what I'd say to him. Hypster! Aleister 15:51 17-1-'12
- Thanks! Some of your articles are hilarious. Whoever wrote that threat sounds like an angsty twelve year-old who couldn't even organise his Warhammer 40K figures. - ENTER CITADEL T)alk C)untributions B)an 15:54, January 17, 2012 (UTC)
- Thank you. The good parts of my pages are all written by a neighbor boy who I pay to do odd chores like that. I actually checked the news to see if anything was asplode in New York, but nothing so far. Maybe he got up late, or left his molly cocktails out in the rain and ruined them. More likely is that he wrote the message, realized he could get in real deep trouble because he wrote it so it looked real, and he then erased it two minutes later. MadMax and I were juggling things around the site and we both noticed his edits, and max alerted wikia, but hopefully all for naught. Do you "surf" recent changes? Keep floating like a bee, sting like a butterfly, or something like that. Aleister 16:01 17-1-'12
- I'm more of a maintenance man than a writer, because I'm the least funny person I know. But I think removing the unfunny stuff is just as important, no? - ENTER CITADEL T)alk C)untributions B)an 16:09, January 17, 2012 (UTC)
- Jeez, you could spend your life removing all the unfunny from this site. That's very important. Have you spoken with Puppy, he's organizing the Vital pages, and maybe you and he could hit it off and then marry eventually, which would be good and would cause a party to occur. But fixing the unfunny is probably even more important than the funny. Aleister 16:14 17-1-'12
Greetings Hypster
'Cuz if not, I apologize. ~ BB ~ (T)
~ Thu, Feb 16 '12 1:21 (UTC)
- Of course I was, you cock. - ENTER CITADEL T)alk C)untributions B)an 16:48, October 7, 2012 (UTC)
- Wow, a reply to a message after seven-and-a-half months? Gotta be a record. ~, Oct 19 '12 8:57 (UTC)
Hello
You seem to be doing great things around here, writing and editing and fixing. Nice. Happy St. Patty's Day, a bit of the green for ya. Aleister 1:17 18-3-'12
Don't try to deny it...
You voted for "Above Top Secret" on what people there call the VFH page. We have a little surprise for you, unless of course you suddenly get some wise and change your vote to "No, of course not". That would be satisfactory. Thank you very much! Samuel Tompkins Huddleberryski, Assistant Director in Charge of Uncyclopedia and Canada, Things We Don't Want People To Know Patrol (TWDWPTKP), Office of La-La Land Security 17:49 15-4-'12
Holocaust Tycoon
Sorry! don't know why I reverted you before; perhaps I read the change backward and thought you were adding the paragraph. I don't understand it either. Thanks for using the Change Summary to explain. Spıke Ѧ 17:54 15-Jun-12
- Thank you. It was one of those jokes that was small but bad/strange enough to completely throw-off the rest of the article. - ENTER CITADEL T)alk C)untributions B)an 20:57, June 15, 2012 (UTC)
Comments and unconstructive criticism
Hypster, I find that the comments you leave on people's articles as you edit sometimes push the boundaries of etiquette on Uncyclopedia. It's not always easy to draw the line between when you should tell vandals to fuck off and how to respond to established contributors who simply have a different sense of humour. There is a line between giving your opinion and telling people that they suck. I'll give you a few examples of comments that I think push or "really" push these boundaries:
- Repeating things is irritating. Repeating things is irritating. Repeating things is irritating. (while that may be true...tell it to the user in a nice way on their talk page...that way they may actually listen to you instead of getting defensive about it after reading a passive-aggressive comment)
- About as funny as being diagnosed with cataracts. (pretty condescending)
- Come back when you have a knowledge of how to format a Wiki article. (how about helping the user instead of berating them?)
- So opinionated. (all unnews is opinionated)
- That is about as funny as your father's genital warts. (that's just nasty)
- How come the most recent editings of this article are just so functionally retarded? (how will a comment like that help anyone?)
- Let's play a game of 'Spot the joke'! Keep looking.... Keep.... looking.... (that's dickish)
While I understand the intent is to improve the wiki and that your goal (as is for all of us) is to make articles funnier, better, groovier...I think the response of the user you are communicating with and the final result of some of these comments is the opposite. You won't see a single other established user EVER talking to another established user like this (unless it's a clear joke).
Here is a rule of thumb that might help:
1. If what you read is vandalism/stupid new user edit, then do as you like.
2. If it is an actual attempt to write an article...don't attack the user/writing, offer constructive criticism.
Number two is the most important rule on the wiki. While it may seem that users are telling each other to fuck off all the time, that is reserved for having fun on the talk pages and forums, talking to vandals or making what are clearly jokes with each other. Your contributions so far, as per text, have been great and I hope you keep up with your passion for good writing and improving the wiki. Consider funneling all of your critical energy into pee-reviews, where users are hoping to have other users critique their articles. If you think I am a total retard who is having a bad day at PMS, come to my talk page and we can wrestle in a tub of jello. --ShabiDOO 19:38, July 3, 2012 (UTC)
Archivin' VFD
First of all, hi.
Second of all, I saw that you removed my VFD nomination for bobfromaccounting.com without there being an actual vote. Thanks for reinstating the vote. ~! 16:23, August 5, 2012 (UTC)
QVFD and Obama article
Erm, first of all, this is a perfectly legitimate satirical article, doesn't appear to be "random", and is written by a well-respected Uncyc writer who happens to be one of the more articulate on the site in my humble opinion. Judging by your comments on its talk page, I can only assume you QVFD'd it because you disagree with its message or apparent message. So let me remind you that Uncyclopedia is not made to be aligned a certain way politically, religiously, etc., etc. and that calling people mouth-breathing hillbillies is also not the best of constructive criticism. I can see from your talk page that you've had problems with this sort of thing before, so please find a way to work on that or there may be unpleasant repercussions going forward.
Secondly, you don't need to sign your QVFD entries. In fact, you shouldn't. At all. Thanks. -RAHB 23:21, September 4, 2012 (UTC)
Y U HATE CTAM?
SRSLY, Y? (No, really, I want a serious answer as to why you hate Count to a Million so much that you're not content with just ignoring it and instead try to degrade it and end it any chance you get.) ---06T23:52
- Because it is stupid and redundant. The point of a forum is to discuss the website, not to be moronic. I hate how uncyclopedia has this vibe to it in which it completely mis-uses the forum mainspace. And I wouldn't mind it if it wasn't for the fact it's not even funny, either.
- So yes. I do have a reason for disliking it. -- ENTER CITADEL T)alk C)untributions B)an 14:21, September 7, 2012 (UTC)
- Uncyclopedia has this vibe to it in which we completely pretty much do whatever. If there weren't pointless forums we wouldn't need the Forum namespace. -RAHB 04:07, September 8, 2012 (UTC)
- Well, if you hate it so much, then why don't you just ignore it? We're content with doing our thing (that is, counting to a million) and you're just being a dolt by flaming all of us on-10-07T21:25
Hi!
That is all. Make pointless forums, get pointless messages. -- Simsilikesims(♀UN) Talk here. 22:42, October 7, 2012 (UTC)
- Whereas: add pointless text to an article that was carefully making a point, get reverted. This because stupid ≠ funny. Regards. Spıke Ѧ 18:54 21-Oct-12
Hello
Vociferous eggplant, salad wending orderly. Orderly. Orderly. Orderly. Orderly. Orderly. Orderly. Orderly. Overbounds.
Was truly has to had been was when we were. Salute. Salud. Alonze! -RAHB 22:49, October 7, 2012 (UTC)
Is this a joke or something?
18:08 My Little Pony: Friendship Is Magic (diff | hist) . . (+118) . . Hypster (talk | contribs) (Due to a law passed in Germany today, this website will be blocked in 24 hours unless all naughty words are removed.) [rollback]
Just censoring this article wasn't enough if this was true. Also, do you know this counts as vandalism if it wasn't true? I hope you are joking. 04:58, October 8, 2012 (UTC)
Emily the Strong
.
» Zheliel Talk Contribs Cow » 11:35 October 8
Fuck you
You fuck. --Hotadmin4u69 [TALK] 18:50 Oct 18 2012
Yeah! Let's get angry! -Da man360Talk HAPPY 23:31, 24 October 2012!
Just to let you know
Sannse said that this image was fine and didn't need censoring. Just so you know. I'd revert it, but I won't in fear of being blocked. ~[ths] UotM 23:01, 10/25/2012
Hi
Hypster. Would you please start being pleasant on this website (in my humble opinion of course). Unconstructive criticism, a bitchy attitude and all-around disdain for all the other users is an assholy way to behave on a wiki. The users here have shown incredible patience with you so far. They are nice people and want to have a good time by writing funny articles. So yeah ... stop being such a colossal thunder cu*t. Please. --ShabiDOO 01:41, November 2, 2012 (UTC)
From Morality aka the new member
Hey thanks for the really warm fury welcome.
No sarcasm intended, I took it to heart.
- Good to know there are no hard feelings. Only joking. -- HIPSTER T)alk C)untributions B)an 23:49, November 3, 2012 (UTC)
Troll! In the dungeon!
Thought you ought to know... that trolling in the forums is ...sorta dumb. Just let it go. Flaming (or pretending to flame) other users (especially when it rides the original topic off the rails and crashes it into the side of a mountain) is, like, "super-lamz0rz", ya dig? Excellent. ~, Nov 4 '12 14:32 (UTC)
Ban
Abusing another user is a no-no in my book. You got a day ban from Frosty before for dickery, so I am going to make it a week:35, November 5, 2012 (UTC)
- if you want to contact me directly, use this: romartus@hotmail:37, November 5, 2012 (UTC)
- Ban expired. Don't do that again eh? Now there's a good chap. --) 00:05, November 14, 2012 (UTC)
- All-right then. I remember when you used to be cool! -- HIPSTER T)alk C)untributions B)an 16:35, November 16, 2012 (UTC)
-:06, November 17, 2012 (UTC)
Yo, Mr. Hippo
Just so you know, we have a policy around here: Don't Be A Dick. So yeah, you better follow it or you'll be gettin' bitchslapped by da banhammer fairly-15T23:00
- Yes, in case you were wondering, I already was (And also, I've been permabanned before). Bit of a pedantic time to say that, isn't it? -- HIPSTER T)alk C)untributions B)an 16:35, November 16, 2012 (UTC)
- 0.o permbanned?! ~ 17:17 (UTC)
- Umm... Yes. I was originally a different user. But that was years ago, and it was purely because of formatting and spacing issues (I never used the preview button). It was not anything serious (certainly not vandalism or similar) and you have to remember I was 14 back then, and now I'm 17. -- HIPSTER T)alk C)untributions B)an 18:57, November 16, 2012 (UTC)
- Ok then. Carry 20:12 (UTC)
The name of the account was: User:Another_n00b. From what I believe, the account was created in late 2009 or early 2010, I'm not too sure. I was still in high-school, and now looking at it I realise that I was both a massive weeaboo and a complete dickhead (Seriously, I'm very embarrassed looking at it now). I remember I was accused of "abusing I.P addresses" or something, but that turned out to be a misunderstanding of some kind, I really don't remember. The only reason I was banned was because I mis-used indents and I really didn't understand wiki-formatting. When my account got permabanned because of (what I believed to be a) trivial issue, I was gutted. I eventually forgot about it and moved on. But about two years later I was looking for something to do, and I remembered this website. I changed my IP address and then signed-up for a new account.
"The New Leaf" I was thinking. And it largely was. No-one remembered the name "Another_n00b", and that was how I wanted it to be. No-one recognised me. I was free to do as I fit. But now I feel like I have to say this because I see that within just two years, this wiki's userbase has changed so much that I have to voice how it has changed. If that means me getting banned again for "ban evasion" or "sock-puppeting", then so be it. If I get banned again because of this post, I will just forget that any of this happened and forget about this website once more. -- HIPSTER T)alk C)untributions B)an 20:50, November 16, 2012 (UTC)
hello
Due to your above confession thing it has been decided you will not be banned forever and that you will be allowed to stay, however I suggest you read this and understand if you relapse into your old ways you will be banned accordingly. Welcome back though, just be good kth, November 18, 2012 (UTC)
So...
You're a young guy who fucked up in the past on Uncyclopedia and who's chosen to come back and try and right his past minor wrongs? Sweet! We need more guys like you around here. --Revolutionary, Anti-Bensonist, and TYATU Boss
Uncyclopedian Meganew (Chat) (Care for a peek at my work?) (SUCK IT, FROGGY!) 21:12, November 18, 2012 (UTC)
- Thanks for the compliment. And especially considering how much of a dick I was to you in the past, judging from some of my older account's forum edits, I'm sorry if I annoyed you back then. -- HIPSTER T)alk C)untributions B)an 21:55, November 18, 2012 (UTC)
- Frankly, I hardly remember them. It's just good to see you make it to a stage that few formerly annoying members (read: The Crack Vandal, Not NXWave, Not Gomphog, Not Gouncyclopedia, etc.) have actually made it to: Actually worthwhile members! --Revolutionary, Anti-Bensonist, and TYATU Boss
UncyclopedianMeganew (Chat) (Care for a peek at my work?) (SUCK IT, FROGGY!) 22:17, November 18, 2012 (UTC)
- It's great to see that I've received positive feedback. And I'm glad that you're not mad about what I had said, Meganew. -- HIPSTER T)alk C)untributions B)an 19:36, November 19, 2012 (UTC)
- Good to see you back again, you are a credit to your race. Now if only more people would come back, and write as good as you after you wipe the cum off of your hands. Great to "see" you, and I hope to be back more soon myself. See ya! Aleister 20:00 11-19-'12
- So far as I can tell, only thing you did was be an asshole on that Grue Army page. Not enough to put me into ragemode or anything like that. --Revolutionary, Anti-Bensonist, and TYATU Boss
UncyclopedianMeganew (Chat) (Care for a peek at my work?) (SUCK IT, FROGGY!) 16:55, November 20, 2012 (UTC)
Your signature
Please read UN:SIG or get help with your signature. All the code that was dumped onto the page when you signed at UN:VFD should be inside a file. This would mean that, by changing the file, you could change past, present, and future signatures. It requires precise typing inside My preferences. Thanks. Spıke Ѧ 11:42 11-Apr-13
Duke Nukem Forever
I saw Anon make his change (as the RecentChanges report flags it when an Anon makes a large deletion) and initially agreed with him: The section he deleted superficially looks like it's about Joseph Smith and only touches base with Duke Nukem once. Perhaps if you made it relate more closely to the game, it would make it more obvious that the section is not some sort of serious comment about Mormonism. Spıke Ѧ 19:36 12-Apr-13
TM56
(What went here?) Spıke Ѧ 11:27 6-May-13
Please use "Hipster" userspace
As you use the Hipster account versus the Hypster sockpuppet, please use the Hipster userspace to make your ownership clearer. Spıke Ѧ 11:14 6-May-13
- Okay. I moved User:Hypster/Article about stuff to User:Hipster/Article about stuff and I put the redirect onto QVFD - Good? -- HIPSTER T)alk C)untributions B)an 11:17, May 6, 2013 (UTC)
Perfect--except for the edit of QVFD. Use {{Redirect}} when asking for a redirect to be deleted. This helps us avoid deleting the article itself by mistake. Spıke Ѧ 11:27 6-May-13
- Thank you for the advice. I'll do that next time! -- HIPSTER T)alk C)untributions B)an 12:43, May 6, 2013 (UTC)
Just one more thing: Would you please not have the contents of your signature file spit out onto the page everywhere you sign something? See my recent instructions to Denza252 on a change to My preferences. To see the problem, edit this section here and look at the code for your signature, compared to the code for mine. Changing your signature to reference the signature file when the page is looked at--rather than output its contents at the time of the post--provides that you can change your signature and have it take effect on past and present posts. Spıke Ѧ 12:49 6-May-13
Isle of Man
No, the sentence you deleted is evidently a local saying that fits fine at the end of the critical Intro. Spıke Ѧ 19:16 24-Jun-13
User:Hipster/
Hello! Sorry but what is happening with this article? Should Anon's edits be reverted or not? What is all this about? Anton (talk) 09:59, July 16, 2013 (UTC)