#include <NCollection_List.hxx>
Purpose: Simple list to link items together keeping the first and the last one. Inherits BaseList, adding the data item to each node.
Shorthand for a constant iterator type.
Shorthand for a regular iterator type.
STL-compliant typedef for value type.
Constructor.
Copy constructor.
Destructor - clears the List.
Append one item at the end.
Append one item at the end and return an iterator pointing at the appended item.
Append another list at the end.
Replace this list by the items of another list (theOther parameter). This method does not change the internal allocator.
Returns an iterator pointing to the first element in the list.
Returns a const iterator pointing to the first element in the list.
Returns a const iterator referring to the past-the-end element in the list.
Clear this list.
Return true if object is stored in the list.
Returns an iterator referring to the past-the-end element in the list.
First item.
First item (non-const)
InsertAfter.
InsertAfter.
InsertBefore.
InsertBefore.
Last item.
Last item (non-const)
Replacement operator.
Prepend one item at the beginning.
Prepend another list at the beginning.
Remove item pointed by iterator theIter; theIter is then set to the next item.
Remove the first occurrence of the object.
Remove the first item.
Reverse the list.
Size - Number of items.
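The class described above is essentially a singly linked list that tracks its first and last nodes. As a rough illustration (in Python rather than the C++ of the actual OCCT class, and not a faithful port of its API), the core of such a structure might look like:

```python
class Node:
    """A single list cell holding a value and a link to the next cell."""
    def __init__(self, value, next=None):
        self.value = value
        self.next = next

class SimpleList:
    """Singly linked list keeping pointers to the first and last nodes."""
    def __init__(self):
        self.first = None
        self.last = None
        self.size = 0

    def append(self, value):
        # Append at the end in O(1) thanks to the 'last' pointer.
        node = Node(value)
        if self.last is None:
            self.first = self.last = node
        else:
            self.last.next = node
            self.last = node
        self.size += 1

    def prepend(self, value):
        # Prepend at the beginning in O(1) via the 'first' pointer.
        self.first = Node(value, self.first)
        if self.last is None:
            self.last = self.first
        self.size += 1

    def __iter__(self):
        node = self.first
        while node is not None:
            yield node.value
            node = node.next

lst = SimpleList()
lst.append(2)
lst.append(3)
lst.prepend(1)
print(list(lst))  # [1, 2, 3]
```

Keeping both `first` and `last` pointers is what makes both Append and Prepend constant-time, which is the design point the class description emphasizes.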
Source: https://www.opencascade.com/doc/occt-7.2.0/refman/html/class_n_collection___list.html
Re: R2 DFS Replication failing
- From: Rory Niland <RoryNiland@xxxxxxxxxxxxxxxxxxxxxxxxx>
- Date: Tue, 14 Mar 2006 07:21:27 -0800
Disabled the firewall and everything started magically working..
Where does that leave me..
BTW: Found out the RPC patch is this one :
it came out March 6..
"Jabez Gan [MVP]" wrote:
No don't open that range of ports..
Leave that closed.
Also refer to:
(Network Ports Used by DFS)
Try disabling the firewall and see if you are still getting this error, so you will know if it's a firewall issue or not.
--
Jabez Gan [MVP]
Microsoft MVP: Windows Server
MSBLOG:
"Rory Niland" <RoryNiland@xxxxxxxxxxxxxxxxxxxxxxxxx> wrote in message
news:547E84C3-6237-414F-BA02-BD91C33C76B8@xxxxxxxxxxxxxxxx
Taken from Microsoft KB article 832017:
Distributed File System
The Distributed File System (DFS) integrates disparate file shares that are located across a local area network (LAN) or wide area network (WAN) into a single logical namespace.
System service name: Dfs
Application protocol                Protocol  Ports
NetBIOS Datagram Service            UDP       138
NetBIOS Session Service             TCP       139
LDAP Server                         TCP       389
LDAP Server                         UDP       389
SMB                                 TCP       445
RPC                                 TCP       135
Randomly allocated high TCP ports   TCP       random port number between 1024 and 65534
Seems to suggest I open up ports 1024 - 65534 !?
Windows firewall doesn't support port ranges .. do I have to disable the firewall ?
"Rory Niland" wrote:
ok I've had a look at our domain policy for windows firewall for fileservers, and File and print sharing is enabled with the following ports open :
UDP 137
UDP 138
TCP 139
TCP 445
Remote desktop is enabled with
TCP 3389
And I've just enabled RPC
TCP 135
Now I no longer get the error "The RPC hotfix is not installed on this server." in the diagnostic report. However I now get "Cannot retrieve version vectors from this member."
I must need to open other ports .. anyone know which ones?
"Rory Niland" wrote:
Everything is up to date .. according to windows update.
I think it may be the fact that I've enabled windows firewall on my firewall .. what ports do I need to open to allow DFS replication ?
"Jabez Gan [MVP]" wrote:
Please go to and see if there's any available hotfix.
Also provide some details of your system configuration.
--
Jabez Gan [MVP]
Microsoft MVP: Windows Server
MSBLOG:
"Rory Niland" <RoryNiland@xxxxxxxxxxxxxxxxxxxxxxxxx> wrote in message
news:A8F9F40E-743C-49DA-916A-74F5A9569BC2@xxxxxxxxxxxxxxxx
When I run a report it says "The RPC hotfix is not installed on this server."
Can anyone help with the particular hotfix this message is referring to ?
- Follow-Ups:
- Re: R2 DFS Replication failing
- From: Jabez Gan [MVP]
- References:
- Re: R2 DFS Replication failing
- From: Jabez Gan [MVP]
- Re: R2 DFS Replication failing
- From: Rory Niland
- Re: R2 DFS Replication failing
- From: Jabez Gan [MVP]
- Prev by Date: Re: Remote Desktop Session control 2k3 Ent srvr
- Next by Date: Re: cant contact NTP
- Previous by thread: Re: R2 DFS Replication failing
- Next by thread: Re: R2 DFS Replication failing
- Index(es):
Source: http://www.tech-archive.net/Archive/Windows/microsoft.public.windows.server.general/2006-03/msg00721.html
In this series of tutorials, we will learn about sorting algorithms. This tutorial will focus on the selection sort sorting algorithm. We will see an example in C programming.
Sorting Algorithms Introduction
Sorting is a concept deeply embedded in a lot of things we do. It is quite often that we like to arrange things or data in a certain order. Sometimes this is done to improve the credibility of that data, or to be able to extract some information quickly out of that data.
For example, consider playing a card game. Even though the number of cards in our hand is small, we still prefer to keep our hand of cards sorted by rank or suit.
Let’s say this is our hand of cards:
We would like to sort the cards in increasing order of rank. Then, the arrangement will be something like this:
Sorting Examples
Sorting is a really helpful feature even in our daily lives. There are so many places when we want to keep our data sorted. For example language dictionary or a sports ranking chart. In the former we want to keep the words sorted so that searching a word in the dictionary is easy. The latter shows the teams with good performance at the top of the list.
To define sorting in a formal manner: Sorting is arranging the elements in a list or collection in increasing/decreasing order of some property.
The list should be homogeneous, i.e. all the elements in the list should be of the same type. To study sorting algorithms, most of the time we use a list of integers, and typically we sort it in increasing order of values. Let's say we have the following list of integers: 2, 4, 8, 1, 5. Sorting it in increasing order of value means rearranging the elements like this: 1, 2, 4, 5, 8. Sorting the same list in decreasing order of value gives 8, 5, 4, 2, 1.
As we said in the definition, we can sort on any property. If we want to sort this list by increasing number of factors, the numbers with fewer factors will be towards the beginning of the list: 1 (one factor), 2 (two factors), 5 (two factors), 4 (three factors), 8 (four factors).
A sorted list is a permutation of the original list. When we sort a list, we just rearrange the elements.
We can have a list of any data type. Let us sort a list of strings in lexicographical order (the order in which they would be found in a dictionary). For example: “hot”, “anxious”, “trot”, “aperture”, “cattle” will be rearranged as “anxious”, “aperture”, “cattle”, “hot”, “trot”.
Linear Search
If a list is stored in the computer's memory unsorted, then to search for something in it we have to run a linear search. In a linear search we start at the first element and keep scanning the list until we reach the element we are looking for. In the worst case, when the element is not in the list at all, we compare it with every element. So if there are ‘n’ elements in the list, we make ‘n’ comparisons in the worst case. If we take ‘n’ equal to 2^64 and imagine that one comparison takes one millisecond, the whole process takes 2^64 milliseconds, which amounts to hundreds of millions of years.
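The linear scan described above can be sketched in Python (the article's own examples are in C, but the idea is identical):

```python
def linear_search(items, target):
    """Scan from the first element; return the index of target, or -1."""
    for i, item in enumerate(items):
        if item == target:
            return i
    return -1  # worst case: n comparisons were made and target is absent

print(linear_search([2, 4, 8, 1, 5], 1))  # 3
print(linear_search([2, 4, 8, 1, 5], 7))  # -1
```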
Binary Search
However, if our list is sorted we can use binary search. With binary search, if the size of the list is ‘n’, it takes only about log₂(n) comparisons to perform a search. So if n = 2^64, the search takes only 64 milliseconds in the worst case.
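Binary search over a sorted list can be sketched like this (again in Python for brevity):

```python
def binary_search(sorted_items, target):
    """Repeatedly halve the search range; ~log2(n) comparisons."""
    lo, hi = 0, len(sorted_items) - 1
    while lo <= hi:
        mid = (lo + hi) // 2
        if sorted_items[mid] == target:
            return mid
        elif sorted_items[mid] < target:
            lo = mid + 1   # target can only be in the upper half
        else:
            hi = mid - 1   # target can only be in the lower half
    return -1

print(binary_search([1, 2, 4, 5, 8], 5))  # 3
print(binary_search([1, 2, 4, 5, 8], 7))  # -1
```

Note that this only works because the list is sorted, which is exactly why sorting is worth the up-front effort.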
Sorting Algorithms
Sorting as a problem is well studied and a great deal of research has gone into devising efficient algorithms for sorting. We will analyse and compare different sorting algorithms in the upcoming tutorials. Some of the sorting algorithms include:
- Bubble Sort
- Selection Sort
- Insertion Sort
- Merge Sort
- Quick Sort
- Heap Sort
- Counting Sort
- Radix Sort
Classification of Sorting Algorithms
We often classify sorting algorithms based on some parameters. These are briefly mentioned below:
- The first parameter that we want to classify upon is time complexity. This is the measure of the rate of growth of the time taken by an algorithm with respect to input size. Some algorithms will be relatively faster than others.
- The second parameter that we use for classification is space complexity or memory usage. Some sorting algorithms use a constant amount of extra memory to rearrange the elements in the list while others like merge sort use extra memory to temporarily store the data. Thus, the memory usage grows with input size.
- The third parameter is stability. Suppose we have a set of cards like the ones shown below:
We want to sort these cards in increasing order of rank. We have one 5 of Hearts, one 2 of Clubs, one 8 of Diamonds, one 5 of Spades and one 7 of Spades. One permutation will be this:
The cards are sorted in increasing order of rank: 2, 5, 5, 7 and 8, but if you look at the original list, the 5 of Hearts came before the 5 of Spades. In our sorted arrangement the 5 of Spades has been placed before the 5 of Hearts.
A stable sorting algorithm preserves the relative order of elements whose keys (the property upon which we are sorting) are equal: if an element came before another in the original list, it also comes before it in the sorted list.
If we use this kind of sorting algorithm we will get this particular permutation, where the 5 of Hearts is placed before the 5 of Spades:
- The next parameter of classification is whether a sort is internal or external. When all the records that need to be sorted are in the main memory or RAM then such a sort is known as internal sort. If the records are on the auxiliary storage like disk then such a sort is known as external sort.
- The last parameter of classification is whether the algorithm is recursive or non recursive. Some sorting algorithms like quick sort and merge sort are recursive while others like insertion sort and selection sort are non recursive in nature.
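The stability property described above can be demonstrated with Python's built-in sort, which is documented to be stable (a quick sketch using the card example; the tuples are an illustrative encoding, not from the article):

```python
# Cards as (rank, suit) pairs, in the original hand order.
cards = [(5, "Hearts"), (2, "Clubs"), (8, "Diamonds"),
         (5, "Spades"), (7, "Spades")]

# Sort by rank only; suits of equal-rank cards keep their original order.
by_rank = sorted(cards, key=lambda card: card[0])
print(by_rank)
# The 5 of Hearts still precedes the 5 of Spades, because the sort is stable.
```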
Selection Sort Algorithm
Selection sort is one of the simplest sorting algorithms. In every iteration, we find the minimum element from the unsorted part of the list and place it at the beginning, starting from the first index. It is an in-place sorting algorithm with a time complexity of O(n^2), which makes it inefficient for sorting larger lists.
Playing Card Example
Let’s say we have a set of playing cards and we want to arrange them in increasing order of rank.
One simple thing we can do is initially keep all the cards in our left hand and then first we can select the minimum value of the card out of all the cards and move it to the right hand.
Now once again whatever card is left in the left hand, we will select the minimum from it and move it to the right hand, next to the previous cards in the right.
We can go repeating this process.
At any stage during the process, the left hand will be an unsorted set of cards and the right hand will be the sorted set of cards. In the end, the right hand will be a sorted arrangement of cards. The cards will be sorted in increasing order of rank.
Sorting list of integers with Selection Sort
Let's see how to sort a list of integers given to us in the form of an array.
Suppose we have an array of 5 integers named X with unordered numbers. Let’s see it in memory.
To sort this list, we can do something similar to what we did with the playing cards example. We can create another array of the same size as ‘X.’ Thus, we have created another array ‘Y’ of size 5.
We can start creating ‘Y’ as a sorted list by selecting the minimum from ‘X’; we make one pass over ‘X’ for each position of ‘Y’. In the first pass, ‘1’ is the minimum, so 1 will go at index 0 in Y. There should be a way to mark that 1 has already been selected so it is not considered again. One way to do this is to replace the selected element with some very large integer that is guaranteed to be the maximum in the array at each step. We can choose this MAX to be something like the largest possible value in a 32 bit integer.
Now we will scan ‘X’ again for the second smallest element, which will go to index 1 in ‘Y’. That element is ‘2’. ‘2’ again will be replaced by MAX and moved to array ‘Y.’
Now the minimum in ‘X’ is ‘3’ and we will keep on doing this until all the positions in ‘Y’ are filled.
In the end we can copy the contents of ‘Y’ back to ‘X’, so ‘X’ itself becomes a sorted arrangement of its initial elements.
This logic works fine, but there is a disadvantage: the extra memory required for the auxiliary array ‘Y’. The larger ‘X’ is, the more extra memory is required for this temporary array. So this is not an in-place sorting algorithm. An in-place sorting algorithm takes a constant amount of extra memory for sorting a collection; here, the amount of extra memory is proportional to the size of the input array ‘X.’
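The auxiliary-array variant described above can be sketched in Python (the article's examples are in C/C++; `sys.maxsize` stands in here for the "largest possible value" sentinel):

```python
import sys

def selection_sort_aux(X):
    """Non-in-place selection sort: repeatedly pick the minimum from X
    into Y, replacing each picked element with a sentinel (MAX) so it
    is never considered again."""
    MAX = sys.maxsize          # sentinel larger than any real element
    X = list(X)                # work on a copy; caller's list survives
    Y = []
    for _ in range(len(X)):
        i_min = X.index(min(X))  # one full pass over X per output slot
        Y.append(X[i_min])
        X[i_min] = MAX           # mark as already selected
    return Y

print(selection_sort_aux([4, 1, 5, 2, 3]))  # [1, 2, 3, 4, 5]
```

The extra list `Y` is exactly the memory overhead the paragraph above criticizes, which motivates the in-place version that follows.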
In-place Sorting Algorithm
We can do something similar where we will select the minimum element at each step but we will not have to use this extra array ‘Y.’ The algorithm will then be in-place.
This time again we will look for the minimum value in the array to sort it in increasing order of rank.
First, we will scan the whole array to find the minimum. The minimum in the array is ‘1’. Now instead of filling ‘1’ at index 0 at another array ‘Y,’ we will swap ‘1’ with the element at index 0. ‘1’ will move to index 0 and the value at index 0 which is ‘4’ will move to index 1.
Now we will look for the next minimum value. ‘1’ will not be considered in this case. We will scan all the elements from index 1 to index 4 in order to find the second minimum. It is ‘2’ at index 3. Now ‘2’ will be placed at index 1 by swapping ‘2’ with the element at index 1 which is ‘4’.
So the second minimum went to the second position, which is index 1. Now we have to look for the next minimum value in the range from index 2 to index 4, which is ‘3’. It is found at index 4, so it will be placed at index 2 by swapping ‘3’ with the element at index 2, which is ‘5’.
As you can see in each pass we are finding out the element that should go to a particular position. At any point during this whole process, the array is divided into two parts. One part is sorted and the other is un-sorted. With each pass we add one more cell to the sorted part. Eventually, the whole array will be sorted.
If we have ‘n’ elements, after ‘n-1’ passes there will be one cell left, but it will already be at its appropriate position.
This particular in-place logic of selecting the minimum in each pass and putting it at its appropriate position is ‘Selection Sort Algorithm.’
Selection Sort Example Code in C and C++
Now let us look at a C++ program where we will sort the same array as in the above example using selection sort.
#include <iostream>
using namespace std;

void SelectionSort(int X[], int n) {
    for (int i = 0; i < n - 1; i++) {
        int iMin = i;
        for (int j = i + 1; j < n; j++) {
            if (X[j] < X[iMin]) iMin = j;
        }
        int y = X[i];
        X[i] = X[iMin];
        X[iMin] = y;
    }
}

int main() {
    int X[] = {4, 1, 5, 2, 3};
    SelectionSort(X, 5);
    for (int i = 0; i < 5; i++) {
        cout << X[i] << " ";
    }
    return 0;
}
#include <stdio.h>

void SelectionSort(int X[], int n) {
    for (int i = 0; i < n - 1; i++) {
        int iMin = i;
        for (int j = i + 1; j < n; j++) {
            if (X[j] < X[iMin]) iMin = j;
        }
        int y = X[i];
        X[i] = X[iMin];
        X[iMin] = y;
    }
}

int main(void) {
    int X[] = {4, 1, 5, 2, 3};
    SelectionSort(X, 5);
    for (int i = 0; i < 5; i++) {
        printf("%d ", X[i]);
    }
    return 0;
}
How the Code Works
In the main() method, we have created an array ‘X’ of five elements. We are calling the SelectionSort() function and passing the array and the number of elements as arguments inside it. Then we will print the elements as the output.
int main() {
    int X[] = {4, 1, 5, 2, 3};
    SelectionSort(X, 5);
    for (int i = 0; i < 5; i++) {
        cout << X[i] << " ";
    }
    return 0;
}
We will write a function named SelectionSort() that takes the array and the number of elements in the array as arguments. We will run one loop with a variable ‘i’ starting at zero and going up to (n-1). In each iteration of this loop we will place the correct element at the i-th position: first we put the minimum at index 0, then the second minimum at index 1, and so on. In fact we only need to run this loop till (n-2), because once we are done with all ‘i’ up to (n-2), the element at position (n-1) is already at its correct position.
void SelectionSort(int X[], int n) {
    for (int i = 0; i < n - 1; i++) {
        int iMin = i;
        for (int j = i + 1; j < n; j++) {
            if (X[j] < X[iMin]) iMin = j;
        }
        int y = X[i];
        X[i] = X[iMin];
        X[iMin] = y;
    }
}
Inside the SelectionSort() function we have a loop variable ‘j’ and a variable ‘iMin’ that stores the index of the minimum element. To find the minimum for the i-th position, we scan the array from i till (n-1). Initially the i-th element is taken as the minimum; then, while scanning from (i+1) till (n-1), whenever we find a ‘j’ whose element is smaller than the current minimum, we update ‘iMin’. When we come out of this inner for loop, ‘iMin’ holds the index of the minimum element, and we swap it with the element at index i using a temporary variable ‘y’.
Output
Now let’s see the code output. After the compilation of the above code, you will get the following output.
As you will see our numbers have been sorted in increasing values.
Time Complexity
The time complexity of selection sort is O(n^2). The running time is the total running time of all the statements.
Let's say this particular statement takes a constant time C1 in the worst case to execute. It is executed exactly (n-1) times.
int iMin = i;
The next statement, inside the inner for loop, takes at most C2 time per execution and runs n(n-1)/2 times in total:
if (X[j] < X[iMin]) iMin = j;
Also, the last three statements take C3 time in the worst case. They are executed exactly (n-1) times.
int y = X[i]; X[i] = X[iMin]; X[iMin] = y;
Overall time taken can be calculated below as:
T(n) = (n-1)C1 + [n(n-1)]C2/2 + (n-1)C3 = an^2 + bn + c
It belongs to the set O(n^2).
Selection sort performs slowly on large inputs, as O(n^2) is not among the best running times for a sorting algorithm.
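One way to check the n(n-1)/2 comparison count empirically is to instrument the sort (a Python sketch, not the article's C code; the counter is the only addition):

```python
def selection_sort_count(X):
    """In-place selection sort that counts key comparisons."""
    comparisons = 0
    n = len(X)
    for i in range(n - 1):
        i_min = i
        for j in range(i + 1, n):
            comparisons += 1            # one comparison per inner step
            if X[j] < X[i_min]:
                i_min = j
        X[i], X[i_min] = X[i_min], X[i]  # swap into position i
    return comparisons

X = list(range(100, 0, -1))              # worst-looking input: reversed
count = selection_sort_count(X)
print(count, 100 * 99 // 2)  # both 4950: comparisons grow as n(n-1)/2
```

Note the count is the same for any input of size n, sorted or not, which is why selection sort's best and worst cases are both O(n^2).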
Source: https://csgeekshub.com/c-programming/selection-sort-sorting-algorithm/
Viridisify
As usual this is available as, and has been written as, a Jupyter notebook; if you'd like to play with the code, feel free to fork it.
The jet colormap (AKA "rainbow") is ubiquitous, but there is a lot of controversy as to whether it is a good choice (it is far from the best one), and better options have been designed.
The question is: if you have a graph that uses a specific colormap, and you would prefer it to use another one, what do you do?
Well, if you have the underlying data that's easy, but that's not always the case.
So how do you remap a plot which has a non perceptually uniform colormap onto another one? And what happens if there are encoding artifacts and the pixel colors are slightly off?
I came up with a prototype a few months ago, and was asked recently by @stefanv to "correct" an animated plot of Hurricane Matthew, where the "jet" colormap seems to provide an illusion of growth:
Let's see how we can convert a "Jet" image to a viridis based one. We'll first need some assumptions:
- This assumes that you "know" the initial colormap of the plot, and that the encoding/compression process will not change the colors "too much".
- There are pixels in the image which are not part of the colormap (typically text, axes, cat pictures...).
We will try to remap all the pixels that do not fall "too far" from the initial colormap onto the new colormap.
%matplotlib inline
import matplotlib
import matplotlib.pyplot as plt
import numpy as np
import matplotlib.colors as colors
!rm *.png *.gif out*
rm: output.gif: No such file or directory
I used the following to convert from mp4 to an image sequence (8 fps, determined manually), then the sequence of images back to video, and the video to a gif (quality is better than converting to gif directly):
$ ffmpeg -i INPUT.mp4 -r 8 -f image2 img%02d.png
$ ffmpeg -framerate 8 -i vir-img%02d.png -c:v libx264 -r 8 -pix_fmt yuv420p out.mp4
$ ffmpeg -i out.mp4 output.gif
%%bash
ffmpeg -i input.mp4 -r 8 -f image2 img%02d.png -loglevel panic
Let's take our image without the alpha channel, so only the first 3 components:
import matplotlib.image as mpimg
img = mpimg.imread('img01.png')[:, :, :3]
fig, ax = plt.subplots()
ax.imshow(img)
fig.set_figheight(10)
fig.set_figwidth(10)
As you can see it does use "Jet" (most likely),
let's look at the repartitions of pixels on the RGB space...
import numpy as np
from mpl_toolkits.mplot3d import Axes3D
import matplotlib.pyplot as plt
def rep(im, cin=None, sub=128):
    fig = plt.figure()
    ax = fig.add_subplot(111, projection='3d')
    pp = im.reshape((-1, 3)).T[:, ::300]
    if cin:
        cmapin = plt.get_cmap(cin)
        cmap256 = colors.makeMappingArray(sub, cmapin)[:, :3].T
        ax.scatter(cmap256[0], cmap256[1], cmap256[2], marker='.',
                   label='colormap', c=range(sub), cmap=cin, edgecolor=None)
    ax.scatter(pp[0], pp[1], pp[2], c=pp.T, marker='+')
    ax.set_xlabel('R')
    ax.set_ylabel('G')
    ax.set_zlabel('B')
    ax.set_title('Color of pixels')
    if cin:
        ax.legend()
    return ax

ax = rep(img)
We can see a specific cluster of pixels; let's plot the location of our "Jet" colormap and a diagonal of "gray". We can guess that various compression artifacts have jittered the pixels slightly away from their original locations.
Let's look at where the jet colormap is supposed to fall:
rep(img, 'jet')
<matplotlib.axes._subplots.Axes3DSubplot at 0x111c9cc88>
Ok, that's pretty accurate; we can also see that our selected graph does not use the full extent of jet.
In order to efficiently find all the pixels that use "Jet" we will use scipy.spatial.cKDTree in the colorspace. In particular we will subsample the initial colormap into sub=256 subsamples, collect only the pixels that are within d=0.2 of a subsample, and map each of these pixels to the closest subsample.
As we know the subsampling of the initial colormap, we can also determine the output colors.
The pixels that are "too far" from the pixels of the colormap are kept unchanged.
Increasing 256 to a higher value will give a smoother final colormap.
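The cKDTree lookup described above can be shown in isolation with a tiny hypothetical 4-entry "colormap" (the values here are illustrative, not from the notebook): pixels within the distance bound get the index of their nearest colormap entry, and pixels beyond it get the out-of-range index `len(cmap)`.

```python
import numpy as np
from scipy.spatial import cKDTree

# Hypothetical 4-entry colormap in RGB space.
cmap = np.array([[0.0, 0.0, 0.0],
                 [0.3, 0.3, 0.3],
                 [0.6, 0.6, 0.6],
                 [1.0, 1.0, 1.0]])

pixels = np.array([[0.29, 0.31, 0.30],   # jittered, close to entry 1
                   [0.99, 1.00, 0.98],   # jittered, close to entry 3
                   [1.00, 0.00, 0.00]])  # pure red: far from every entry

tree = cKDTree(cmap)
dist, idx = tree.query(pixels, distance_upper_bound=0.2)
# Pixels beyond the bound get index == len(cmap) and an infinite distance.
print(idx)  # [1 3 4]
```

That out-of-range index is exactly what the `mask = (indices == l)` test in the full `convert` function keys on to leave non-colormap pixels (text, axes) unchanged.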
from scipy.spatial import cKDTree
def convert(sub=256, d=0.2, cin='jet', cout='viridis', img=img, show=True):
    viridis = plt.get_cmap(cout)
    cmapin = plt.get_cmap(cin)
    cmap256 = colors.makeMappingArray(sub, cmapin)[:, :3]
    original_shape = img.shape
    img_data = img.reshape((-1, 3))
    # this will efficiently find the pixels "close" to jet
    # and assign them to which point (from 1 to 256) they are on the colormap.
    K = cKDTree(cmap256)
    res = K.query(img_data, distance_upper_bound=d)
    indices = res[1]
    l = len(cmap256)
    indices = indices.reshape(original_shape[:2])
    # pixels further than d from any colormap entry get index == l
    mask = (indices == l)
    remapped = indices / (l - 1)
    mask = np.stack([mask] * 3, axis=-1)
    # keep the "far" pixels as-is, repaint the rest with viridis.
    blend = np.where(mask, img, viridis(remapped)[:, :, :3])
    if show:
        fig, ax = plt.subplots()
        fig.set_figheight(10)
        fig.set_figwidth(10)
        ax.imshow(blend)
    return blend
res = convert(img=img)
rep(res)
<matplotlib.axes._subplots.Axes3DSubplot at 0x113791278>
Let's look at what happens if we decrease our leniency for the "proximity" of each pixel to the jet colormap:
rep(convert(img=img, d=0.05))
<matplotlib.axes._subplots.Axes3DSubplot at 0x1159fd6d8>
Source: https://matthiasbussonnier.com/posts/24-Viridisify.html
Python Programming, news on the Voidspace Python Projects and all things techie.
More Python Stuff & Money
Another random collection. Hopefully coherence will be restored shortly
- Python 2.5 Beta 3 is out and about
- Mark Rees has completed part II of his tutorial on Programming with IronPython and GDATA
- I hope I can find a use for Crunchy. Crunchy is a way of creating fully interactive tutorials with Firefox. See Crunchy Frog News for the low-down
- It looks like the defining feature for Clever Harold, the latest member of the Python-web-framework family, is that it is a full WSGI stack. Wait a minute, isn't that what Pylons is ?
I just bought a laptop. It's an IBM Thinkpad T30 [1], so nothing very new but good enough for commuting.
On the subject of commuting, this has been my week for spending money. I just bought an annual season ticket for Northampton to London. A mere £4324... If you want to work in London, don't live in Northampton. So why don't I move ? No way, if you want to work in London, don't live in London either.
Like this post? Digg it or Del.icio.us it.
Posted by Fuzzyman on 2006-08-04 13:54:04 | |
Categories: Python, Computers, Life
Big Number Maths
If you have to do maths with really big numbers, which data-type do you choose ? Integers, floats or even the new-fangled decimals [1] ?
There's another 'debate' raging on Python-Dev about floating point maths. Ronald Oussoren just posted this example :
>>> v = 1e200
>>> int(v)
99999999999999996973312221251036165947450327545502362648241750950346848435554075534196338404706251868027512415973882408182135734368278484639385041047239877871023591066789981811181813306167128854888448L
>>> type(v)
<type 'float'>
>>> t = int(v)
>>> t ** 2
9999999999999999394662444250207242350328975537437122472339781620627054208687723630273803080019321330542305583946752893233248807023279528544321615522160248929124666144096269561533145561164738489983397621092322208138630994725213747351190385096618755256077267472586468217736468683611398422884121732612670396695303894425945224331154483477963396905445761715933439520020822843337114038314499908946523848704L
>>> float(t**2)
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
OverflowError: long int too large to convert to float
Of course I knew that Python long integers were unbounded, but this is just silly.
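The behaviour above follows from the finite range of IEEE-754 doubles: Python's unbounded ints can hold ~1e400 happily, but converting back to a float fails past the largest finite double. A quick sketch (Python 3 syntax; the OverflowError is the same):

```python
import sys

v = 1e200
t = int(v)        # exact integer value of the float 1e200
big = t ** 2      # Python ints are unbounded, so this is fine (~1e400)

print(sys.float_info.max)  # ~1.8e308, the largest finite double
try:
    float(big)             # ~1e400 cannot be represented as a float
except OverflowError as e:
    print("OverflowError:", e)
```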
Like this post? Digg it or Del.icio.us it.
Posted by Fuzzyman on 2006-08-03 21:59:31 | |
Categories: Python, Hacking
Continuous Integration, Performance & Ruby on Rails
How long would it take to create a simple (and useful) AJAX application using Ruby on Rails ? How about a lunch break...
This is a short story, but I'll try to draw it out as much as possible.
At Resolver Systems we run continuous integration (using Cruise Control .NET). If you check in anything that breaks the build, we all know about it within fifteen minutes [1].
One of our early user stories was a performance related one. A standard set of operations is done on a moderately complex dataset. If it takes more than a second, then the test fails (and the build is broken). This has caught a few times when changes affected performance.
Sometimes the degradation happens slowly, over a few checkins. It can be difficult to work out exactly which change made the difference.
Over lunch break today Andrzej (who is now an accomplished Python programmer, but used to do web-development with Ruby on Rails), changed the user story to put the results into a database with the revision number and the machine they run on.
One small application later, and we can view the performance of Resolver as it changes with code revisions [2] :
It's nice to see Ruby in action, the AJAX (think column sorting and data editing) is a nice touch.
An interesting point is that the sort of meta-programming that Ruby makes easy, means that Andrzej (like a lot of Ruby programmers) feels that he is more of a Rails programmer than a Ruby programmer.
I guess this is a double edged sword for a language, however at least in terms of the hype-war, Rails is kicking Python's butt when it comes to web-dev.
Like this post? Digg it or Del.icio.us it.
Posted by Fuzzyman on 2006-08-02 22:01:47 | |
Categories: Work, General Programming
Not Invented Here
Here are some projects I've stumbled across recently that you might find interesting. The first is useful, the second promising, and the third weird.
Bruno Thoorens: Python Projects
This contains two projects of note. The first is a GUI front end to py2exe.
It provides a convenient way of creating Python executables without having to configure the setup.py yourself. Very nifty.
Now what would be even nicer (why hasn't someone done this already ?), would be to create a front end to distutils itself, including all its arcane (and undocumented) jiggery pokery like package_path and extra_dir.
The other package of note from Bruno's site, is a wrapper around the Tk Tile Extension. It includes a binary for windows which isn't available from the Tile Project Page [1].
Tile is a themeing engine for Tk which brings a much nicer (native) appearance to GUIs created with Tkinter. It will be a standard part of Tk 8.5, but currently exists as a separate project. Unfortunately it uses a completely different syntax to standard Tk widgets for styling, so a Python wrapper is very welcome.
Mark Rees has just started a competing^H^H^H^H^H alternative series of tutorials on programming with IronPython. This is just the first entry, but it looks very promising.
He promises to cover a different set of topics to the ones I have planned, so I'm looking forward to it.
This is weird, but in a cool kind of way. Now OCaml programmers can take advantage of some of the awesome Python extension modules written in C.
It claims to have implemented the Python C-API for OCaml, presumably doing type conversion between Python C objects and OCaml types. Far out...
Whilst I'm linking to other people's stuff, here are some more :
Stiff asks, great programmers answer
Great programmers answer questions about programming. Guido just can't take it seriously, very amusing. (The other answers are more informative though...)
Let's Build A Compiler For The CLR
Raj has finally got his own website. This is the new location for his excellent tutorial on writing compilers for the Common Language Runtime (the .NET framework).
When the "best tool for the job" isn't...
A thought provoking blog entry about why "The Best Tool for the Job" might not always be the best tool for the job... Challenging a programmer's truism, by the ever interesting Creating Passionate Users.
Finally one that has nothing to do with programming. This is highly topical, very funny and totally politically incorrect. Scott Adams solves the middle eastern crisis by the judicious application of air conditioning.
Oh, and by the way, The Trunk Freeze for Python 2.5 Beta 3 has just happened. There will probably be a release candidate out about August 18th. The final release will follow on its heels on September 12th or thereabouts. As if that wasn't enough, there is another Python web framework surfacing: Clever Harold. Good name, but God knows what it's for.
Like this post? Digg it or Del.icio.us it.
Posted by Fuzzyman on 2006-08-02 21:03:18 | |
News from Several Flavours of Python
Recent news from several of the different flavours of Python that exist :
CPython 2.5
The release schedule for Python 2.5 is slightly delayed. It looks like we will have a third beta, and the final release a bit later than expected.
This is mainly because Python 2.5 has lots of changes, particularly in the C API to support 64 bit platforms, so more testing is needed.
Barry Warsaw has just completed porting MailMan to Python 2.5. He didn't have many difficulties, but different applications will find different parts of the changes affect them.
I even made my own small contribution to Python 2.5. Even though Thomas Heller provided the patch for the bug I reported, I did provide a patch for test_shutil.py which tests the copytree function.
IronPython 2.5
I missed this news with the release of IronPython 1.0 RC1, mainly because my email client marked the announcement as spam.
Not content with implementing Python 2.4, the IronPython team have started on Python 2.5.
In addition, RC1 has several new Python 2.5 features that can be enabled with the experimental switch -X:Python25; by default these are disabled:
PEP 308: Conditional Expressions
PEP 343: The 'with' statement. (As per PEP 343, you need to do 'from __future__ import with_statement' to enable the 'with' statement.)
Other Language Changes
- The dict type has a new hook for letting subclasses provide a default value with the '__missing__' method.
- Both 8-bit and Unicode strings have new partition(sep) and rpartition(sep) methods.
- The startswith() and endswith() methods of string types now accept tuples of strings to check for.
- The min() and max() built-in functions gained a ‘key’ keyword parameter.
- Two new built-in functions: any() returns True if any element of an iterable is true, and all() returns True if every element is true.
- The list of base classes in a class definition can now be empty.
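To make the list above concrete, here is a small sketch exercising several of these features. It is written as plain Python rather than IronPython-specific code -- all of the features shown are standard in modern Python.

```python
# PEP 308: conditional expressions
n = 7
parity = "odd" if n % 2 else "even"

# dict subclasses can supply defaults via __missing__
class ZeroDict(dict):
    def __missing__(self, key):
        return 0

# str.partition() splits on the first occurrence of the separator
head, sep, tail = "key=value=extra".partition("=")

# startswith()/endswith() accept a tuple of alternatives
is_image = "photo.jpeg".endswith((".png", ".jpg", ".jpeg"))

# min()/max() take a key function; any()/all() reduce iterables of booleans
longest = max(["pear", "fig", "banana"], key=len)
all_positive = all(x > 0 for x in [1, 2, 3])

print(parity, ZeroDict()["n/a"], head, tail, is_image, longest, all_positive)
# -> odd 0 key value=extra True banana True
```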
wxPython for PythonCE
Ingmar Steen has announced the first release of his port of wxPython for PythonCE. PythonCE runs on PocketPC and Windows Mobile devices.
From the screenshots it looks very cool.
Luke Dunstan has also made some progress porting Python 2.5 to the PocketPC platform.
PyPy & ctypes
Lawrence Oluyede has been working on porting CPython extensions for PyPy, using ctypes.
He's got SSL Working and shows how to use the ctypes code generator to create wrappers for libraries (like OpenSSL in this example).
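For a flavour of the kind of low-level plumbing ctypes provides (without requiring OpenSSL or any external library to be installed), here is a minimal sketch using only the ctypes primitives from the Python standard library; it is illustrative only and unrelated to Lawrence's actual wrapper code.

```python
import ctypes

# Declare a C-style struct and shuttle its raw bytes around -- the same
# building blocks a generated library wrapper is made of.
class Point(ctypes.Structure):
    _fields_ = [("x", ctypes.c_int), ("y", ctypes.c_int)]

p = Point(3, 4)
buf = ctypes.create_string_buffer(ctypes.sizeof(Point))
ctypes.memmove(buf, ctypes.byref(p), ctypes.sizeof(Point))

# Reinterpret the raw bytes back as a Point
q = Point.from_buffer_copy(buf.raw)
print(q.x, q.y)  # -> 3 4
```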
Like this post? Digg it or Del.icio.us it.
Posted by Fuzzyman on 2006-07-30 23:03:26
Categories: Python, IronPython
Zope 3 & Movable Python
A Movable Python user JimC has got Zope 3 working with Movable Python.
Copy over all the modules listed in c:\python24\Zope-wininst.log.
Movpy 2.4 includes a version of the pytz module, but it's incompatible with Zope [1] (doesn't include the UTC singleton), so you need to copy over a new one and prepend its location to sys.path.
The mkzopeinstance script boils down to:

    import zope.app.server
    from zope.app.server.mkzopeinstance import main
    main(from_checkout=False)
The absolute path to the instance directory gets hard-coded into the instance files all over the place, so you have to go through and replace it with movpy-relative paths in all the scripts. The .bat files are more of a pain, I just got rid of them.
The absolute path also gets hard-coded into the zope.conf file. I expect there's a way to override that from the command line, I just modified the runzope script to over-write the file from a template each time it's run.
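A sketch of what that template trick might look like. The template filename and the `@INSTANCE@` placeholder are invented for illustration -- the real runzope modification isn't shown in the post:

```python
import os

def write_zope_conf(template_path, conf_path):
    """Regenerate zope.conf from a template each run, substituting the
    current instance directory for a placeholder so no absolute path is
    ever baked into the config file itself."""
    # Hypothetical placeholder name -- the actual template isn't shown above.
    instance_dir = os.path.dirname(os.path.abspath(conf_path))
    with open(template_path) as f:
        text = f.read()
    with open(conf_path, "w") as f:
        f.write(text.replace("@INSTANCE@", instance_dir))
```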
Movable Python can be obtained from Movpy on Tradebit.
There is a free trial version for Python 2.3 at Movpy Demo Version.
Like this post? Digg it or Del.icio.us it.
Posted by Fuzzyman on 2006-07-29 14:10:48
Categories: Python, Projects
IronPython & Windows Forms VIII
Note
This article has moved.
You can find the whole tutorial series at IronPython & Windows Forms.
Like this post? Digg it or Del.icio.us it.
Posted by Fuzzyman on 2006-07-29 03:11:59
Categories: Python, Writing, IronPython
This work is licensed under a Creative Commons Attribution-Share Alike 2.0 License.
+-------------------------------------------------------------+
| MyODBC |
| Mac OSX |
+-------------------------------------------------------------+
INTRODUCTION
---------------------------------------------------------------
These are my notes from my experiences building a binary MyODBC
distribution on/for OSX. This was done during the last week in
July 2004 with MyODBC v3.51.9.
WHAT YOU NEED
---------------------------------------------------------------
OSX
---
I did my work on a modest iBook running a fresh install of OSX
10.3 (Panther) with a complete online update.
Well; actually I had to reinstall everything after ending up
with multiple ODBC systems on my OSX. Apple has not done a good
job of taking charge of the core ODBC stuff so the driver
vendors and others are all trying to lead it in various self
serving directions and this sometimes results in redundent driver
managers and drivers with all kinds of degradation of the user
experience.
In the UNIX world libraries can be complicated by the fact that
there are different ways to handle share libraries. OSX adds more
complexity to this by also having 'Bundles' and 'Frameworks'.
Oh joy. I did not get too deep into Frameworks but Bundles are
important here.
Xcode
-----
I installed Xcode v1.1. This is the version which came with my
OSX distribution. Its gcc with some GUI stuff.
ODBC
----
My goal was to get things to work with the default ODBC
environment. This is a version of iODBC built by
Apple - unfortunately this has many problems. I think it is
preferable to live with the problems and bug Apple to update
ODBC in future releases than to install a new Driver Manager (as
some are doing).
MySQL
-----
I used the latest general release - MySQL v4.0.20. I tried to
use the binary distribution but failed to get MyODBC to build
against it properly. Seems it was fine for building a regular
share library but I was left with 4 or 5 unresolved references
when I tried to build a bundle. So I switched to using the
source tar-ball off of the web.
MyODBC
------
In the past I have built myodbc on OSX using the latest source
distribution but in this case I wanted to build directly from
source in the source repository. In this case it is v3.51.9.
BUILDING
---------------------------------------------------------------
Qt
--
MyODBC includes Windows-specific GUI bits and Qt GUI bits. The
Windows-specific bits can be found in the driver itself, while on
other platforms a separate setup library is created using Qt.
This means we should install Qt when working on OSX.
I will not repeat all of the Qt install documentation, but here are a couple of tips.
1. Make sure you have your Qt environment variables set. I
put the following in ~/.bashrc;
export QTDIR=~/SandBox/qt-mac-free-3.3.2
PATH=$QTDIR/bin:$PATH
export PATH
Since bash is not my default shell I enter bash as soon
as I open an xterm using;
$ bash
2. We want a static Qt lib so that we do not have to
distribute any extra libraries so when you configure
Qt I suggest doing the following;
$ ./configure -static
Qt will take a long time to build. This tired old iBook
spent most of a day spinning its bits.
MySQL
-----
I did not want to build or install the server parts - I just
wanted the client stuff. I also wanted to link myodbc against
a static mysql client lib so I did the following;
$ ./configure --without-server --disable-shared
$ make
$ sudo make install
To provide the multi-threaded version of the mysql client I
did the following;
$ make clean
$ ./configure --without-server --disable-shared --enable-thread-safe-client
$ make
$ sudo make install
This resulted in MySQL client stuff being installed in
/usr/local. In particular; the libs went into
/usr/local/lib/mysql and the header files went into
/usr/local/include/mysql and some other stuff into other places
in /usr/local.
MyODBC
------
I cloned the latest sources for myodbc from the source code
repository in the usual way. I had already removed some
libtool files from the source tree in previous work so as
to ensure that the libtool on the build system gets used
when building from source repository. The theory here is that
if you have GNU auto-tools then you should have libtool.
Unfortunately, OSX does not include libtoolize as part of
libtool, so I had to manually link in the system's libtool
files. So I started with the following;
$ ln -s /usr/share/libtool/config.guess config.guess
$ ln -s /usr/share/libtool/config.sub config.sub
$ ln -s /usr/share/libtool/ltmain.sh ltmain.sh
I then did the following to allow the gnu auto-tools a
chance to create configure script and whatever other magic
it performs.
$ make -f Makefile.cvs
NOTE: The above link steps are now in Makefile.cvs
so do "$ make -f Makefile.cvs osx".
Unfortunately myodbc could not find all of the mysql client
stuff so I had to help it along with a configure option.
So I did the following to start the build;
$ ./configure --with-mysql-path=/usr/local
$ make
This build process will fail because it lacks a config.h
needed by an odbc include file. I had the source for iodbc
lying around, so I did a quick configure
in there (no build or install) and then copied the config.h
to /usr/include. This is a hack but is probably safe for
my work here.
Starting myodbc make again will result in a link error. Seems
like adding a link option to the Makefile in the driver and
the driver_r dirs solves this problem. Unfortunately I could
not figure out the best way to add this properly to the gnu
auto build stuff, nor am I certain this link option is the
best solution. But switching the link lines in those two
make files as follows seems to solve the link error;
LINK = $(LIBTOOL) --mode=link $(CCLD) $(AM_CFLAGS) $(CFLAGS) \
$(AM_LDFLAGS) $(LDFLAGS) -Wl,-single_module -o $@
NOTE: Putting the "-Wl,-single_module" link option in other
places on the link line may not work.
Now doing a make again will result in a complete build so
do the usual to install it.
$ sudo make install
This drops some libmyodbc*dylib share libs into
/usr/local/lib. You can do the following to see that the lib
type is the normal share library;
$ file /usr/local/lib/libmyodbc3-3.51.09.dylib
This will show "Mach-O dynamically linked shared library ppc".
This library will work under the following circumstances;
1. if you link your app (ie imyodbc) directly to the driver
2. using odbctest (not sure why this works and other stuff
fails)
But it will fail to load under the following circumstances;
1. if you link your app (ie imyodbc) against the driver
manager (which is the most common thing to do).
2. if you test with a standard GUI application such as
FileMaker v7.
So lets get rid of the files we just installed into
/usr/local/lib since they are not linked for use with the
driver manager. The following should do the job, but you
probably want to make sure you do not delete more files
than wanted.
$ sudo rm /usr/local/lib/libmyodbc*
Now lets link a bundle and place it in a better place (myodbcinst
wants the drivers to be in /usr/lib).
$ cd driver/.libs
$ gcc -bundle -flat_namespace -o libmyodbc3-3.51.09.dylib *.o \
      -L/usr/local/lib/mysql -lmysqlclient -lz -liodbcinst
$ sudo cp -R *lib /usr/lib
$ sudo cp *la /usr/lib
Do the same for the multi-threaded version;
$ cd driver_r/.libs
$ gcc -bundle -flat_namespace -o libmyodbc_r3-3.51.09.dylib *.o \
      -L/usr/local/lib/mysql -lmysqlclient -lz -liodbcinst
$ sudo cp -R *lib /usr/lib
$ sudo cp *la /usr/lib
Now check the file type on this library;
$ file /usr/lib/libmyodbc3-3.51.09.dylib
You should see "Mach-O bundle ppc". This should make the
driver manager happy.
NOTE: I assume anyone wanting to link their app directly
to myodbc will have the skills to build an
appropriate myodbc share lib type from myodbc sources
so I simply replace the normal share library with the
bundle - with no intention of including the normal
share lib in a final myodbc binary distibution for
OSX.
setup
-----
Now make the setup library.
$ cd setup
$ qmake
$ make
This will produce libmyodbc3S.dylib (a standard shared lib)
and related symbolic links in the .tmp dir. You want these in
/usr/lib so do the following;
$ cd .tmp
$ sudo cp -R lib* /usr/lib
NOTE: Ignore dl warnings - but these should get sorted
out at some point in the future.
myodbcinst
----------
Ensure that you have the ODBC config files we are going
to use. They are;
~/Library/ODBC/odbcinst.ini
~/Library/ODBC/odbc.ini
If they are missing, simply create empty ones. In
any case, ensure that you have read/write privs. on
them.
Now make myodbcinst and use it to register the driver. The
location of the driver is hard-coded to /usr/lib so either
get the driver in /usr/lib or edit myodbcinst.
$ cd myodbcinst
$ make -f Makefile.osx
$ sudo myodbcinst -i
$ sudo myodbcinst -s
This will register the driver in;
~/Library/ODBC/odbcinst.ini
and ensure that a sample DSN (myodbc) exists in;
~/Library/ODBC/odbc.ini
The DSN details may need to be edited to match your
environment. There are several ways to do this;
1. use a text editor to edit ~/Library/ODBC/odbc.ini
(best choice at this point)
2. use gui ODBC Administrator (recommended after
install)
3. use "myodbcinst -e" to get a gui dialog (not best
on OSX)
4. use MYODBCConfig to get gui dialog (recommended
during install)
imyodbc
-------
Now make imyodbc by doing the following;
$ cd imyodbc
$ make -f Makefile.osx
MYODBCConfig
------------
Now make MYODBCConfig by doing the following;
$ qmake
$ make
This will result in MYODBCConfig.app in the current
dir. Execute this GUI app from an xterm with;
$ open MYODBCConfig.app
TESTING
---------------------------------------------------------------
myodbcinst
----------
This is a new addition to the myodbc source. This has been
created to aid the driver installer. It can register/deregister
the driver and even add a sample data source name. Most
importantly it does this using the standard installer ODBC API
so it *should* work regardless of the platform and the ODBC
system vendor.
dltest
------
This new addition to myodbc source is a tool which can be used
to ensure that the driver can be explicitly loaded using
libtool. It can also check for symbols (ie functions).
This program is not part of the normal build process for myodbc
but can be easily built manually by going to the directory and
running make with the appropriate makefile. For example;
$ make -f Makefile.osx
NOTE: This is a good test for a normal shared library and seems
      to be useful for the bundle-type shared library, but I am not
sure that the default driver manager actually uses
libtool. This probably works well on OSX with the help of
dlcompat.
odbctest
--------
This is a standard feature of ODBC on OSX. It can be used to test
connecting to the server and submitting SQL to the server.
NOTE: odbctest can be used to test myodbc driver but the test
may be somewhat misleading as odbctest seems to work fine
with a normal lib type - but even a simple C test
program will show that the driver manager fails to load
such a library. So odbctest does some magic here. I
even downloaded the sources for it and built it without
being able to recreate this minor miracle.
imyodbc
-------
This new addition to myodbc source is a tool like the mysql
command-line tool but is based solely upon ODBC calls. Use this to
test connecting to the server and submitting SQL to the server.
This can be linked directly to the driver or to the driver manager
providing more testing options.
This program is not part of the normal build process for myodbc
but can be easily built manually by going to the directory and
running make with the appropriate makefile. For example;
$ make -f Makefile.osx
By default; this will link against the driver manager. Do the
following to link directly against the driver (you may need the
driver built as a standard library for this);
$ make -f Makefile.osx todriver
INSTALLER
---------------------------------------------------------------
Copy the osx/MyODBC dir to some place like your home dir.
Go into the MyODBC/resources dir and update the html files as
needed - which means the version number at least.
Go into each dir in the MyODBC/root dir and check for any
BillOfMaterials.txt files. Where found; copy appropriate
files to the dir and then remove the BillOfMaterials.txt file.
You also want to remove any hidden dirs and files and the
SCCS dirs wherever they are found in MyODBC.
Now open the mac-install.pmsp like this;
$ open mac-install.pmsp
Edit the "Files" and "Resources" tabs to reflect your
environment.
Then do File -> Create Package. Create the file as;
MyODBC-ver-OSX.pkg
Note that a pkg file is actually a dir - which is not
suitable for distribution. Put this dir into a new dir and
include any additional readme as needed in there as well.
Then use "/Applications/Utilities/Disk Utility.app" to create
a disk image (dmg file).
MORE TESTING
---------------------------------------------------------------
We'll start testing the install by trying it on the current machine.
First remove the working files - which would be something like;
$ sudo rm /usr/lib/libmyodbc*
$ sudo rm /usr/bin/*myodbc*
$ sudo rm ~/Library/ODBC
$ sudo rm -r /Library/Receipts/MyODBC*
NOTE: The above is, in practice, an uninstall.
Now open the dmg and the pkg in the usual fashion.
Complications
-------------
- it is very possible that non-standard and even multiple
ODBC systems can be installed on OSX which complicates
things in a variety of ways - perhaps most notably in that the
ODBC config files may exist in different places and with diff
editing rules. For example; FileMaker v7 comes with DataDirect
ODBC drivers AND a Driver Manager of some sort.
- standard installer API calls do not create config dirs/files,
they are however created by ODBC Admin gui and can then be
updated using ODBC installer API
- the default ODBC installer on OSX does not try to call
the drivers ConfigDSN()
RECOMMENDATIONS
---------------------------------------------------------------
1. Need GNU auto-tools expert to work with OSX platform
specialist (someone who really knows gcc/ld/libtool well would
also do) to;
- ensure that the driver is linked correctly. ie no
manual editing of make files after configure.
- address link warnings
2. Need GNU auto-tools expert to overcome following problems;
- need for iodbc config.h
- using libtoolize where it exists and otherwise linking
libtool files into source dir
3. Need graphics/web/UI person to jazz up installer and GUI
bits.
4. Need to lean on Apple to make FileMaker use default ODBC
system and to make some other enhancements and fixes in ODBC;
- make calls to ConfigDSN() if driver has it
- make installer API create ODBC config files (ini files)
not just the ODBC Admin GUI.
- and many other non-conformance issues and bugs (see
source code - search for OSX conditions)
5. Possibly create a custom installer which can install, uninstall
and configure myodbc. This can be good for a number of reasons;
- myodbcinst, used in PackageMaker install, does not have a
window handle to give ConfigDSN() so no GUI can be provided
to allow the user to tweak their first DSN during install. They
must find and run the ODBC Admin tool after the install.
- PackageMaker does not seem to allow for an uninstall?
- the installer can be more intelligent and look nicer
Finally; OSX Users demand the best User experience from their
products but with ODBC - things fall short of great. A
lot can be done to improve this and make it easier for OSX Users,
many of whom are web developers or otherwise use mysql.
---
Peter Harvey
August 2004
Hello Community!
Today we have another task for you to help contribute to our collection of useful ReadyAPI content.
Here is the task: Create a Groovy script that will send an email when an assertion fails
Difficulty:
The idea of the script is to run a send email test step when the assertion fails. Please note that the email should be sent if any assertion for any test step in a project fails.
To complete the task you should write a script for one of the events.
Good luck😊
Task: Create a Groovy script that will send an email when an assertion fails
This is a solution created for [TechCorner Challenge #6]
I thought I posted a reply but it disappeared.
// set project reference
def project = context.project;

// set testTypes. Can expand for more test step types that allow assertions.
def testTypes = [com.eviware.soapui.impl.wsdl.teststeps.WsdlTestRequestStep,
                 com.eviware.soapui.impl.wsdl.teststeps.RestTestRequestStep];

// start an error list.
def errors = [];

// Walk through the test suites, test cases, and test steps.
for (ts in project.getTestSuiteList())
{
    for (tc in ts.getTestCaseList())
    {
        for (testType in testTypes)
        {
            for (step in tc.getTestStepsOfType(testType))
            {
                for (assertion in step.getAssertionList())
                {
                    if (assertion.errors != null)
                    {
                        def errorMsg = ts.getName() + " - " + tc.getName() + " - " + step.getName();
                        errorMsg += " has failed with the error: " + assertion.errors.toString();
                        errors.add(errorMsg);
                    }
                }
            }
        }
    }
}

if (errors != [])
{
    // def sendMail = context.project.testSuites["TestSuiteWithSendMail"].testCases["TestCaseWithSendMail"].testSteps["Send Mail"];
    def body = "";
    for (error in errors)
    {
        body += error + "\n";
    }
    def sendMail = context.testCase.testSteps["Send Mail"];
    sendMail.setMessage(body);
    sendMail.setSubject("Project run Errors");
    sendMail.run(context.testRunner, context);
}
Hi @nmrao,
Yes, I agree with you that this task isn't so realistic 😉
But, maybe, this is a regression test for some basic functionality that always should pass.
Or, we can make a task more accurate - send an email only if an assertion for the "Very Important" Test Step fails.
Thank you for the comment Rao!
Who else would like to participate?🙂
@msiadak @richie @krogold @HimanshuTayal @Radford
Hey @sonya_m
I wont be trying this - this is way beyond my coding skills - I wouldn't know where to start - its just too different to what I've tried before - (however, I'm more than happy to steal @msiadak's code for future reference) - but I cant wait for next week's task!
nice one,
rich
@msiadak great solution!
As for the disappeared comment - the spam filter got you. This was not normal behavior and I made sure that never happens to you again🙂
@richie Glad to hear this! I am about to post a new task🙂
In this series of articles, we’ll show you how to use a Deep Neural Network (DNN) to estimate a person’s age from an image.
This is the second article of the series, in which we'll talk about the selection and acquisition of the image dataset. This dataset is the first core component you'll need to solve any image classification problem, including age estimation. It is used in the DL pipeline for training our CNN to distinguish images of human faces and classify them into age groups – child, teen, adult, and so on. The data will also be used to estimate the precision of our CNN model, that is, how correctly it predicts a person's age from a picture.
What images do we need in our dataset? As we try to estimate people’s age, we’ll need images of human faces. We must know the age of the person in each image. Using DL terms, our images need to be "labeled" with the age of the person measured in years.
How many pictures do we need? This question is really hard to answer exactly. The number depends on the problem we are solving and the goal we've set. DL researchers say that solving a classification problem requires at least 1,000 examples per class. The minimum number of data samples for successful training of a CNN stems from the problem of overfitting. If our dataset is insufficient for CNN training, our model will not be able to generalize. The CNN model would then be very precise on the training data, but it would give bad results when handling the testing data.
As we stated in the first article of this series, we’ll consider age estimation as a classification problem. So we need to split the entire age range into several age groups, and then classify a person as belonging to one of these groups. For practical purposes, let’s use the following age groups:
With ten groups (classes), we’ll need at least 10,000 images for CNN training. We’ll need extra images for testing the quality of CNN after the training. Commonly, testing requires 10-20% of the training data. So, we’ll need at least 11-12 thousand images in our dataset to train and test our CNN.
Don’t forget: not only are the total number of images important, but so is the sample distribution. The ideal dataset includes the same number of samples in each class to avoid the class imbalance issues. Another important aspect: a subset of images for every age group must include samples for a range of conditions: smiling and serious faces; persons with and without glasses; men and women, and so on. This will allow CNN to extract the various features from the images and select only those features that help estimate age.
After we decide what data we need, it is time to acquire the images. Fortunately, there are many open datasets of faces, which we can use for our purposes. A quick Internet search produces several datasets with age-labeled human faces:
We’ll use the UTKFace dataset, which contains images with properly aligned and cropped faces, single face per image.
As you can see, every file name contains three prefix numbers. The first number is the age of the person in years, the second is its gender label, and the last one is the ethnicity label.
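Those prefix numbers can be pulled out with a few lines of Python. The trailing timestamp field shown here is an assumption based on the public UTKFace naming scheme (`<age>_<gender>_<race>_<timestamp>.jpg`):

```python
def parse_utk_name(fname):
    # Only the three leading numeric fields matter for labeling:
    # age in years, gender label, and ethnicity label.
    age, gender, race = fname.split("_")[:3]
    return int(age), int(gender), int(race)

print(parse_utk_name("25_0_1_20170116174525125.jpg"))  # -> (25, 0, 1)
```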
For convenience, let’s organize the dataset on our working computer in the following directory structure:
C:\Faces
C:\Faces\Results
C:\Faces\Testing
C:\Faces\Training
C:\Faces\UTKFace
The UTKFace folder contains the original images from the UTKFace dataset, with the images for ages greater than 100 removed. Now we need to split the original dataset into the training and testing parts. This simple Python class will do the splitting:
import os
from shutil import copyfile

class DataSplitter:
    def __init__(self, part):
        self.part = part

    def split(self, folder, train, test):
        filenames = os.listdir(folder)
        for (i, fname) in enumerate(filenames):
            src = os.path.join(folder, fname)
            if (i % self.part) == 0:
                dst = os.path.join(test, fname)
            else:
                dst = os.path.join(train, fname)
            copyfile(src, dst)
The part parameter passed into the class constructor is the part of the original data that will be used for testing. The folder parameter is the path to the original dataset directory, while the train and test parameters stand for the paths to the training and testing dataset directories, respectively. Executing the following Python code will split our dataset into the training and testing sets:
ds = DataSplitter(10)
ds.split(r"C:\Faces\UTKFace", r"C:\Faces\Training", r"C:\Faces\Testing")
The original data is split with a 1:9 ratio. That is, every tenth original image will end up in the testing dataset, and the remaining nine will stay in the training one.
In this article, we explained what data we’ll need to solve the problem of age estimation with DL and CNN. We found some open datasets suitable for our purposes and selected one which we’ll use further in this series. We then split our dataset and organized it in our workspace conveniently for further use.
We are now ready for the next step – CNN.
I'm having trouble linking .h and .c files. I've read some threads regarding this problem, but all of them are a bit vague and I still can't fully grasp the concept, so I'm having a lot of linking problems. Say I have b.c and b.h, which I will use in a.c. I'm confused about whether to include b.h in both a.c and b.c, because b.c itself needs to know the structure defined in b.h. I have some functions whose prototypes are in b.h and which are defined in b.c; they also use the structure in b.h. I am not including b.h in b.c because, as far as I know, b.h is more like an interface to a.c, which will use the functions in b.c. Here is a clearer example:
b.h file
typedef struct{
int x, y;
}myStruct;
void funct1(myStruct);
void funct2(myStruct);
b.c file
void funct1(myStruct x)
{
//do something
}
void funct2(myStruct y)
{
//do something
}
a.c file
#include "b.h"
int main()
{
myStruct x;
funct1(x);
funct2(x);
return 0;
}
You do indeed need to #include "b.h" in b.c. Each file is compiled separately before the linker takes over, so it doesn't matter that you have included b.h in a.c; b.c is compiled by itself and has no idea about the contents of b.h unless you include it.
Here's an example of a #include guard:

// some_header_file.h
#ifndef SOME_HEADER_FILE_H
#define SOME_HEADER_FILE_H

// your code

#endif
When some_header_file.h is included anywhere, everything in between the #ifndef and the #endif will be ignored if SOME_HEADER_FILE_H has been defined, which will happen the first time it is included in the compilation unit.

It is common practice to name the #define after the name of the file, to ensure uniqueness within your project. I like to prefix it with the name of my project or namespace as well, to reduce the risk of clashes with other code.
NOTE: The same header file CAN be included multiple times within your project even with the above include guard, it just can't be included twice within the same compilation unit. This is demonstrated as follows:
// header1.h
#ifndef HEADER_H
#define HEADER_H
int test1 = 1;
#endif

// header2.h
#ifndef HEADER_H
#define HEADER_H
int test2 = 2;
#endif
Now let's see what happens when we try to include the above two files in a single compilation unit:

// a.cpp
#include "header1.h"
#include "header2.h"
#include <iostream>

int main()
{
    std::cout << test1;
    std::cout << test2;
}
This generates a compiler error because test2 is not defined - it is ignored in header2.h because HEADER_H is already defined by the time that is included. Now if we include each header in separate compilation units:
// a.cpp
#include "header2.h"

int getTest2() { return test2; }

// b.cpp
#include "header1.h"
#include <iostream>

int getTest2(); // forward declaration

int main()
{
    std::cout << test1;
    std::cout << getTest2();
}
It compiles fine and produces the expected output (1 and 2), even though we are including two files which both define HEADER_H.
Red Hat Bugzilla – Bug 845877
[patch] maven-archetype fails to build from source. OSGI info for catalog.jar
Last modified: 2012-08-07 05:38:36 EDT
Created attachment 602399 [details]
Fix-jetty-namespace-Add-OSGI-manifest-to-catalog.jar
Description of problem:
1. FTBFS
2. I'm building eclipse-m2e, and it needs catalog.jar from this package to have OSGI info.
Additional info:
The mass rebuild[1] shows that this package is broken. The problem is that jetty has a new namespace since this was last built. All references to jetty have been updated.
I've added some handmade OSGI information to catalog.jar. I think it's okay, but it probably should be checked! It works for my needs at least.
Builds successfully for F18[2] and F17[3].
[1]
[2]
[3]
Created attachment 602603 [details]
use pom macros to inject OSGI info instead of hand-carving it
I've modified the patch so that the OSGI info is added by maven-plugin-bundle, using pom macros.
I'm not sure if OSGI info should be added for all jars: I didn't think of it at the time, but with maven-plugin-bundle it would be straightforward I imagine.
We'll see about addition of this for all jars. I'll file an upstream bug
Patch applied with a few more changes on top to cleanup export information
On Wed, May 9, 2012 at 10:16 AM, Eric V. Smith <eric at trueblade.com> wrote:
> In the pep-420 branch I've checked in code where find_loader() returns
> (loader, list_of_portions). I've implemented it for the FileFinder and
> zipimport.
>
> loader can be None or a loader object.
> list_of_portions can be an empty list, or a list of strings.
>
> To indicate "no loader or portions found", return (None, []).
>
> If loader is not None, list_of_portions is ignored.
>
> I'm pretty happy with this API. Comments welcome.

Yeah, I think that works well and covers all the even vaguely reasonable
cases. Besides, any desire for truly exotic import behaviour can always
be handled via sys.meta_path.

Cheers,
Nick.

--
Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia
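A rough sketch of the contract Eric describes. `PortionFinder` and `DummyLoader` are hypothetical stand-ins, not the real FileFinder/zipimport code -- they only illustrate the three possible return shapes of `find_loader()`:

```python
import os

class DummyLoader:
    # Placeholder loader object; a real one would implement the loader API.
    def __init__(self, filename):
        self.filename = filename

class PortionFinder:
    def __init__(self, path):
        self.path = path

    def find_loader(self, fullname):
        # Returns (loader, portions). (None, []) means "nothing found";
        # a non-None loader wins and the portion list is then ignored.
        base = os.path.join(self.path, fullname.rpartition(".")[2])
        if os.path.isfile(base + ".py"):
            return DummyLoader(base + ".py"), []   # concrete module found
        if os.path.isdir(base):
            return None, [base]                    # namespace portion only
        return None, []                            # nothing here
```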
Details
- Type: Bug
- Status: Closed
- Priority: Major
- Resolution: Fixed
- Affects Version/s: 1.8.4
- Fix Version/s: 1.8.6, 2.0-beta-3
- Component/s: parser-antlr
- Labels: None
- Number of attachments:
Description
Take this code and put it into the groovy console:
def meth() {
    label:
    assert i == 9
}
Now inspect the AST and navigate to the assert statement. You will see that the assert statement's lineNumber and lastLineNumber are 2, but they should be 3. This is incorrect. It doesn't seem to affect Groovy itself too much, but it is causing an exception in Groovy-Eclipse. See
GRECLIPSE-1270.
The fix is simple enough.
Issue Links
- is related to
GRECLIPSE-1270 IndexOutOfBoundsException with assert keyword in Spock test
Activity
Proposed patch - a slight variation of what Andrew proposed. The code for the statement method is a little strange: each branch of the switch derives its own source positioning, but the method then overrides that information with the positions for the entire node. The more refined info is useful in other contexts.
I guess the patch is ok then
Well, it is more of a philosophical decision I think. At the statement level, we tend to keep track of the whitespace around the statement. Here we would not be doing this. There are no LineColumn checks that test this area. There is only a very minimal SourcePrinter test. I will do another sanity check with some more elaborate SourcePrinter tests before applying the patch. The alternative is that we need to document that any statement may contain a label and it would be up to other tools, e.g. CodeNarc etc. to step around the label. We could provide a utility to make it easier I guess.
From the tooling perspective, including whitespace in AST nodes is problematic (specifically in expression nodes and, to a lesser extent, in statement nodes). There are many situations where trailing whitespace must be worked around.
When there is an easy fix, I patch the groovy compiler and try to send you guys a patch, but often I need some hairy workaround code in Groovy-Eclipse.
Is there a reason why many AST nodes keep trailing whitespace? Or is this largely legacy? (From my investigations, it seems to be built into the parser.)
I think it is mainly because it is not easy to tell the grammar not to include the whitespace, and since we didn't really need that information to a level where trailing whitespace matters, we simply didn't put in the energy to prevent it from happening.
Patch applied - I tried playing around with different source texts to confuse the SourcePrinterTests but couldn't spot any obvious breakages.
Fix: in org.codehaus.groovy.antlr.AntlrParserPlugin.statement(AST) at the end of the method, change:
to:
This problem is coming about because the source location of statement is being set twice: once from within the call to labeledStatment using the correct source location and once again using the source location for just the label in the call to statement.
This leads me to question whether explicitly calling configureAST is necessary at all in the statement method, since all calls that create the different kinds of statements already call configureAST.
Deciding on the best way to implement Inversion of Control in Umbraco has been the source of much musing and head-scratching among the community for as long as I have been in it. When I first arrived at Crumpled Dog, back in May, I was delighted to discover that here, this was a problem they had effectively solved using LightInject. In this article, I hope to be able to share some of the ways you can make it work for you too. I’ll take you through the rationale, the implementation and the benefits of choosing LightInject as your container for IoC.
By way of introduction to Lightinject, let me do a bit of explaining. LightInject is an ultra lightweight IoC container that supports the most common features expected from a service container. But when I first started writing this article, I couldn’t have told you that.
When I decided to write for Skrift, I was faced with a problem some of you can probably identify with. Should I write what I know? Take the ever-so-comfortable route of waxing lyrical about the kind of topic you'd find me bending ears about at the pub; spew forth fifteen hundred words on a rant or a utopic vision so well-trodden that I might just end up looking good? Sounds interesting. Or should I take this article, a situation decidedly more public than most classrooms, as a learning opportunity to write what I want to know? You can probably tell from the title of this very piece that I ditched the former and went with the latter.
Personally, I like a deadline. A deadline pulls things into focus and forces you to get acquainted with something you may have been tempted to put off. Programming is a funny old business too. There are things that I work with, daily, that I don’t fully understand. There are things that I can get working, with a bit of tinkering, that I couldn’t implement from scratch. Most of us work for someone, a client, a boss. Essentially this means that most days I am faced with an experience I’m sure we are all pretty familiar with: something I didn’t build has broken or not broken but needs improving anyway. I need to get in, work out what’s going on and get it working the way we’d like it to be working.
A deadline pulls things into focus and forces you to get acquainted with something you may have been tempted to put off.
This approach has meant that over the course of my so-far pretty short career I’ve gotten elbow deep into a myriad of other people’s code. Some of it has been horrifying - tightly coupled beasts that you fix in one place and it breaks somewhere else. There have been occasions where, much to my chagrin, I’ve had to hash out a solution in keeping with that environment (just as an aside, learning that I wouldn’t be able to rip the guts out of these projects and start all over again every time I began to work on something came as a great shock to me).
On the flip side, and here’s what I want to write about today, I’ve delved into elegant solutions, ready to get involved and found that what I’m looking for doesn’t live where I thought it would. In fact, when I do find it, it doesn’t even look like I thought it would. And when faced with that scenario, I have done what any good developer would do: looked for patterns, sussed out behaviours, stepped through debug mode with an excessive number of breakpoints, done copious amounts of printing to screen and fashioned a solution that ‘looks’ like the rest of the code.
Now in an ideal world, the next logical thing I would do would be to find out what it is I’m looking at. Right? Get acquainted with how it works and why we’ve built this way. The world being far from ideal (I mean, where to even begin on the world?), that’s not always what happens next. More often than I’d like to admit, I’ve moved onto the next task and I made a little note in my ever-growing ‘Things I should Revisit’ document on Google Docs. Well, now is that time and here is what I learnt.
Inversion of control
The architecture of software design characterised by the inversion of the flow of control.
I learned about the SOLID principles when I first started coding. An old colleague and sometime friend (you know who you are) said that it was something best learned early on, in order to keep my code clean as I grow as a developer. He told me I should consider these principles in anything I do, whether it be a small fix or a from-scratch shiny new build. I will happily admit that at that stage in my career as a developer, I was just happy if I could get a thing to work. I knew that there were such things as 'elegant solutions' but I certainly didn't know if mine were shining examples of that or if they were just desperate hacks. So when I first read about the principles and made it all the way to the 'D', I found it perplexing. So with that in mind I'm going to do you all a favour and talk a little bit about what dependency inversion means, philosophically. For the uninitiated, this might just clear some things up for you. The pros can probably skip the next paragraph. Who am I kidding? You'll have already skipped to the code snippets.
Dependency inversion is the idea entities should depend on abstractions wherever possible, rather than concretions. This allows you to decouple your code so that classes, rather than relying on each other, now rely on interfaces. Why would we want to do this? Well, the way it was explained to me, by a very patient colleague, was that if Class A depends on Class B and we decide to make a change to Class B, we are probably going to affect Class A.
This is all well and good, right? But when we are writing code how do we ensure we follow this principle? Well, one way we have done that here at my new(ish) home is by using LightInject as a container for IoC in all of our recent builds.
Getting started the right way
The structure of a build at its inception dictates the way it's going to grow. The ways you will be constricted when a change is requested, the foundation of your future woes. As Umbraco has been developed over the years, it has become easier and easier to ensure that you follow 'best practises' when you build. Umbraco does not include IoC in the source code, and this allows us as developers to implement whichever solution suits us best. The CMS gets better and better at 'getting out of the way', as it were. Now, in version 8, LightInject will be the engine that Umbraco itself uses to implement IoC. This helped us to make the decision as to which engine we should use. Still, we did some research and found that it was the fastest and most lightweight of the solutions that we looked at. After some tinkering (technical term, that is) we found that it suited our purposes beautifully.
For more information on the benchmark tests, have a look here at this fantastic blog on the performance times of various IoC containers.
When we decided to implement we started by playing with a dummy project, and testing the practicalities of the implementation. We got ourselves a new Umbraco project installed with a starter kit to get going. Once that was all good and installed, we enabled Models Builder in Models Factory mode. We created a homepage with a single property - a title with a textstring property editor. And then installed LightInject.
Installing LightInject was a simple process in and of itself. It’s available on Nuget so we just fired up the Nuget console and started the install with this command:
Install-Package LightInject
This installs the Binary version of the engine. Once that’s in place, you can start setting up IoC for the project.
We begin by creating a new class that inherits from IApplicationEventHandler so the code is run on startup of the application. We set up the using statements - in this case we will need both:
using LightInject; using LightInject.Mvc;
To create your container, once LightInject is installed it’s really as simple as just newing one up. In this case,
var container = new ServiceContainer();
Then, you get your assemblies using a foreach statement and, for each of those assemblies, register every concrete controller type with the container. So here we have used the following logic to do that:
foreach (var assembly in AppDomain.CurrentDomain.GetAssemblies())
{
    // gets each of the assemblies and finds the concrete controller types in it
    var controllerTypes = assembly.GetTypes()
        .Where(t => !t.IsAbstract && typeof(IHttpController).IsAssignableFrom(t));

    // registers the types we need with LightInject, one instance per web request
    foreach (var controllerType in controllerTypes)
    {
        container.Register(controllerType, new PerRequestLifeTime());
    }
}
With that set up, there are still a few steps to complete before we get stuck into our shiny new project.
We enable LightInject to use MVC mode:
container.EnableMvc();
We make sure that we allow LightInject to create a new container for each Web request:
container.EnableWebRequestScope();
And we allow WebApi mode too passing in our site’s configuration as we do:
container.EnableWebApi(GlobalConfiguration.Configuration);
And we mustn’t forget to resolve the dependencies using DependencyResolver:
DependencyResolver.SetResolver(new LightInjectMvcDependencyResolver(container));
After the set-up, we can go about the business of registering our interfaces to their concrete models. Like so:
var container = new ServiceContainer(); container.Register(typeof(IEmailManager), typeof(EmailManager));
And then you’re able to get your object in any constructor:
private readonly IEmailManager _emailManager;

public PasswordResetController(IEmailManager emailManager)
{
    this._emailManager = emailManager;
}
Now, the wonderful thing about doing things this way is that once you've registered the interfaces you can call them from anywhere. This means that you only have to instantiate the class once, so if your logic is particularly expensive, it isn't going to hit the project's overall performance too hard. Equally, there's no longer a need for singletons when you're setting up your unit testing.
Broken down like this, it all seems terribly simple, right? I can say from experience that while it took me some time to bend my mind around the whys and hows of this particular way of working, I have found it a pleasure to work with.
So here ends my first technical article. I hope it’s been of use to you. I can certainly say that it’s been of use to me and that, after many talks with colleagues, pages and pages of reading and 1500 or so words of writing, LightInject is one thing that I can now strike from my list.
For further reading on IoC in Version 8, please see below:
- NAME
- DESCRIPTION
- Incompatible Changes
- Core Changes
- Significant bug fixes
- Supported Platforms
- New tests
- Modules and Pragmata
- Utility Changes
- Documentation Changes
- New Diagnostics
- Obsolete Diagnostics
- Configuration Changes
- BUGS
- SEE ALSO
- HISTORY
NAME
perldelta - what's new for perl5.006 (as of 5.005_54)
PERL_POLLUTE_MALLOC
Enabling the use of Perl's malloc in release 5.005 and earlier caused the namespace of system versions of the malloc family of functions to be usurped by the Perl versions of these functions, since they used the same names by default.
Besides causing problems on platforms that do not allow these functions to be cleanly replaced, this also meant that the system versions could not be called in programs that used Perl's malloc. Previous versions of Perl have allowed this behavior; define PERL_POLLUTE_MALLOC in order to get the older behavior. HIDEMYMALLOC and EMBEDMYMALLOC have no effect, since the behavior is due to the change.
Binary Incompatibilities
This release is not binary compatible with the 5.005 release and its maintenance versions.
Core Changes

Some of them produced ancillary warnings when used in this way, while others silently did the wrong thing.
The parenthesized forms of most unary operators that expect a single argument will now ensure that they are not called with more than one argument, making the above cases syntax errors. Note that the usual behavior of:
print defined &foo, &bar, &baz; print uc "foo", "bar", "baz"; undef $foo, &bar;
remains unchanged. See perlop.
Improved qw// operator

The qw// operator is now evaluated at compile time into a true list instead of being replaced with a run time call to split(). This removes the confusing behavior of qw// in scalar context stemming from the older implementation, which inherited the behavior from split().
Thus:
$foo = ($bar) = qw(a b c); print "$foo|$bar\n";
now correctly prints "3|a", instead of "2|a".
pack() format 'Z' supported
The new format type 'Z' is useful for packing and unpacking null-terminated strings. See "pack" in perlfunc.
Significant bug fixes
<HANDLE> on empty files
With $/ set to undef, slurping an empty file returns a string of zero length (instead of undef, as it used to) for the first time the HANDLE is read. Subsequent reads yield undef.
This means that the following will append "foo" to an empty file (it used to not do anything before):
perl -0777 -pi -e 's/^/foo/' empty_file
Note that the behavior of:
perl -pi -e 's/^/foo/' empty_file
is unchanged (it continues to leave the file empty).
pack() format modifier '_' supported
The new format type modifier '_' is useful for packing and unpacking native shorts, ints, and longs. See "pack" in perlfunc.
Supported Platforms
VM/ESA is now supported.
Siemens BS200 is now supported.
The Mach CThreads (NeXTstep) are now supported by the Thread extension.
- Fcntl.
- Math::Complex
The accessor methods Re, Im, arg, abs, rho, and theta ($z->Re()) can now also act as mutators ($z->Re(3)).
- Math::Trig
A little bit of radial trigonometry (cylindrical and spherical) added, for example the great circle distance.
- Time::Local
The timelocal() and timegm() functions used to silently return bogus results when the date exceeded the machine's integer range. They consistently croak() if the date falls in an unsupported range.
Pragmata
Lexical warnings pragma, "use warning;", to control optional warnings.
Filetest pragma, to control the behaviour of filetests (-r, -w, ...). Currently only one subpragma is implemented, "use filetest 'access';", which enables the use of access(2) or equivalent to check permissions instead of using stat(2) as usual. This matters in filesystems where there are ACLs (access control lists): stat(2) might lie, while access(2) knows better.
Utility Changes
Todo.
Documentation Changes
- perlopentut.pod
A tutorial on using open() effectively.
- perlreftut.pod
A tutorial that introduces the essentials of references.
What is a Multimap in C++ STL?
Multimap is similar to map with two additional functionalities:
Multiple elements can have the same or duplicate keys.
Multiple elements can have the same or duplicate key-value pair.
For a better understanding of its implementation, refer to the well-commented C++ code given below.
Code:
#include <iostream>
#include <bits/stdc++.h>
using namespace std;

int main()
{
    cout << "\n\nWelcome to Studytonight :-)\n\n\n";
    cout << " ===== Program to demonstrate the concept of Multimap (Part 1), in CPP ===== \n\n\n";
    cout << " Multimap is similar to map with two additional functionalities: \n1. Multiple elements can have same keys or \n2. Multiple elements can have same key-value pair.\n\n";

    // (reconstructed) the original declaration and insert statements were lost
    // in extraction; these sample pairs, with a duplicate key, are illustrative
    multimap<int, int> m1;
    m1.insert(pair<int, int>(1, 10));
    m1.insert(pair<int, int>(2, 20));
    m1.insert(pair<int, int>(2, 30));

    multimap<int, int>::iterator i;
    cout << "\n\nThe elements of the Multimap m1 are: ";
    for (i = m1.begin(); i != m1.end(); i++)
    {
        cout << "( " << i->first << ", " << i->second << " ) ";
    }
    cout << "\n\n\n";
    return 0;
}
Output:
We hope that this post helped you develop a better understanding of the concept of a Multimap Container in STL and its implementation in C++. For any query, feel free to reach out to us via the comments section down below.
Keep Learning : )
* Version 6.1 of the Boehm-Weiser GC has been imported.
* Some statements of the form "#endif some_word_here" have been fixed, because most modern compilers complain about them.
* The headers of the garbage collector are installed together with those from ECL, so that users writing extensions for ECL may benefit from them.
--
Max-Planck-Institut fuer Quantenoptik    +49/089/32905-127
Hans-Kopfermann-Str. 1, D-85748
Garching b. Muenchen, Germany    worm@...
I have been abroad, and then again ill for several days, and see that there are some reported errors in ECL. Ok, I will fix them! But I also found a few errors of my own :-) thanks to Edi Weitz's CL-PPCRE package, and have committed some changes to the CVS repository. Here's a summary:
* Errors fixed:
- The PCL relied on the compiler to optimize certain method
combinations. However, the compiler is not always present, and
therefore it is safer to use interpreted functions instead.
* Visible changes:
- No "Bye" message in QUIT.
*") => #P"FOO"
This last error is rather important, as it caused problems with DEFSYSTEM.
Have fun!
Juanjo
Juan Jose Garcia Ripoll wrote:
> I have been abroad, and then again ill for several days, and see that there
> are some reported errors in ECL. Ok, I will fix them!
I've been working on an exhaustive ansi test suite for gcl. I've got
it to run under ecl. It's reporting 1542 test failures on
10785 tests.
To run the test suite, download the gcl/ansi-tests module from the
gcl cvs repository (see for directions).
Go to the ansi-tests directory, start up ecl, and enter (load "gclload.lsp").
This will load (some with compilation) various files, then execute the tests. In at least one test the debugger is improperly invoked; just
tests. In at least one test the debugger is improperly invoked; just
enter :c to continue.
Instead of submitting bug reports to the sourceforge site, it's more
convenient for everyone if the maintainer(s) download and run the
test suite themselves.
This test suite is being used by various CL development teams besides
gcl to bring their lisps more fully into ANSI compliance. The test suite
does not yet cover all of the spec, but that parts that are covered
are covered in great detail. As a result, the test suite has been
finding problems that previous public tests have missed. I estimate
the test suite is about half done.
Paul Dietz
dietz@...
The latest additions to cvs:
* The changes to CLOS which I mentioned long ago are now in CVS. Further
improvements will probably have to wait until I come back from holidays.
* I have fixed a bug in FLOAT-DIGITS: the number of _decimal_ digits would be
output. FLOAT-PRECISION still remains to be fixed. Does anybody know how
to retrieve this information in a portable way?
* I have ported CMUCL's FORMAT to ECL. It can be selected using
--with-cmuformat at configuration time and it shows as :CMU-FORMAT in
*features*. I warn you, though, that this implementation is LARGE. It
currently adds over 60k to ECL. It is there because it is slightly better
than ECL's old routine, and because I would like to work on it making it
smaller and faster.
* A dumb FORMATTER has been implemented. It produces just a function which
calls FORMAT, and it works with and without --with-cmuformat.
I noticed that ECL no longer builds on CYGWIN. I currently don't have this
environment on my laptop and cannot check it. The problem seems to be related
to the header <inttypes.h>. Could somebody have a look at the problem?
Regards,
Juanjo
* System design:
- The bytecodes compiler now works with character arrays. Bytecodes
are thus 8 bits large, while their arguments are 16 bits large.
Lisp objects referenced in the code are kept in a separate array
to simplify garbage collection. This strategy limits the size of
bytecode objects to about 32000 bytes and about 32000 constants,
but reduces the use of memory by about 25%.
- Macros are implemented in C as functions with two arguments.
Argument checking is thus left to funcall() and apply(), saving
space (3k).
- AND, OR and WHEN are now just macros, without any special
treatment in the bytecodes compiler.
--
Max-Planck-Institut fuer Quantenoptik +49/(0)89/32905-345
Hans-Kopfermann-Str. 1, D-85748
Garching b. Muenchen, Germany Juan.Ripoll@...
Here are the latest things committed to CVS:
+ Julian Stecklina's patch to fix a problem with FORMAT.
+ The class reinitialization and redefinition protocol. It is now possible to
redefine classes and to make instances obsolete. A sample session follows.
+ It is now possible to define a class with a superclass that has not yet been
defined. These FORWARD-REFERENCED-CLASSES are automatically handled by the
system, and they are converted to normal classes when the user supplies the
right definition. Another sample session follows.
+ Fixes in many integer routines including GCD, LCM and LOGBITP.
Best regards,
Juanjo
------------------------------------------------------------
[SESSION 1: Class redefinition]
> (defvar *class1* (defclass faa () (a b c)))
*CLASS1*
Top level.
> (setf *a* (make-instance 'faa))
#<a FAA>
Top level.
> (setf (slot-value *a* 'a) 2 (slot-value *a* 'b) 3)
3
Top level.
> (defvar *class2* (defclass faa () (c b)))
;;; Warning: Redefining class FAA
*CLASS2*
Top level.
> (slot-value *a* 'b)
3
Top level.
> (slot-exists-p *a* 'a)
NIL
Top level.
> (eq *class1* *class2*)
T
Top level.
------------------------------------------------------------
[SESSION 2: Forward referenced classes]
> (defclass faa (foo) (b c))
;;; Warning: Class FOO has been forward referenced.
#<The STANDARD-CLASS FAA>
Top level.
> (defclass foo () (a))
;;; Warning: Redefining class FOO
;;; Warning: Redefining class FAA
#<The STANDARD-CLASS FOO>
Top level.
> (make-instance 'faa)
#<a FAA>
Top level.
> (slot-exists-p * 'a)
T
Top level.
+ The sources of the portable CLX library, which is maintained by Dan
Barlow and hosted at Telent (). This proved to be much
easier; it is shared by several implementations (SBCL, OpenMCL, and now
ECL), and I got CVS right so it is more convenient for me.
+ The C routines for CHAR-NAME and NAME-CHAR had a bug. They should
return their values through the lisp values stack, not just a plain C
"return" form. This only showed up on *BSD boxes.
+ Several bugs in the C code of the multithreaded version of ECL.
Best regards,
Juanjo
--
Max-Planck-Institut fuer Quantenoptik +49/(0)89/32905-345
Hans-Kopfermann-Str. 1, D-85748
Garching b. Muenchen, Germany Juan.Ripoll@...
I have uploaded some changes before I go on holidays for a couple of
weeks. They are the following:
+ The GMP library and the Boehm-Weiser GC are now renamed as libeclgmp.a
and libeclgc.a, to avoid conflicts with local libraries when building
ECL. Hopefully headers will not interfere.
+ Some fixes in the compiler. I am still working on some major issues
regarding LABELS/FLET, which is the reason why there will not be a
release this summer.
+ The inliner for EXPT now produces safer code.
Regards,
Juanjo?
Paul
Paul F. Dietz wrote:
>?
Definitively. First I thought there were only minor issues, but I have
discovered that the old compiler, which was thought for a very
straightforward compilation ala GCL (i.e. all functions are closures,
and there is only one class of environment), does not handle well
situations as the following one:
(lambda (a b c)
(flet ((f1 () a))
(flet ((f2 ...
(flet ((f3 ...
(f1)))
(mapcar #'f2 ...)))
The compiler, as of now, finds F1 and decides that it needs no closure,
because the variables it accesses are just "lexically" closed and they
can be accessed via a pointer that the function receives. However, after
processing the second flet, the compiler should notice that F1 is
invoked from within a closure (F2), and thus the references to the
variables A, B, C, etc, cannot be done by a mere "pointer", but rather
these variables must form part of a real full-fledged environment.
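The shape of the problem is easy to mimic in any language with closures; here is a Python rendering of the Lisp sketch above (my own translation, not ECL code):

```python
def outer(a, b, c):
    def f1():
        # f1 reads a variable from outer's frame
        return a

    def f2(x):
        # f2 is a genuine closure that calls f1
        return f1()

    # Because f2 escapes outer() via map, the environment holding
    # a, b and c must outlive the call frame -- exactly the property
    # the old compiler failed to notice for f1's references.
    return list(map(f2, [1, 2, 3]))
```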
Your test suite, fortunately, is full of funny code like this ;-) Thanks a
lot for your work on it.
Regards,
Juanjo
--
Max-Planck-Institut fuer Quantenoptik +49/(0)89/32905-345
Hans-Kopfermann-Str. 1, D-85748
Garching b. Muenchen, Germany Juan.Ripoll@...
* Introduced --with-system-{boehm,gmp}. --enable-local-{boehm,gmp} still
work but are deprecated.
* Fixed the configuration flags for the different modules. Now, if the
user does not select them, they are not built/installed (There were
some problems with obsolete shell statements like
test ${option}
instead of
test ${option} = yes
* Backquote, comma, comma-at, and comma dot (` , ,@ ,.) are now implemented
using a common macro EXT:UNQUOTE. By default, the reader produces forms
using this macro, which print as they were read
(format nil '`(foo ,@a)) => "`(foo ,@a)"
and it is at evaluation time that they are expanded into calls to LIST,
LIST*, APPEND and NCONC.
* The previous implementation seems to solve a problem with the interaction
between #n# and backquote. For instance, the following form
`(foo #1=(faa ,a) #1#)
now produces the right expansion.
* In pathnames, all strings are converted to simple strings without
fill pointers because otherwise routines like probe-file, chdir, etc
which deal with the C world, become too complicated.
Hopefully after these fixes I will be able to continue working on a port
of Slime ;-)
Juanjo
>>>>> "Juan" == Juan Jose Garcia Ripoll <lisp@...> writes:
[...]
Juan> Hopefully after these fixes I will be able to continue working on
Juan> a port of Slime ;-)
I was about to ask about this :-) This IMHO would significantly boost
the usefulness of ECL.
I am amazed at the quick response, thanks for all the fixes.
--J.
* Fixed configure.in: --with-system-* did not always set the right flags.
* Implemented slot-definition objects. This is an important part of the
MOP which allows one to query the slots of a class and alter their order
in a structured way.
Hmm... Funny how a horrible change with lots of bootstrap problems ends
up described in such a small paragraph :-)
Juanjo
P.S.: I will be away in the following two weeks, with maybe ocasional
access to the internet.
These are the latest fixes committed to CVS:
+ Some bugs in the Mingw32 build fixed (TCP sockets still broken)
+ DEFMACRO accepts again lambda lists of the form (arg1 . arg-rest)
+ ADJUST-ARRAY now works with strings and remembers the element type of
the array.
+ Several fixes in the C code generator for closures.
Regards,
Juanjo
+ Version 6.5 of the Boehm-Weiser garbage collector imported.
+ (FLOAT n/m) works even when "n" and "m" are extremely big integers
which cannot be represented as floating point numbers.
+ COMPILE now always outputs three values, even when building the binary
file succeeded.
+ COMPILE now honors the value of :OUTPUT-FILE, including the file name
extension. To comply with this, whenever LOAD finds a file name with an
unknown extension, such as "fasl", "bin", etc, it tries first to load it
as a binary file.
+ VECTOR-PUSH-EXTEND and ADJUST-ARRAY now use recursive functions to
traverse multidimensional arrays and do not cons when copying the
content of the original array.
+ Functions ecl_copy/reverse_subarray abstract some destructive
operations on arrays, much like BLAS abstracts handling of vectors in
FORTRAN. The sequence functions and ADJUST-ARRAY are now based on these.
+ The inliner expansions for ROW-MAJOR-AREF and (SETF ROW-MAJOR-AREF)
have been fixed.
Please report any problems with these changes.
Regards,
Juanjo
Daniel Crettol wrote:
>I just checked the CVS repository and I found that the fix I submitted
>the 06/10/05 has not been applied.
>Is there any reason ?
>
Sorry, my e-mail program (mozilla) marked it as spam and it ended up in the
Junk folder. I had a look at it, though, and it is not so simple...
I am still thinking about the right solution.
Regards,
Juanjo.
I have an FFI interface to SDL-TTF that works without problems, once my
little patch was applied.
On Tue, Jun 28, 2005 at 12:01:28PM +0000, Maciek Pasternacki wrote:
>.
--
Courage, fuyons...
Maciek Pasternacki wrote:
The point is not that. ECL always stores L+1 characters in a string,
where L is the expected size and the last character is enforced to be zero.
The problem is that there are strings in lisp which seem to have less
characters because they have fill pointers. Take, for instance this example
> (progn
(setf *a* (make-array 4 :element-type 'character :fill-pointer 0
:initial-element #\Space))
(vector-push #\h *a*)
(vector-push #\i *a*)
(vector-push #\! *a*)
*a*)
"hi!"
Well, the output string seems to have three characters but it has 4! So
when you pass it to the C world, it sees "hi! " and not "hi!" because
the terminating zero is after the fourth character, not after the #\! as
some people would expect. This problem is not so artificial: it may
happen when you call FORMAT to produce a string, when you concatenate
sequences, etc. I do not remember many functions which are forced to
output simple-string.
My question is not so much as to whether we should protect users or not.
I just want to come up with a solution which is compatible with UFFI and
with people's expectations. If the solution is to set the zero after the
'!' then ok, I can fix the function that takes pointers out of strings
to produce copies when required.
[...After some thinking...]
I think I have found the right fix. Reading the UFFI manual, C-STRING is
expected to be some type of string that can be passed to the C world. If
one expects to have some "buffer" which can be written to by a C
routine, then one has to create a C-STRING. Ok then the solution is to
enforce that these C-STRINGs be a lisp simple-string. In other words, a
string which has no fill pointer and has as many characters as the user
expected. Then ECL's rule that all strings have a null character at the
end will be enough.
The fix has been committed to CVS. Please raise your hands if I am
allowed to close the associated bug report in Sourceforge.
Regards,
Juanjo
Hi,
a major reorganization of the compiler is taking place. Roughly, the old
{K,G,E}CL lisp-to-C translators were designed as three layers, called
T1, T2 and T3, which do
(T1) Process each top-level form and translate it into some internal
representation (C1FORMs).
(T2) Process the C1FORMs, removing unreachable code and finally
producing C/C++ code for each form.
(T3) Similar to T2, but for each of the local/global functions which
were created in each of the toplevel forms.
The major problem is that, as you see, everything is organized around
the notion of toplevel form. For each of these, there is a processing
function which compiles it. However, many of the lisp forms can appear
both as toplevel and non-toplevel: for instance PROGN, SYMBOL-MACROLET,
or even DEFUN. This leads to a horrible duplication of code and with
time it has meant that the improvements for non-toplevel forms have not
been propagated to the toplevel compiler.
Right now I am working on reorganizing/redesigning the compiler so that
there is no notion of toplevel form, but just of lisp form. The idea is
that in a near future you will be able to write code as follows
(with-compiler-environment
  ;; Process all forms into C1 intermediate representation
  (let* ((*compile-toplevel* 'T)
         (c1forms (mapcar #'lisp-to-c1form myforms)))
    ;; Finally output the code.
    (with-compiler-output (:c-file "foo.c" :h-file "foo.h")
      (mapc #'compile-c1form c1forms))))
Of course few people are going to use this, but the idea is that the
compiler should have a very clear, very simple structure, where the fact
that a form is toplevel is signified only by a global variable.
As of now, the code duplication in the T1 phase has been eliminated, and
this has resulted in several fixes for the EVAL-WHEN forms. Furthermore,
there is no longer need for special processing of DEFUN forms, and
CLINES is implemented now as a very simple macro. There's still quite a
lot to do, such as simplifying everything down to the LISP-TO-C1FORM and
COMPILE-C1FORM interface, introducing new macrology, using the condition
system for signalling errors/warnings, and unifying the environments for
toplevel and nontoplevel forms.
Although I am conducting extensive testings before each minor commit, I
expect some instability in CVS. If this is a concern, please stay with
code from before 3rd July 2005.
Regards,
Juanjo
Latest changes. Notice that the new code for processing command line options
introduces incompatible changes, but allows for compilation of multiple files.
* Design:
- Simplified the structure of the frame stack, removing redundant fields.
- Reworked the structure of the lexical environment to accelerate access to
variables.
- New hash routine, similar to SBCL's, which is faster and leads to fewer
collisions between similar strings.
* Visible changes:
- The code for handling command line options has been redesigned. Now multiple
-compile options are allowed; the -o/-c/-h/-data options have to come before
the associated -compile; we introduce a new -rc option to force loading
initialization files; errors during initialization are intercepted and cause
ECL to abort.
I am working on a closification of the compiler code. Maybe after that I will
find some time to review the callbacks code.
Regards,
Juanjo
Hi,
I have commited several fixes to the reader, plus a fix to the type
subsystem which makes all complex types equivalent to (COMPLEX REAL).
With this, other minor fixes, and a correction to the ANSI test suite,
the number of failures drops down to 43/21344 (0.2%), which is not bad,
I think.
In any case, I will be traveling next week so that means I will not be
able to read emails, solve bugs, etc.
Regards,
C++ REST SDK
There are lots of ways to process HTTP requests and responses in C++, right from barebones WinInet calls (not recommended!) to Boost or POCO. In this article, we shall use Microsoft’s C++ REST SDK. This was available as a beta with the name Casablanca (version 0.6) during Visual Studio 2012, and also included the Windows Azure SDK for a while which has now been split off into a separate SDK. Visual Studio 2013 ships with version 1.0 of the C++ REST SDK and newer versions (2.0 at the time of writing) are available on CodePlex.
REST stands for Representational State Transfer and although it has many implementations, for our purposes it is essentially a simple way of issuing requests and commands to a remote server via HTTP calls. You can use the GET method to query tables, POST to insert rows or call remote functions, PUT to update rows and DELETE to delete rows. In other words, it is conveniently exactly like the interface supplied by the OData endpoints on our LightSwitch project (and in the case of remote functions, calling any WCF RIA Services we make – see parts 2 and 3 for details and examples).
While you can download and configure the SDK in your projects manually, the easiest way is to use the NuGet package manager and add a reference to the C++ REST SDK (search for ‘Casablanca’ in the Manage NuGet Packages search box to find it) in your project. This will download and install the SDK and add all the appropriate include and library paths so you can get going quickly.
NOTE: The C++ REST SDK only supports dynamic linking (you must compile with the /MDd or /MD flags – this is the default). Since you may be integrating the code with game engines or other libraries that only support static linking, I have produced an article explaining how to re-compile the C++ REST SDK to support static linking. It is not necessary for this tutorial, but note if you try to statically link the code below without re-compiling the SDK, it will crash with debug assertion errors.
Make a new, empty C++ project in Visual Studio and reference the SDK in whichever way you prefer, then we can get cracking!
PPLX Primer
PPLX is a Linux-compatible version of PPL included with the C++ REST SDK. PPL stands for Parallel Patterns Library and is an SDK which allows for some neat syntactical sugar to create multi-threaded applications. Everything in the C++ REST SDK is based on PPL tasks and asynchronous operation, and as such there is a bit of a steep learning curve for those not used to this kind of programming. You don’t need to know everything to use the SDK, but to help things along, I’ll give a brief introduction into the basic techniques required.
As it happens, a network SDK based on PPL is very useful for our purposes because we really don’t want our game to stall while it is waiting for the server to respond. Usually we take care of this by making network communication run in the background by starting additional threads so that the game’s main thread can continue uninterrupted. With PPL, the thread management is taken care of for us automatically, making things much easier.
PPL Tasks
Instead of performing a computation directly on our main thread, we can wrap it in a pplx::task<T> object (where T is the return type of the function encapsulating the task, ie. the variable type of the task’s output). The task will run automatically in another thread.
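If PPL tasks are new to you, the standard library's std::future offers a rough analogy: std::async launches work on another thread and returns a handle whose result you collect later, just as a pplx::task<T> does with get(). There are no continuations in this sketch, and the function names are illustrative, not part of either SDK:

```cpp
#include <future>
#include <thread>
#include <chrono>

// A long-running computation we don't want to run on the main thread.
int slow_add(int a, int b)
{
    std::this_thread::sleep_for(std::chrono::milliseconds(50));
    return a + b;
}

// Launch the work asynchronously; the call returns immediately and the
// result is fetched later, blocking only at that point - the same shape
// as pplx::task<T> with .get().
std::future<int> start_slow_add(int a, int b)
{
    return std::async(std::launch::async, slow_add, a, b);
}
```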
The C++ REST SDK has many functions which return tasks instead of the result directly.
For example the http_client object’s request() method returns a pplx::task<http_response> object, rather than an http_response directly. This means that when you call request() to execute an HTTP request, it returns immediately (doesn’t block) and the task automatically starts to run in another thread.
For example:
http_client client("");
client.request(methods::GET, L"/ApplicationData.svc/SomeTable?$format=json");
The above code fetches the URL in another thread, while the call to client.request() returns immediately, allowing execution to continue.
Notice that we have not actually made mention of pplx::task<http_response> in the code itself. We don’t actually generally need to deal with tasks directly unless we’re doing something special. It is assumed that the thread which receives the actual HTTP response will signal the application that the request has completed and provide the result (we’ll see how to do this below).
Task waits and results
If you want to wait for a task to finish – blocking the current thread – use wait() on a task:
client.request(methods::GET, L"/ApplicationData.svc/SomeTable?$format=json").wait();
If you want to get the result of a task – blocking the current thread until it is ready – use get() on a task:
http_response response = client.request(methods::GET, L"/ApplicationData.svc/SomeTable?$format=json").get();
Continuations
A continuation is a construct which indicates what should happen when a task is completed. You can add continuations to tasks by using the then() method to generate a new task which includes the continuation. You can chain as many then() calls as you want together.
For example:
client.request(methods::GET, L"/ApplicationData.svc/SomeTable?$format=json")
.then([](http_response response)
{
    // do something with the HTTP response
});
then() takes a single argument which is the function to call when the task completes. In the above value-based continuation, the target function receives a single argument which is the result of the task. By using C++11 lambda functions as above, and chaining together continuations with then(), we can essentially write the code serially in layout even though it really executes in parallel.
NOTE FOR EXPERTS: The target function executes in the same thread as the task by default, and this can sometimes be a problem. The SDK provides a way for you to indicate that the continuation should be run on the original thread from where the task was created using concurrency::task_continuation_context::use_current(), but since this is only supported in Windows 8, we show another way to deal with this problem below.
You can also create a task-based continuation, where the target function receives a new task which wraps the result of the previous task, instead of receiving the result of the previous task directly:
client.request(methods::GET, L"/ApplicationData.svc/SomeTable?$format=json")
.then([](pplx::task<http_response> responseTask)
{
    // do something with the HTTP response task
    http_response response = responseTask.get();
});
The main reason to use this for us is exception handling. If the server is down or the user isn't connected to the internet, attempting to generate an http_response will cause an http_exception exception to be thrown. In a value-based continuation there is no way to handle this, so we have to wrap all our network task generation calls from the main thread in try/catch blocks, but in a task-based continuation we can just put the try/catch block inside the continuation and keep things tidy. More on this below.
A note on strings
The C++ REST SDK uses the platform string format for virtually everything involving strings. This basically means the default string format on your PC (ANSI, UTF-8, Unicode etc.). This can make things a bit tricky if you're used to just using std::string, std::cout and so forth, because most machines default to a wide string format (as opposed to a 'narrow' 8-bit format) nowadays. Most of the C++ library string manipulation functions have wide versions with the same names as the narrow ones but with the letter 'w' in front, eg. std::wstring, std::wcout and so on. You can hard-code for this if you want (in which case remember to pre-pend all string literals with L to make them long/wide), or you can use some syntactical sugar:
The C++ REST SDK provides utility::string_t (and utility::stringstream_t etc.) which maps to either std::string or std::wstring depending on your environment. You can use the _XPLATSTR() macro (or just U() as a shortcut) to convert any string literal into the platform default.
In the code below, I have just hard-coded everything to use wide strings.
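If you do need to cross between narrow and wide strings yourself, the SDK's utility::conversions functions are the right tool since they handle full Unicode; purely for illustration, here is what a naive ASCII-only conversion looks like (these helper names are our own, not part of the SDK):

```cpp
#include <string>

// Naive ASCII-only widening: each char becomes the wchar_t with the
// same code point. Only safe for 7-bit ASCII input.
std::wstring widen_ascii(const std::string &s)
{
    return std::wstring(s.begin(), s.end());
}

// Naive narrowing: anything outside the ASCII range is replaced with '?'.
std::string narrow_ascii(const std::wstring &s)
{
    std::string out;
    for (wchar_t c : s)
        out += (c < 128) ? static_cast<char>(c) : '?';
    return out;
}
```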
Walkthrough example: Create a new user
Let’s make a bare-bones console application which creates a new user. First, the boilerplate code:
#include <Windows.h>
#include <cpprest/http_client.h>
#include <cpprest/uri.h>
#include <iostream>

using namespace concurrency::streams;
using namespace web;
using namespace web::http;
using namespace web::http::client;

using std::string;
using std::cout;

// at the end of your source file:
int main()
{
    string UserName = R"##(jondoe)##";
    string FullName = R"##(Jon Doe)##";
    string Password = R"##(somepassword12345)##";
    string Email = R"##(jon@doe.com)##";

    CreateUser(UserName, FullName, Password, Email).wait();
}
If you installed the C++ REST SDK using NuGet, the include path for the SDK’s header files will be cpprest/* as shown above, otherwise you may need to change this. The namespaces are all defined by the SDK. Replace the account details in main() with pleasing defaults! Note that CreateUser() will return a PPL task, so we call wait() on it to make sure the application doesn’t exit before the server has responded to the request.
NOTE: I have used C++11 raw string literals above. Earlier versions of Visual Studio do not support this, so you must replace them with normal string literals.
Authentication
Our LightSwitch server uses HTTP Basic auth and the authentication process is trivially handled with the C++ REST SDK as follows:
http_client_config config;
credentials creds(U("__userRegistrant"), U("__userRegistrant"));
config.set_credentials(creds);

http_client session(U(""), config);
Recall that we made an account __userRegistrant with special privileges in part 2 to allow the anonymous creation of new player accounts. To log in an actual user later on, simply replace the arguments to credentials‘ constructor with the user’s username and password.
Remember that HTTP is a stateless protocol, so the correct username and password must be supplied with every request. There are no session keys. Since HTTP Basic auth involves the transmission of the password in plaintext (unencrypted), it is critical that you use an SSL-encrypted connection to the server for authenticated requests; be sure to use https:// rather than http:// in your URL paths to make sure SSL is turned on. If you are using Windows Azure to host your LightSwitch server, SSL is configured and enabled for you when you provision a new web site, otherwise you will need to do the server-side configuration yourself.
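For the curious, here is roughly what the credentials object puts on the wire: HTTP Basic auth is just "Basic " followed by base64(username:password) in the Authorization header, which is why interception is trivial without SSL. This sketch is purely illustrative; the SDK builds the header for you:

```cpp
#include <string>

// Standard base64 encoding of an arbitrary byte string.
std::string base64_encode(const std::string &in)
{
    static const char tbl[] =
        "ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/";
    std::string out;
    int val = 0, bits = -6;
    for (unsigned char c : in) {
        val = (val << 8) + c;
        bits += 8;
        while (bits >= 0) {
            out += tbl[(val >> bits) & 0x3F];
            bits -= 6;
        }
    }
    if (bits > -6)
        out += tbl[((val << 8) >> (bits + 8)) & 0x3F];
    while (out.size() % 4)
        out += '=';
    return out;
}

// The value of the HTTP "Authorization" header for Basic auth:
// "Basic " + base64(username + ":" + password).
std::string basic_auth_header(const std::string &user, const std::string &pass)
{
    return "Basic " + base64_encode(user + ":" + pass);
}
```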
Building the request
We construct the request to create a user as follows:
http_request request;

string requestBody = "{UserName:\"" + UserName + "\",FullName:\"" + FullName +
                     "\",Password:\"" + Password + "\",Email:\"" + Email + "\"}";

request.set_method(methods::POST);
request.set_request_uri(uri(U("/ApplicationData.svc/UserProfiles")));
request.set_body(requestBody, L"application/json");
request.headers().add(header_names::accept, U("application/json"));
We construct the JSON request manually in requestBody (this isn’t good practice and later we’ll see how to do this properly; for one thing, if one of the user-supplied fields contains a backslash, the above code will fail to encode it properly), set the HTTP method to POST, set the endpoint to the UserProfiles table, specify that the request is in JSON (rather than XML) and that we also want the response in JSON too.
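As a stopgap until we switch to building json::value objects properly, a minimal escaping helper (our own, not part of the SDK) shows the kind of processing the string concatenation above is missing; note it does not handle control characters below 0x20:

```cpp
#include <string>

// Escape the characters that would corrupt a JSON string value if
// concatenated in raw, as requestBody does above.
std::string escape_json(const std::string &s)
{
    std::string out;
    for (char c : s) {
        switch (c) {
            case '\\': out += "\\\\"; break;
            case '"':  out += "\\\""; break;
            case '\n': out += "\\n";  break;
            case '\t': out += "\\t";  break;
            default:   out += c;
        }
    }
    return out;
}
```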
Processing the response
We now create a task for the request with a continuation to deal with the response. We start by getting the HTTP response status code and body:
return session.request(request).then([] (http_response response)
{
    status_code responseStatus = response.status_code();
    std::wstring responseBodyU = response.extract_string().get();
(Note that calling extract_string() to get the response body text returns a task rather than the string directly)
Although it shouldn’t normally be necessary, if for some reason you need to convert the response into a narrow string, you can do so as follows:
string responseBody = utility::conversions::to_utf8string(responseBodyU);
Next we look at the 3-digit HTTP response status code. Typically this is 200 OK when successfully fetching a web page, 404 if the page is not found and so on. OData standardizes on a few codes:
- 200 OK – the query was executed succesfully and the result is in the response body
- 201 Created – a row was successfully inserted into a table (status_codes::Created)
- 500 Internal Error – there was a problem with the input data (status_codes::InternalError)
- 401 Unauthorized – the user’s username or password was incorrect (status_codes::Unauthorized)
- … + others
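A small helper (our own, for debugging output) can turn the codes above into friendly messages; anything else falls through to a generic string:

```cpp
#include <string>

// Map the OData status codes listed above to short descriptions.
std::string describe_status(int code)
{
    switch (code) {
        case 200: return "OK - query executed successfully";
        case 201: return "Created - row inserted";
        case 401: return "Unauthorized - bad username or password";
        case 500: return "Internal Error - problem with the input data";
        default:  return "Unexpected HTTP status " + std::to_string(code);
    }
}
```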
So first we’ll check to see if the user was created:
if (responseStatus == status_codes::Created)
    cout << "User created successfully." << std::endl;
If not, we try to find out why. In the case of status code 500, the LightSwitch server returns some XML with error codes and descriptive error text. The <Id> tag contains the error code enum value. This doesn't change regardless of the server's LightSwitch version or locale so you should prefer to inspect this tag when deducing which error has occurred. You can use an XML parser if you want, but it's much simpler to just do a brute-force string search:

else if (responseStatus == status_codes::InternalError)
{
    // Brute-force search for the error code inside the <Id> tag
    size_t idStart = responseBody.find("<Id>");
    size_t idEnd = responseBody.find("</Id>");

    if (idStart != string::npos && idEnd != string::npos)
        cout << "Error code: "
             << responseBody.substr(idStart + 4, idEnd - idStart - 4) << std::endl;
    else
        cout << "An internal server error occurred." << std::endl;
}
If the status code was neither 201 (Created) or 500 (Internal Error), something else happened so just dump out the information for debugging purposes:
else
{
    cout << "Unexpected result:" << std::endl;
    cout << responseStatus << " ";
    ucout << response.reason_phrase() << std::endl;
    cout << responseBody << std::endl;
}
}); // this line ends the continuation
} // this line closes the CreateUser() function
And that’s it. If you now run the example numerous times with various valid and invalid username/password combinations, try changing the request URI to one that doesn’t exist on the server and so forth, you should find that it all behaves exactly as you would expect – as long as the server is up and you’re connected to the internet.
Walkthrough example: Fetch a user’s profile
Let us now write code to fetch a user’s profile. First add the following code to the previous example:
// in using namespace declarations:
using namespace web::json;

// in main():
std::wstring profileUserName = LR"##(jondoe)##";
std::wstring profilePassword = LR"##(somepassword12345)##";

GetProfile(profileUserName, profilePassword).wait();
First we’ll create an http_client with the user’s login credentials:
pplx::task<void> GetProfile(std::wstring UserName, std::wstring Password)
{
    http_client_config config;
    credentials creds(UserName, Password);
    config.set_credentials(creds);

    http_client session(U(""), config);
Notice that the constructor for credentials only allows platform strings so we had to use wstring for the argument types here.
Querying a database table is much simpler than inserting one because we don’t need to set up a POST request body or supply additional HTTP headers, so we can use a simple overload of request() as follows:
return session.request(methods::GET, L"/ApplicationData.svc/UserProfiles?$format=json")
We include a parameter in the GET query indicating that we want the response in JSON format.
When we process the response, we first check for errors:
.then([] (http_response response)
{
    status_code responseStatus = response.status_code();

    if (responseStatus == status_codes::Unauthorized)
        cout << "The username or password is incorrect." << std::endl;

    else if (responseStatus != status_codes::OK)
    {
        cout << "Unexpected result:" << std::endl;
        cout << responseStatus << " ";
        ucout << response.reason_phrase() << std::endl;
    }
We check for the condition that the authentication failed (error code 401 – Unauthorized) and for any other unexpected HTTP status code in the response. If everything is ok, we proceed to extract the relevant data from the JSON response:
else
{
    json::value responseJ = response.extract_json().get();
    json::value &profile = responseJ[L"value"][0];

    std::wcout << "Full name: " << profile[L"FullName"].as_string() << std::endl;
    std::wcout << "Email    : " << profile[L"Email"].as_string() << std::endl;
}
}); // this line ends the continuation
} // this line closes the GetProfile() function
Whereas before we used extract_string() to get the text of the HTTP response body, here we use extract_json() instead (returns pplx::task<json::value>), which converts the response text into a json::value object.
When you query a LightSwitch table in JSON format, what you get back is a single object containing two items: odata.metadata which you can safely ignore, and value which contains the query result. Specifically, value is an array with one element per retrieved row, and each element is an object which has one property for each field in the retrieved row. json::value has an overloaded [] operator which lets us retrieve items using standard C++ syntax, so the code responseJ[L"value"][0] returns a json::value representing the first (and in this case, only) retrieved row.
When you pull a value out of a json::value via an indexer as above, what you get is another json::value (think of it as tree traversal). To convert the leaf textual values to actual C++ strings, use as_string() as shown above. There are various as_*() functions for the different types you might want to convert to.
The final code retrieves the specified user’s profile and prints their full name and email address to the console.
Dealing with no internet connection
If there is no internet connection, the task which generates an http_response will throw an http_exception. The simplest way to deal with this is to wrap all of the relevant code (not just the task-generating code; that on its own won’t raise an exception) in a try/catch block as follows:
try
{
    CreateUser(UserName, FullName, Password, Email).wait();
}
catch (http_exception &e)
{
    if (e.error_code().value() == 12007)
        std::cerr << "No internet connection or the host server is down." << std::endl;
    else
        std::cerr << e.what() << std::endl;
}
Error code 12007 is defined somewhere in the Windows API as
ERROR_INTERNET_NAME_NOT_RESOLVED – in other words a DNS failure, which is what is likely to happen if the user’s internet connection is off or has failed. We simply check for this error code so we can print a meaningful error message, or print the error message supplied with the exception if something else went wrong.
Obviously, wrapping everything in error-handling code like this creates a lot of repetition and isn’t very readable or maintainable. A better way is to use a task-based continuation by changing code like this:
return session.request(request).then([] (http_response response) { ...
to:

return session.request(request).then([] (pplx::task<http_response> responseTask)
{
    try
    {
        http_response response = responseTask.get();
        // ... process the response as before ...
    }
    catch (http_exception &e)
    {
        if (e.error_code().value() == 12007)
            std::cerr << "No internet connection or the host server is down." << std::endl;
        else
            std::cerr << e.what() << std::endl;
    }
});
Now, you don’t have to worry about catching exceptions from your main code.
Here is the full source code so far:
#include <Windows.h>
#include <cpprest/http_client.h>
#include <cpprest/uri.h>
#include <iostream>

using namespace concurrency::streams;
using namespace web;
using namespace web::http;
using namespace web::http::client;
using namespace web::json;

using std::string;
using std::cout;

pplx::task<void> CreateUser(string UserName, string FullName, string Password, string Email)
{
    http_client_config config;
    credentials creds(U("__userRegistrant"), U("__userRegistrant"));
    config.set_credentials(creds);

    http_client session(U(""), config);

    http_request request;

    cout << "Creating user..." << std::endl;

    string requestBody = "{UserName:\"" + UserName + "\",FullName:\"" + FullName +
                         "\",Password:\"" + Password + "\",Email:\"" + Email + "\"}";
    cout << "User creation request: " << requestBody << std::endl << std::endl;

    request.set_method(methods::POST);
    request.set_request_uri(uri(U("/ApplicationData.svc/UserProfiles")));
    request.set_body(requestBody, L"application/json");
    request.headers().add(header_names::accept, U("application/json"));

    return session.request(request).then([] (pplx::task<http_response> responseTask)
    {
        http_response response;
        try
        {
            response = responseTask.get();
        }
        catch (http_exception &e)
        {
            if (e.error_code().value() == 12007)
                std::cerr << "No internet connection or the host server is down." << std::endl;
            else
                std::cerr << e.what() << std::endl;
            return;
        }

        std::wstring responseBodyU = response.extract_string().get();
        string responseBody = utility::conversions::to_utf8string(responseBodyU);
        status_code responseStatus = response.status_code();

        if (responseStatus == status_codes::Created)
            cout << "User created successfully." << std::endl;

        else if (responseStatus == status_codes::InternalError)
        {
            // Brute-force search for the error code inside the <Id> tag
            size_t idStart = responseBody.find("<Id>");
            size_t idEnd = responseBody.find("</Id>");

            if (idStart != string::npos && idEnd != string::npos)
                cout << "Error code: "
                     << responseBody.substr(idStart + 4, idEnd - idStart - 4) << std::endl;
            else
                cout << "An internal server error occurred." << std::endl;
        }
        else
        {
            cout << "Unexpected result:" << std::endl;
            cout << responseStatus << " ";
            ucout << response.reason_phrase() << std::endl;
            cout << responseBody << std::endl;
        }
    });
}

pplx::task<void> GetProfile(std::wstring UserName, std::wstring Password)
{
    http_client_config config;
    credentials creds(UserName, Password);
    config.set_credentials(creds);

    http_client session(U(""), config);

    cout << "Fetching user profile..." << std::endl;

    return session.request(methods::GET, L"/ApplicationData.svc/UserProfiles?$format=json")

    .then([] (http_response response)
    {
        status_code responseStatus = response.status_code();

        if (responseStatus == status_codes::Unauthorized)
            cout << "The username or password is incorrect." << std::endl;

        else if (responseStatus != status_codes::OK)
        {
            cout << "Unexpected result:" << std::endl;
            cout << responseStatus << " ";
            ucout << response.reason_phrase() << std::endl;
        }
        else
        {
            json::value responseJ = response.extract_json().get();
            json::value &profile = responseJ[L"value"][0];

            std::wcout << "Full name: " << profile[L"FullName"].as_string() << std::endl;
            std::wcout << "Email    : " << profile[L"Email"].as_string() << std::endl;
        }
    });
}

int main()
{
    string UserName = R"##(jondoe)##";
    string FullName = R"##(Jon Doe)##";
    string Password = R"##(somepassword12345)##";
    string Email = R"##(jon@doe.com)##";

    CreateUser(UserName, FullName, Password, Email).wait();

    std::wstring profileUserName = LR"##(jondoe)##";
    std::wstring profilePassword = LR"##(somepassword12345)##";

    GetProfile(profileUserName, profilePassword).wait();

    while (true); // keep the console window open
}
Maintenance and extensibility
What we’ve done so far works but it is far from optimal from a development point of view. Here are some of the problems:
- username and password must be supplied with every request
- due to the stateless nature of the protocol, there is no way to know if the user is metaphorically “logged in” or not
- the interface (output or other application-specific processing of results) is mixed up with the request/response logic. We would like to separate these so we can re-use our network code in multiple games/apps.
- the universal LightSwitch/OData handling code is mixed up with the code specific to the requests/functions/tables available in our game network. We would like to separate these so the LightSwitch client code can be re-used in other applications that aren’t related to our game network project.
- adding new request functions means we’ll have to add new response/error checking/validation code that will be similar for many requests
- we are not constructing JSON requests in a safe way (recall that in the CreateUser example we made the request by joining strings together)
- iterating through many JSON objects is syntactically messy. We would like to convert returned rows to C++ structs with a property for each field.
- there is no way for the main thread to know if the request was successful or an error occurred
- there is no way for the main thread to know if the network code is still busy processing the request without also blocking it (using pplx::task::wait())
- the server URL is repeated in every request function
- mixing of different string types makes code maintainability harder
All of this can be solved by producing a class framework which:
- stores persistent data (server URL, login credentials)
- has a number of helper functions (boxing/unboxing JSON requests, generating row insert/row query requests, error-checking)
- tracks whether the current login credentials were valid last time they were used (indicating ‘successful logon’)
- maintains state in a thread-safe manner about whether the network code is busy processing a request, which can be polled by the main thread
- has an inheritance hierarchy that separates LightSwitch/OData logic, game network logic and logic for our specific game
- is used in each game by a separate application-specific class containing the game’s interface which will be linked to the network code via task continuations
The full source code for just such a framework can be found at the bottom of the article. I’m not going to go over it line by line but I will highlight a few features of the code we haven’t looked over yet.
Framework Details
Three classes are involved:
- LightSwitchClient – generic functions for inserting and querying rows and performing generalized error-handling, tracking logon and busy state and generating and accessing JSON data
- GameNetwork – derives from LightSwitchClient and includes the functions/tables supported by our GameNetwork LightSwitch project
- GameClient – the game’s interface and has GameNetwork as a member through which the LightSwitch server is accessed
If your game network will have various functions that aren’t game-specific as well as some that are – and this is probably going to be the case – you may wish to further derive from GameNetwork so that this class does not have to be modified with game-specific code.
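An illustrative skeleton of the three-class layout just described; the member names here are placeholders, not the framework's real interface:

```cpp
#include <string>

// Generic LightSwitch/OData logic lives at the base of the hierarchy.
class LightSwitchClient
{
public:
    void SetServer(const std::wstring &url) { serverUrl = url; }
    const std::wstring &Server() const { return serverUrl; }
private:
    std::wstring serverUrl;
};

// Game-network-specific requests build on the generic client.
class GameNetwork : public LightSwitchClient
{
public:
    bool LoggedIn() const { return loggedIn; }
protected:
    bool loggedIn = false;
};

// The game's interface owns a GameNetwork rather than inheriting from it.
class GameClient
{
public:
    GameNetwork network;
};
```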
Usage:
Error handling
Instead of outputting error messages directly, we store them for later retrieval and the client code can call LightSwitchClient::GetError() to check if an error occurred. All error types – no internet connection, HTTP error status codes and LightSwitch errors are funneled through this mechanism so that error-checking by the client can be done in a simple unified way.
Credentials
The desired user’s login and password can be set via LightSwitchClient::SetUser(). This is initially assumed to be a valid user and this assumption changes if a request returns a 401 Unauthorized error, or if LightSwitchClient::LogOut() is called, clearing the stored credentials. The login state can be checked via bool LightSwitchClient::LoggedIn().
Busy state
We create a type ThreadSafeBool which can be converted to the standard bool type and back via overloaded operators. The class essentially wraps a single bool in a Windows CRITICAL_SECTION such that it can be read and written by multiple threads without corruption. We then store an instance of this object in our class framework which is set to true at the start of any request and false when the request completes (with or without errors). Call LightSwitchClient::Busy() to get the busy state.
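The framework's version wraps a Windows CRITICAL_SECTION; a portable sketch of the same idea using std::mutex might look like the following (in C++11, a std::atomic<bool> would do the same job with less code). The class name matches the framework, but the implementation here is our own:

```cpp
#include <mutex>

// A bool that can be read and written from multiple threads without
// tearing, via conversion operators mirroring the framework's design.
class ThreadSafeBool
{
    mutable std::mutex m;
    bool value = false;
public:
    ThreadSafeBool &operator=(bool b)
    {
        std::lock_guard<std::mutex> lock(m);
        value = b;
        return *this;
    }
    operator bool() const
    {
        std::lock_guard<std::mutex> lock(m);
        return value;
    }
};
```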
Techniques (the following code is all included in the framework; it is provided here for educational purposes if you want to roll your own):
LightSwitch error handling
You can extract the error code from a failed LightSwitch request as follows:
if (responseStatus == status_codes::InternalError || responseStatus == status_codes::NotFound)
{
    wstring const &body = response.extract_string().get();

    std::wregex rgx(LR"##(.*<Id>(.*?)</Id>.*)##");
    std::wsmatch match;

    if (std::regex_search(body.begin(), body.end(), match, rgx))
    {
        lastError = match[1];
        return false;
    }

    rgx = LR"##(.*<Message>(.*?)</Message>.*)##";

    if (std::regex_search(body.begin(), body.end(), match, rgx))
    {
        lastError = match[1];
        return false;
    }

    lastError = L"An internal server error occurred and no error code or message was returned.";
    return false;
}
Some responses (mainly those as a result of a 404 Not Found error) don’t have <Id> tags with an error code, so in those cases we try to extract the error text from the <Message> tag instead. If neither are found, a default error message is returned.
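The extraction itself can be exercised in isolation against a made-up response body (the error code below is invented for the demonstration):

```cpp
#include <regex>
#include <string>

// Isolated demonstration of the <Id> extraction used above. Returns the
// error code found in a LightSwitch-style error body, or an empty string
// if no <Id> tag is present.
std::wstring ExtractErrorId(const std::wstring &body)
{
    std::wregex rgx(LR"##(.*<Id>(.*?)</Id>.*)##");
    std::wsmatch match;
    if (std::regex_search(body.begin(), body.end(), match, rgx))
        return match[1];
    return L"";
}
```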
Extracting JSON data
Unlike querying a row which was described earlier, inserting a row in the database will return a JSON object which contains the inserted fields, without the extra object/array wrapping. You can bypass all of this and ensure you get the data you want regardless of request type as follows:
json::value LightSwitchClient::SanitizeJSON(http_response &response)
{
    json::value responseJson = response.extract_json().get();

    if (responseJson.is_object())
    {
        if (responseJson.size() == 2)
        {
            if (responseJson.has_field(L"value"))
            {
                json::value value = responseJson[L"value"];

                if (value.is_array())
                    return value;
                else
                    return responseJson;
            }
            else
                return responseJson;
        }
        else
            return responseJson;
    }

    lastError = L"JSON response corrupted";
    return json::value();
}
In a nutshell, if the response JSON data is a 2-element object where one of the properties is called value and is itself an array, then it’s most likely we have just received the query results of one or more rows so we return the array directly; in all other cases we return the original response. If the JSON data is anything besides an object, it is probably corrupt data.
Encapsulating JSON data
We define a type JsonFields which is a simple mapping of keys to values using std::map as follows:
typedef std::map<wstring, wstring> JsonFields;
Unlike json::value, we can use a C++11 initializer list to populate this very easily; for example, to create a user profile JSON object we could write something like:
JsonFields userProfile{ { L"UserName", UserName }, { L"FullName", FullName }, { L"Password", Password }, { L"Email", Email } };
JsonFields can be passed to various functions in the framework and are easily converted internally to json::value objects for sending an HTTP request as follows:
JsonFields args...;
...
json::value reqJson;

for (auto &kv : args)
    reqJson[kv.first] = json::value::string(kv.second);

wstring requestBody = reqJson.serialize();
Invalid URI errors
Calling http_client::request() will throw a uri_exception if there is a problem with the supplied URI. We catch this as follows:
try
{
    return session.request(....).then(...);
}
catch (uri_exception &e)
{
    lastError = utility::conversions::to_utf16string(e.what());
    busy = false;
    return pplx::task_from_result(json::value());
}
Note that we have to return a pplx::task, but when an error occurs there is no task to perform. Luckily we can use pplx::task_from_result(T value) to generate a task that simply returns the supplied value immediately.
Retrieve only the first row matching a query
You can add the OData directive $top=1 to a URL’s query string to fetch only the first matching row of a query, and then look at the first element of the array returned by SanitizeJSON above (the framework includes a function LightSwitchClient::QueryFirstRow() to do this for you).
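As an illustration, a tiny helper might assemble such a query string; the function name and signature here are hypothetical and not part of the framework's actual API:

```cpp
#include <string>

// Hypothetical helper: assembles an OData query string that asks the
// server for only the first matching row, optionally with a $filter.
std::wstring BuildTopOneQuery(const std::wstring &table,
                              const std::wstring &filter = L"")
{
    std::wstring url = table + L"?";
    if (!filter.empty())
        url += L"$filter=" + filter + L"&";   // optional OData filter clause
    url += L"$top=1";                         // only the first matching row
    return url;
}
```

BuildTopOneQuery(L"UserProfiles") produces UserProfiles?$top=1, for example.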
Updating table rows
Although our registration and login example doesn’t require it, the framework also allows you to update rows with one or more changed fields. OData uses the HTTP PATCH method to do this. The HTTP request should be formed in the same way as for inserting rows but with one additional header:
If-Match: *
This is a requirement in LightSwitch and simply means that any matching entity (row) can be updated.
The URL should point to the row or rows to be updated. To point to a single row in a LightSwitch application, the auto-generated Id field for each table is used as the primary key, placed in brackets (parentheses) after the table name. For example, a request URL ending in UserProfiles(1234) will select the row for the user with Id 1234.
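For illustration, the key-in-brackets address could be built like this (a hypothetical helper, not part of the framework's API):

```cpp
#include <string>

// Hypothetical helper: builds the OData single-row address used for
// PATCH/DELETE requests, e.g. UserProfiles(1234).
std::wstring RowUrl(const std::wstring &table, int key)
{
    return table + L"(" + std::to_wstring(key) + L")";
}
```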
The LightSwitchClient::Update(wstring table, int key, JsonFields fields) function in the framework will handle table row updates for you automatically.
Note that on a successful update, the server will return 204 No Content with an empty response body.
NOTE: HTTP PUT can also be used but this updates all the fields in a matching row, even if you don’t specify them in the request (in that case, they will be blanked).
Deleting table rows
Once again not called for in our example code but available in the framework, deleting rows uses the HTTP DELETE method and has the same URL and HTTP header requirements as for updating rows, but no request body needs to be specified as there is nothing to update. Deleting rows also returns 204 No Content with an empty response body from the server on success.
Warning about row update/delete security
Ensure that users can only modify table rows that they should be modifying!
While in this case, users are restricted to viewing their own profile row and will encounter a 404 Not Found error if they try to access someone else’s, there is no harm in being paranoid! In the server code following on from part 3, I added the following business logic to the UserProfiles table (C#):
partial void UserProfiles_CanDelete(ref bool result)
{
    // Only allow administrators to delete users
    result = Application.Current.User.HasPermission(Permissions.SecurityAdministration);
}
Be careful though. In part 2 we allowed __userRegistrant to add users by performing a temporary privilege elevation. However, we implemented this in SaveChanges_Executing which actually runs before UserProfiles_CanDelete in the save pipeline, so as things stand now the delete will always be allowed. To fix this, move this line:
Application.Current.User.AddPermissions(Permissions.SecurityAdministration);
out of SaveChanges_Executing() and insert it at the beginning of UserProfiles_Inserting() instead.
Game Network implementation
We will now layer the functions specific to our GameNetwork LightSwitch project from the rest of the series on top of the LightSwitchClient class.
Creating a C++ struct to represent a JSON object
Here is an example of how to create a struct that is easily convertible to and from a JSON object. The more adventurous among you may want to use type reflection to avoid having to write the ToJSON() and FromJSON() methods for every new type.
struct UserProfile
{
    wstring UserName;
    wstring Password;
    wstring FullName;
    wstring Email;
    int Id;

    JsonFields ToJSON()
    {
        return JsonFields {
            { L"UserName", UserName },
            { L"FullName", FullName },
            { L"Password", Password },
            { L"Email", Email }
        };
    }

    static UserProfile FromJSON(json::value j)
    {
        if (j.is_null())
            return UserProfile{};

        UserProfile p {
            j[L"UserName"].as_string(),
            j[L"Password"].as_string(),
            j[L"FullName"].as_string(),
            j[L"Email"].as_string(),
            j[L"Id"].as_integer()
        };
        return p;
    }
};
The code should be fairly self-explanatory, but note that – crucially – Id is defined last so that you can use an initializer list to create a new UserProfile without specifying an ID, since that will be automatically assigned by the LightSwitch server.
WARNING: For reasons known only to Microsoft, trying to return a UserProfile created with an initializer list directly in FromJSON() crashes the Visual Studio 2013 C++ compiler and returns an empty struct with the November 2013 CTP compiler. This is why I create it in “p” first. If you declare Id as the first item in the struct, returning directly with an initializer list works as expected on both compilers.
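To see why the member order matters, here is a stripped-down version of the struct (strings only, no JSON) demonstrating that a braced initializer can omit the trailing Id, which is then value-initialized to zero:

```cpp
#include <string>
using std::wstring;

// Stripped-down illustration: because Id is declared last, aggregate
// initialization lets us omit it from the braced list, and it is
// value-initialized to 0 until the server assigns a real key.
struct Profile {
    wstring UserName;
    wstring Password;
    wstring FullName;
    wstring Email;
    int Id;   // declared last on purpose
};
```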
Game Network client implementation
We define one method in GameNetwork for each possible action we want to perform on the server. In our example, we are creating a user and fetching a user’s profile so we need two methods. We also define callbacks that will trigger when a request completes, such that the main application knows a response has been received – this solves the signalling problem described earlier.
The interface:
// =================================================================================
// Handler functions
// =================================================================================

typedef std::function<void(json::value)> ODataResultHandler;
typedef std::function<void(UserProfile)> UserProfileHandler;

// =================================================================================
// Game server functions
// =================================================================================

class GameNetwork : public LightSwitchClient
{
public:
    GameNetwork() : LightSwitchClient(L"") {}

    pplx::task<UserProfile> GetProfile(UserProfileHandler f = nullptr);
    pplx::task<UserProfile> CreateUser(UserProfile profile, UserProfileHandler f = nullptr);
};
With all the work we’ve done in LightSwitchClient, the actual implementation is remarkably simple – which is exactly what we want, because it makes adding new methods a breeze:
pplx::task<UserProfile> GameNetwork::GetProfile(UserProfileHandler f)
{
    return QueryFirstRow(L"UserProfiles").then([f](json::value j) {
        UserProfile p = UserProfile::FromJSON(j);
        if (f) f(p);
        return p;
    });
}

pplx::task<UserProfile> GameNetwork::CreateUser(UserProfile profile, UserProfileHandler f)
{
    SetUser(L"__userRegistrant", L"__userRegistrant");

    return Insert(L"UserProfiles", profile.ToJSON()).then([f, profile](json::value j) {
        UserProfile p = UserProfile::FromJSON(j);
        p.UserName = profile.UserName;
        if (f) f(p);
        return p;
    });
}
Let’s take a closer look at this.
Fetch user profile
Line 1 of the return statement fetches the first row from UserProfiles whose UserName field matches the name of the currently logged-in user (this happens even without any query parameters because, as we configured in Part 2, the server only returns the current user’s profile row for security reasons), receiving the row as a json::value.
Line 2 converts the json::value into a UserProfile object.
Line 3 calls the application-defined callback if one has been set.
Line 4 returns the UserProfile object to the thread which created the task.
Create new user
Line 1 sets the current user to the special user registration account __userRegistrant which we defined in part 2.
Line 2 converts the supplied new UserProfile object to a JsonFields object, inserts it into the database (which calls the UserProfiles table business logic we defined on the server to validate all the fields and update the ASP.NET Membership database at the same time, as well as assigning the new user to the Player role), and fetches the server's version of the new profile as a json::value.
Line 3 converts the json::value into a UserProfile object.
Line 4 sets the UserName field. This is important, because the application-defined callback may need it, but if an error occurred, the server will not return a new JSON profile object, so when the conversion takes place in line 3, the resulting UserProfile object will not have any of its fields populated. When a new user is created successfully, this line of code has no effect.
Line 5 calls the application-defined callback if one has been set.
Line 6 returns the UserProfile object to the thread which created the task.
As you can see, adding new functions to the GameNetwork implementation will be trivially easy in most cases thanks to the dirty work being done in LightSwitchClient for us.
Game interface implementation
Now we turn to the final piece of the puzzle: the game itself, which actually calls these functions in GameNetwork and does something with the results. Because all of the client-server logic is now abstracted away, we can plug in whatever behaviours we want and re-use all of the previous code in any game or application. So let us now re-write the previous examples to use this new framework.
The game client definition:
class GameClient
{
    GameNetwork cloud;

    void UserCreated(UserProfile profile);
    void ProfileReceived(UserProfile profile);

public:
    void Run();
};
In this simple example, we define one method Run() which will be the actual main application code, and two callbacks which are called when a new user is created or a profile is fetched (or an error occurs trying to do either of these things).
The full source code is available at the end of the article, but the relevant part of the Run() implementation is:
void GameClient::Run()
{
    ...
    wcout << std::endl << "Creating user..." << std::endl;

    cloud.CreateUser(UserProfile{ UserName, Password, FullName, Email },
                     std::bind(&GameClient::UserCreated, this, _1));

    while (cloud.Busy())
    {
        wcout << ".";
        Sleep(10);
    }

    wcout << "Fetching user profile..." << std::endl;

    cloud.SetUser(UserName, Password);
    cloud.GetProfile(std::bind(&GameClient::ProfileReceived, this, _1));

    while (cloud.Busy())
    {
        wcout << ".";
        Sleep(10);
    }
}
As you can see, we merely call GameNetwork::CreateUser() and GameNetwork::GetProfile() with appropriate arguments and sit back and wait until the work is done. Instead of blocking the thread with pplx::task::wait() as we did in the original examples, we now poll the GameNetwork object’s Busy() function repeatedly until it becomes false. For the sake of proving that the network code does in fact run in another thread, we print dots every 10ms until each request completes (note: you may notice when running this code that the order of output of text and dots on the console is not correct; this is because console writes are not atomic operations and therefore, not thread-safe and may be executed out of order. In a DirectX/OpenGL or Windows GUI application this will not be an issue).
std::bind is used to set the callback to a method of an object instance. The syntax:
std::bind(&MyClass::MyMethod, this, _1)
can be used anywhere in C++ where you might need a function pointer that is a pointer to a member function of the calling object.
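A self-contained illustration of the pattern (the type and function names below are invented for the example):

```cpp
#include <functional>
#include <string>

using namespace std::placeholders;

// Invented example type: stores the last result code it was told about.
struct Listener {
    std::string last;
    void OnResult(int code) { last = "code=" + std::to_string(code); }
};

// Binds a member function to a specific instance, yielding a plain
// callable that forwards its single argument (_1) to that method.
std::function<void(int)> MakeCallback(Listener &l)
{
    return std::bind(&Listener::OnResult, &l, _1);
}
```

Calling the resulting std::function invokes Listener::OnResult on the bound instance, which is exactly how GameClient's member callbacks are wired up above.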
The actual callback functions merely print out a friendly error message where possible if an error occurred, or the actual result of the server request if it completed successfully:
void GameClient::UserCreated(UserProfile p)
{
    wcout << std::endl;

    // Returns 201 when the user is created, 500 otherwise
    if (p.Id == 0)
    {
        wstring &errorCode = cloud.GetError();

        if (errorCode == L"Microsoft.LightSwitch.UserRegistration.DuplicateUserName")
            wcout << "Username '" << p.UserName << "' already exists." << std::endl;

        else if (errorCode == L"Microsoft.LightSwitch.UserRegistration.PasswordDoesNotMeetRequirements")
            wcout << "Password does not meet requirements." << std::endl;

        else if (errorCode == L"Microsoft.LightSwitch.Extensions.EmailAddressValidator.InvalidValue")
            wcout << "Invalid email address supplied." << std::endl;

        else
            wcout << cloud.GetError() << std::endl;

        return;
    }

    wcout << "User '" << p.UserName << "' created successfully." << std::endl;
}

void GameClient::ProfileReceived(UserProfile p)
{
    wcout << std::endl;

    if (p.UserName != L"")
    {
        wcout << "Logged in successfully" << std::endl;
        wcout << "Full name: " << p.FullName << std::endl;
        wcout << "Email    : " << p.Email << std::endl;
    }
    else
        wcout << cloud.GetError() << std::endl;
}
Note the method of checking for errors: when creating a user, UserProfile::Id will be zero if creation failed; when fetching a user profile, UserProfile::UserName will be blank if the fetch failed. GameNetwork::GetError() (inherited from LightSwitchClient) is used to find the relevant error code or error message. In the case of LightSwitch error codes, the callback converts them into human-readable error messages.
Example Output
Here is how the final sample looks when you run it:
Register new user Enter username: djkaty1 Enter password: [ELIDED] Enter full name: efwefjiwefjiweiof Enter email address: [ELIDED] Creating user... ..................................................... .................................. Username 'djkaty1' already exists. Fetching user profile... .................................... Logged in successfully Full name: Noisy Cow Email : some@email.com
Wrapping up
Now the low-level stuff is out of the way, we are ready to move on to integrating the client code with a graphical interface, which will be the subject of part 5. I’m still sick so please donate to my final wishes crowdfund if you found this article useful!
Until next time!
Source code and executable
Download all of the source code and pre-compiled EXE for this article
References
Here are some pages I found useful while researching this article:
Information Security: Is BASIC-Auth secure if done over HTTPS?
MSDN Blogs: The C++ REST SDK (“Casablanca”)
OData.org: Protocol Operations
JSON Spirit: an alternative to JSON processing in the C++ REST SDK if you wish to use Boost Spirit
MSDN Blogs: Creating and Consuming LightSwitch OData Services (Beth Massi)
InformIT: Get to Know the New C++11 Initialization Forms
Dr. Dobbs: Using the Microsoft C++ REST SDK
Dr. Dobbs: JSON and the Microsoft C++ REST SDK
That is some pretty sexy code; C++ 11 goodness everywhere 😮 I thoroughly appreciate this series of articles for both its content value _and_ its style ! Keep up the good work 🙂
Any updates, programming/life?
8 months later… there is going to be one posted tomorrow 🙂
Can I see credentials form http_request author ?
Wow, I’m impressed, awesome write up, will have to visit here more often 🙂
P.S. I’ve been doing software development on and off (do hardware 1/2 time) about 30 years now. So, it takes a lot to impress me 🙂
Keep up the great work!
The Sorting Table Stylesheet
Now that we have walked through the process of converting a ResultSet to a JDOM representation, it's time to take a look at the transforming stylesheet.
This XSL stylesheet is as generic as possible. In fact, it can be used to transform many different sets of data without altering anything. Any XML document that has grandchild elements with text data and child elements that group the grandchild elements can be output as a table with linked labels using this stylesheet.
The stylesheet has two root-level parameters. The first, Sort, is the column number by which to sort the resulting table. This parameter is used to enable the JSP to pass this sorting information through from the URL parameter. The value of the second parameter, Page, is used to create the links that will cause the re-sorting of the report.
The stylesheet is composed of three templates. The first template will match on the root element and begin the processing of the entire document. The next template will create all the table column labels, which are linked to cause the document to reload sorted on the chosen row. The last template will match each child of the root element in turn and output all element children as table cells.
The stylesheet begins with the XML document declaration, and root xsl:stylesheet element that describes the namespace of the XSL elements. The root level parameters are then declared void of default values, and the output method is set:
<?xml version="1.0" encoding="UTF-8"?>
<xsl:stylesheet version="1.0" xmlns:xsl="http://www.w3.org/1999/XSL/Transform">
  <xsl:param name="Sort"/>
  <xsl:param name="Page"/>
  <xsl:output method="html"/>
Next is the start of the first template. This template matches the root of the XML document with the match attribute value of /. This causes the entire XML document to pass into this template. The HTML div and table elements are output at this point, and the header template is called with an empty body through the call-template element. Then, the body of the header template is executed with the context node as the root of the document:
<xsl:template match="/">
  <table border="1">
    <xsl:call-template name="header"/>
Next, an apply-templates is used to select each element that matches the XPath statement */*. This statement will select each child element of the root in turn and match each to a template with a matching select attribute.
Notice that the body of this template tag is not empty. Inside, there is a sort element that selects which of a set of nodes to be sorted on. More specifically, this sort element selects the child whose position number is equal to the $Sort parameter by which to sort. This parameter-dependent statement enables us to dynamically sort the output. When these operations are completed, the end table HTML tag is output and the template is finished:
    <xsl:apply-templates select="*/*">
      <xsl:sort select="*[position() = $Sort]"/>
    </xsl:apply-templates>
  </table>
</xsl:template>
Let's look at the next template that creates the linked table labels. This again starts out by defining the template tag with the name attribute value of the header:
<!-- creates the table headers from the tag names -->
<xsl:template name="header">
Next, a table row tag is output, followed by the beginning of a for-each loop. This loop selects each grandchild of the root element whose parent is first in the sibling position. This results in the exclusive selection in turn of each child that is descended from the first child of the root element. In other words, it selects each column of data one at a time from the first record of the ResultSet from which this XML descends:
<tr>
  <xsl:for-each select="*/*[1]/*">
This causes the stylesheet to properly handle any number of data columns from the original ResultSet.
Within this loop, each link is created through the use of the appropriate text and stylesheet parameters. In this case, href is equal to the $Page variable set previously through the JSP, and the number of the data column in terms of sibling position returned by the position() method:
<th>
  <A href="{$Page}?sort={position()}">
In the preceding code snippet, the shorthand value-of notation is used, namely the curly brackets. This permits the inclusion of the results of XPath expressions within other output tags. If this feature were unavailable, the only way to access this information would be through the use of a value-of tag. This would make it impossible to dynamically create HTML element attribute values, because tags cannot contain other tags.
Now that the anchor element has been created with the proper href attribute value, the tag name of each element will be selected. This enables us to label the HTML table with each element's tag name regardless of the number of columns found in the original record set. This is achieved through the use of the local-name() method with the . parameter, which denotes this:
<xsl:value-of select="local-name(.)"/>
Next, the anchor and table head cell is closed, as is the for-each loop that iterated through each element. The table row is closed and the template is complete:
  </A>
  </th>
  </xsl:for-each>
</tr>
</xsl:template>
Last up is the template that will select each child element of the root, no matter how many, and output a table row formatted with cells for each text data containing child elements.
Like the previous template, this one iterates through each child of the root, except this one doesn't exclude all but the first child element. Once matched, a table row tag will be output, and a for-each element will iterate through each child element of the currently selected element through the *:
<!-- creates a row for each child of root, -->
<!-- and cell for each grandchild of root -->
<xsl:template match="*/*">
  <tr>
    <xsl:for-each select="*">
Next, the . notation is used to output the value of this, which will be each column of data found in one record. Finally, the table row is closed, and the template ended:
      <td><xsl:value-of select="."/></td>
    </xsl:for-each>
  </tr>
</xsl:template>
Finally, the stylesheet root element is closed, and the document is finished. The complete stylesheet follows in Listing 11.6, and should be saved as \webapps\xmlbook\chapter11\TableSort.xsl.
Listing 11.6 TableSort.xsl
<?xml version="1.0" encoding="UTF-8"?>
<xsl:stylesheet version="1.0" xmlns:xsl="http://www.w3.org/1999/XSL/Transform">

  <xsl:param name="Sort"/>
  <xsl:param name="Page"/>

  <xsl:output method="html"/>

  <xsl:template match="/">
    <table border="1">
      <xsl:call-template name="header"/>
      <xsl:apply-templates select="*/*">
        <xsl:sort select="*[position() = $Sort]"/>
      </xsl:apply-templates>
    </table>
  </xsl:template>

  <!-- creates the table headers from the tag names -->
  <xsl:template name="header">
    <tr>
      <xsl:for-each select="*/*[1]/*">
        <th>
          <A href="{$Page}?sort={position()}">
            <xsl:value-of select="local-name(.)"/>
          </A>
        </th>
      </xsl:for-each>
    </tr>
  </xsl:template>

  <!-- creates a row for each child of root, -->
  <!-- and cell for each grandchild of root -->
  <xsl:template match="*/*">
    <tr>
      <xsl:for-each select="*">
        <td><xsl:value-of select="."/></td>
      </xsl:for-each>
    </tr>
  </xsl:template>

</xsl:stylesheet>
The output of the previous Java class, JSP, and XSL stylesheet is as shown in Figure 11.1.
Figure 11.1. Results of DBtoXML.jsp, RStoXML.java, and TableSort.xsl.
If you encounter problems, verify that Xerces, Xalan, and JDOM have been installed. These examples depend on that software for successful execution.
Notice that the column heads are linked properly in order to cause the re-sorting of the document as shown, with the URL shown in the status bar. Also, the data is sorted on column number 1, as was set in the catch loop of parsing the query parameter in the JSP. To change this default value, just alter the parameter in the JSP.
This has been an example to demonstrate how easy it is to create a large number of reports. Simply by changing the data set that the stylesheet transforms, a large number of custom-arrangeable table reports can be created.
Hello everyone (again).
So, I've tried many things to solve it, but I can't; one of the problems is that I have to use very large numbers to apply forces, like 50000.
I thought it was my mistake, but I've copied code exactly from another project that works fine, yet mine doesn't work the way I want.
So, I know that my post is a little confusing, but I beg for your help, please. XD
Here is a link to download my project:
14 people downloaded my archive, but no one could help me?
Hi,
it seems you did not scale your units to mks (meter, kilogram, second) but are working with pixels instead. Do a search for "mks" and "scale" on this discussion board. It has come up here at least a dozen times.
Also take a look at the "Hello World" sample for a really simple example.
thank you, now it is working correctly.
I've implemented the ConvertUnits class that I found here
and this:
private int RoundToInt(float inFloat)
{
    int retInt = (int)inFloat;
    if (inFloat - retInt >= 0.5f)
    {
        retInt++;
    }
    return retInt;
}

protected override void Draw(GameTime gameTime)
{
    spriteBatch.Begin();
    spriteBatch.Draw(charTex,
        new Rectangle(RoundToInt(ConvertUnits.ToDisplayUnits(character.Body.Position.X)),
                      RoundToInt(ConvertUnits.ToDisplayUnits(character.Body.Position.Y)), 49, 65),
        new Rectangle(0, 0, charTex.Width, charTex.Height),
        Color.White, 0,
        new Vector2(charTex.Width / 2, charTex.Height / 2),
        SpriteEffects.None, 0.5f);
    spriteBatch.End();
    base.Draw(gameTime);
}
to draw my texture, thank you.
round to int can be done better:
return (int)(inFloat + 0.5f);
I'll try to do this.
Siri Remote and Bluetooth Controllers
last updated: 2017-03
This article covers supporting the new Siri Remote and Bluetooth game controllers in your Xamarin.tvOS apps.
Overview
Users of your Xamarin.tvOS app will not be interacting with its interface directly as on iOS, where they tap images on the device's screen, but indirectly from across the room using the Siri Remote.
If your app is a game, you can optionally build in support for 3rd party, Made For iOS (MFI) Bluetooth Game Controllers in your app as well.
This article describes the Siri Remote, Touch Surface Gestures and Siri Remote Buttons and shows how to work with them via Gestures and Storyboards, Gestures and Code and Low-Level Event Handling. Finally, it discusses Working with Game Controllers in a Xamarin.tvOS app.
The Siri Remote
The main way that users will be interacting with the Apple TV, and your Xamarin.tvOS app, is through the included Siri Remote. Apple designed the remote to bridge the distance between the user sitting on the couch and the Apple TV's user interface displayed across the room on the TV screen.
Your challenge as a tvOS app developer is to create a fast, easy-to-use and visually compelling user interface that leverages the Siri Remote's touch surface, accelerometer, gyroscope and buttons.
The Siri Remote has the following features and expected usages within your tvOS app:
Touch Surface Gestures
The Siri Remote's Touch Surface is able to detect a variety of single-finger gestures that you can respond to in your Xamarin.tvOS app:
Apple provides the following suggestions for working with Touch Surface gestures:
- Differentiate between Clicks and Taps - Clicking is an intentional action by the user and is well suited for selection, activation and the primary button of a game. Tapping is more subtle and should be used sparingly because the user is often holding the Siri Remote in their hand and can accidentally activate a Tap event easily.
- Don't Redefine Standard Gestures - The user has an expectation that specific gestures will perform specific actions, you shouldn't redefine the meaning or function of these gestures in your app. The one exception is a game app during active gameplay.
- Define New Gestures Sparingly - Again, the user has an expectation that specific gestures will perform specific actions. You should avoid defining custom gestures to perform standard actions. And again, games are the most usual exception where custom gestures can add fun, immersive play to the game.
- If Appropriate, Respond to D-Pad Taps - Lightly tapping on the corner edges of the Touch Surface will react like a D-Pad on a game controller moving focus or direction up, down, left or right. If appropriate, you should respond to these gestures in your app or game.
Siri Remote Buttons
In addition to gestures on the Touch Surface, your app can respond to the user clicking the Touch Surface or pressing the Play/Pause button. If you are accessing the Siri Remote using the Game Controller Framework, you can also detect the Menu button being pressed.
Additionally, Menu button presses can be detected using a Gesture Recognizer with standard UIKit elements. If you intercept the Menu button being pressed, you'll be responsible for closing the current View and View Controller and returning to the previous one.
⚠️
NOTE: You should always assign a function to the Play/Pause button on the remote. Having a non-functional button can make your app look broken to the end user. If you don't have a valid function for this button, assign the same function as the primary button (Touch Surface Click).
Gestures and Storyboards
The easiest way to work with the Siri Remote in your Xamarin.tvOS app is to add Gesture Recognizers to your views in the Interface Designer.
To add a Gestures Recognizer, do the following:
- In the Solution Explorer, double-click the Main.storyboard file and open it for editing in the Interface Designer.
Drag a Tap Gesture Recognizer from the Library and drop it on the View:
Check Select in the Button section of the Attribute Inspector:
- Select means the gesture will respond to the user clicking the Touch Surface on the Siri Remote. You also have the option of responding to the Menu, Play/Pause, Up, Down, Left and Right buttons.
Next, wire up an Action from the Tap Gesture Recognizer and call it
TouchSurfaceClicked:
- Save your changes and return to Xamarin Studio.
Edit your View Controller (example
FirstViewController.cs) file and add the following code to handle the gesture being triggered:
using System;
using UIKit;

namespace tvRemote
{
    public partial class FirstViewController : UIViewController
    {
        ...

        #region Custom Actions
        partial void TouchSurfaceClicked (Foundation.NSObject sender)
        {
            // Handle click here
            ...
        }
        #endregion
    }
}
For more information on working with Storyboards, please see our Hello, tvOS Quick Start Guide. Specifically the Introduction to Xcode and Interface Builder and Outlets and Actions sections.
Gestures and Code
Optionally, you can create gestures directly in C# code and add them to views in your User Interface. For example, to add a series of Swipe Gesture Recognizers, edit your View Controller and add the following code:
```csharp
using System;
using UIKit;

namespace tvRemote
{
    public partial class SecondViewController : UIViewController
    {
        #region Constructors
        public SecondViewController (IntPtr handle) : base (handle)
        {
        }
        #endregion

        #region Override Methods
        public override void ViewDidLoad ()
        {
            base.ViewDidLoad ();

            // Wire-up gestures
            var upGesture = new UISwipeGestureRecognizer (() => {
                RemoteView.ArrowPressed = "Up";
                ButtonLabel.Text = "Swiped Up";
            }) {
                Direction = UISwipeGestureRecognizerDirection.Up
            };
            this.View.AddGestureRecognizer (upGesture);

            var downGesture = new UISwipeGestureRecognizer (() => {
                RemoteView.ArrowPressed = "Down";
                ButtonLabel.Text = "Swiped Down";
            }) {
                Direction = UISwipeGestureRecognizerDirection.Down
            };
            this.View.AddGestureRecognizer (downGesture);

            var leftGesture = new UISwipeGestureRecognizer (() => {
                RemoteView.ArrowPressed = "Left";
                ButtonLabel.Text = "Swiped Left";
            }) {
                Direction = UISwipeGestureRecognizerDirection.Left
            };
            this.View.AddGestureRecognizer (leftGesture);

            var rightGesture = new UISwipeGestureRecognizer (() => {
                RemoteView.ArrowPressed = "Right";
                ButtonLabel.Text = "Swiped Right";
            }) {
                Direction = UISwipeGestureRecognizerDirection.Right
            };
            this.View.AddGestureRecognizer (rightGesture);
        }
        #endregion
    }
}
```
Low-Level Event Handling
If you are creating a custom type based on UIKit in your Xamarin.tvOS app (for example, a UIView), you also have the ability to provide low-level handling of button presses via UIPress events.

A UIPress event is to tvOS what a UITouch event is to iOS, except UIPress returns information about button presses on the Siri Remote or other attached Bluetooth devices (like a Game Controller). UIPress events describe the button being pressed and its state (Began, Canceled, Changed or Ended).

For analog buttons on devices like Bluetooth Game Controllers, UIPress also returns the amount of force being applied to the button. The Type property of the UIPress event defines which physical button has changed state, while the rest of the properties describe the change that occurred.

The following code shows an example of handling low-level UIPress events for a UIView:
```csharp
using System;
using Foundation;
using UIKit;

namespace tvRemote
{
    public partial class EventView : UIView
    {
        #region Computed Properties
        public override bool CanBecomeFocused {
            get { return true; }
        }
        #endregion

        #region Constructors
        public EventView (IntPtr handle) : base (handle)
        {
        }
        #endregion

        #region Override Methods
        public override void PressesBegan (NSSet<UIPress> presses, UIPressesEvent evt)
        {
            base.PressesBegan (presses, evt);

            foreach (UIPress press in presses) {
                // Was the Touch Surface clicked?
                if (press.Type == UIPressType.Select) {
                    BackgroundColor = UIColor.Red;
                }
            }
        }

        public override void PressesCancelled (NSSet<UIPress> presses, UIPressesEvent evt)
        {
            base.PressesCancelled (presses, evt);

            foreach (UIPress press in presses) {
                // Was the Touch Surface clicked?
                if (press.Type == UIPressType.Select) {
                    BackgroundColor = UIColor.Clear;
                }
            }
        }

        public override void PressesChanged (NSSet<UIPress> presses, UIPressesEvent evt)
        {
            base.PressesChanged (presses, evt);
        }

        public override void PressesEnded (NSSet<UIPress> presses, UIPressesEvent evt)
        {
            base.PressesEnded (presses, evt);

            foreach (UIPress press in presses) {
                // Was the Touch Surface clicked?
                if (press.Type == UIPressType.Select) {
                    BackgroundColor = UIColor.Clear;
                }
            }
        }
        #endregion
    }
}
```
As with UITouch events, if you need to implement any of the UIPress event overrides, you should implement all four.
Bluetooth Game Controllers
In addition to the standard Siri Remote that ships with the Apple TV, 3rd party, Made For iOS (MFI) Bluetooth Game Controllers can be paired with the Apple TV and used to control your Xamarin.tvOS app.
Game Controllers can be used to enhance gameplay and provide a sense of immersion in a game. They can also be used to control the standard Apple TV interface so the user doesn't have to switch between the remote and the controller.
A Game Controller has the following features and expected usages within your tvOS app:
Apple provides the following suggestions for working with Game Controllers:
- Confirm Game Controller Connections - Your tvOS app can be started and stopped at any time by the end user. You should always check for the presence of a Game Controller at app start or awake times and take action as needed.
- Ensure Your App Works on both Siri Remote and Game Controllers - Don't require users to switch between the Siri Remote and a Game Controller to use your app. Test your app often with both types of controllers ensuring that everything is easy to navigate and works as expected.
- Provide a Way Back - Pressing the Menu button should always return to the previous screen. If the user is at the main app screen, the Menu button should return them to the Apple TV Home screen. During gameplay, the Menu button should display an alert giving the user the ability to pause/resume gameplay or return to the main menu.
Working with Game Controllers
As stated above, in addition to the standard Siri Remote that ships with the Apple TV, the user can optionally attach a 3rd party, Made For iOS (MFI) Bluetooth Game Controller and use it to control your Xamarin.tvOS app.

If your app requires low-level controller input, you can use Apple's Game Controller Framework, which has the following modifications for tvOS:
- The Micro Game Controller profile (GCMicroGamepad) has been added to target the Siri Remote.
- The new GCEventViewController class can be used to route game controller events through your app. See the Determining Game Controller Input section below for more details.
Game Controller Support Requirements
Apple has several specific requirements that must be met if your Xamarin.tvOS app supports Game Controllers:
- You Must Support the Siri Remote - You must always support the Siri Remote. Your game cannot require a 3rd party Game Controller to be playable.
- You Must Support the Extended Control Layout - All tvOS Game Controllers are non-formfitting, extended controllers.
- Games Must be Playable with Stand-Alone Controllers - If your app supports an Extended Game Controller, it must be playable solely with that Game Controller.
- You Must Support the Play/Pause Button - During gameplay, if the user presses the Play/Pause button, you should display an alert giving the user the ability to pause/resume gameplay or return to the main menu.
Enabling Game Controller Support
To enable Game Controller support in your Xamarin.tvOS app, double-click the Info.plist file in the Solution Explorer to open it for editing:
Under the Game Controller section, place a check by Enable Game Controllers, then check all of the Game Controller types that will be supported by the app.
Using the Siri Remote as a Game Controller
The Siri Remote that comes with the Apple TV can be used as a limited Game Controller. Like other Game Controllers, it shows up in the Game Controller Framework as a GCController object and supports both the GCMotion and the GCMicroGamepad profiles.
The Siri Remote has the following characteristics when being used as a Game Controller:
- The Touch Surface can be used as a D-pad that provides analog input data.
- The remote can be used in either a portrait or landscape orientation and your app decides if the profile object should flip input data automatically.
- Clicking the Touch Surface acts like pressing button A on a Game Controller.
- The Play/Pause button acts like button X on a Game Controller.
- The Menu button should display an alert giving the user the ability to pause/resume gameplay or return to the main menu.
Determining Game Controller Input
Unlike iOS, where Game Controller events can be received in parallel with Touch events, tvOS processes all low-level events to deliver high-level UIKit events. As a result, if you need access to the low-level Game Controller events, you'll need to turn off UIKit's default behavior.

On tvOS, when you want to process Game Controller input directly, you need to use a GCEventViewController (or a subclass) to display the game's User Interface. Whenever a GCEventViewController is the First Responder, Game Controller input will be captured and delivered to your app through the Game Controller Framework.

You can use the UserInteractionEnabled property of the GCEventViewController class to toggle how events are processed and handled.
For information about implementing Game Controller support, please see Apple's Working with Game Controllers section in the App Programming Guide for tvOS and Game Controller Programming Guide.
Summary
This article has covered the new Siri Remote that ships with the Apple TV, Touch Surface gestures and Siri Remote buttons. Next, it covered working with gestures and Storyboards, gestures and code, and low-level events. Finally, it discussed working with Game Controllers.
Comment on Tutorial - How to Send SMS using Java Program (full code sample included) By Emiley J.
Comment Added by : pablo
Comment Added at : 2014-08-03 16:10:55
I am receiving an error as
Error java.lang.NullPointerException
The code is
package p1;
import javax.comm.*;
import p1.SMSClient;
public class test{
private static int sendMessage;
/**
* @param args
*/
public static void main(String[] args) {
try{
SMSClient obj=new SMSClient();
int n=obj.sendMessage("9062322456", "test");
}
catch(Exception e){System.out.println("Error " +e);}
//Error java.lang.NullPointerException
}
}
I would also like to know where the smsc number is in these four files and where can I find it?
Is it the operators sr.
ACL_GET_PERM(3) BSD Library Functions Manual ACL_GET_PERM(3)
NAME
acl_get_perm — test for a permission in an ACL permission set

LIBRARY
Linux Access Control Lists library (libacl, -lacl).

SYNOPSIS
#include <sys/types.h>
#include <acl/libacl.h>

int acl_get_perm(acl_permset_t permset_d, acl_perm_t perm);

DESCRIPTION
The acl_get_perm() function tests if the permission specified by the argument perm (one of ACL_READ, ACL_WRITE, ACL_EXECUTE) is contained in the ACL permission set pointed to by the argument permset_d. Any existing descriptors that refer to permset_d continue to refer to that permission set.

RETURN VALUE
If successful, the acl_get_perm() function returns 1 if the permission specified by perm is contained in the ACL permission set permset_d, and 0 if the permission is not contained in the permission set. Otherwise, the value -1 is returned and the global variable errno is set to indicate the error.

ERRORS
If any of the following conditions occur, the acl_get_perm() function returns -1 and sets errno to the corresponding value:

[EINVAL]  The argument permset_d is not a valid descriptor for a permission set within an ACL entry. The argument perm is not a valid acl_perm_t value.

STANDARDS
This is a non-portable, Linux specific extension to the ACL manipulation functions defined in IEEE Std 1003.1e draft 17 ("POSIX.1e", abandoned).

SEE ALSO
acl_add_perm(3), acl_clear_perms(3), acl_delete_perm(3), acl_get_permset(3), acl_set_permset(3),
In the continuing vein of updating/refreshing my older python posts for Python 3,
I have outlined the changes necessary to test for open TCP ports using Python 3.
My original post showed you how to open a socket connection to a host:port to see if it was active and accepting connections. Luckily, this time around I didn’t have to change much of anything. Turns out the only missing links were my print statements. As I mentioned in my last post, Python3 has turned the print statement into a function.
I also added some slightly better error handling to the example. If a connection fails, you can now see the cause of the failure.
Things to remember:
- You can use an ip or hostname for the host variable value.
- You can test UDP sockets by changing socket.SOCK_STREAM to socket.SOCK_DGRAM.
```python
import socket

# Simply change the host and port values
host = '127.0.0.1'
port = 80

s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
try:
    s.connect((host, port))
    s.shutdown(2)
    print("Success connecting to ")
    print(host, " on port: ", str(port))
except socket.error as e:
    print("Cannot connect to ")
    print(host, " on port: ", str(port))
    print(e)
```
As always, I appreciate any feedback or modifications that would make this example more useful or easy to understand.
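A natural next step (my addition, not part of the original post) is to wrap the check in a reusable helper. The sketch below uses `socket.create_connection`, which applies a timeout and handles cleanup; the function name `is_port_open` is my own:

```python
import socket

def is_port_open(host, port, timeout=2.0):
    """Return True if a TCP connection to host:port succeeds, else False."""
    try:
        # create_connection resolves the host, applies the timeout, and
        # raises OSError (which socket.error aliases in Python 3) on failure.
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False
```

Note that `create_connection` is TCP-only; for UDP you would still fall back to the `SOCK_DGRAM` approach mentioned above.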
Type: Posts; User: scousesheriff
Missing the same line again:
var compiledQuery = document.getElementById('textarea');
This is due to the scope of the compiledQuery variable.
No problem. Glad I could help.
Hi
Is there an item on the .aspx page for the master page?
Not used the app-code directory before, but it could be that you need to include the class with a "using..." statement because it is...
Hi
Perhaps it is something to do with
namespace Fiona_Ross.master_page
should it not just be
I see, the issue is with your hideAllDivs() function.
Are you wanting to hide items by group? So that a maximum of 1 div is shown for each select item?
If so, the best thing to do is make the...
Maybe the following is what you're after:
$(document).ready(function(){
//your code here
});
Where do you setup compiledQuery for use in compiledQuery.value = XXX?
I think you are missing a line such as:
var compiledQuery = document.getElementById('textarea');
Are you wanting to have different drop downs controlling different areas?
If so, the same code is fine, you just use unique ids for the various div elements and the values of the drop downs.
...
Try:
echo "the name is " . (isset($row['name']) ? $row['name'] : "-");
For each PHP file you wish to search the contents of, you could read the file into a string using file_get_contents()
Then you can attempt...
To have the links open in a new window, simply add a target attribute to your anchor (link) tag.
document.write('<a href='+'"'+imagelinks[ry]+'"'+' target="_blank"><img src="'+myimages[ry]+'"...
You dont have any of the co_ fields in your form.
I would also highly recommend using $_POST rather than $_REQUEST
What format do you have the day of the week in? Is it a timestamp? Take a look at, which should help you out.
SS
function a(){
...
}
function b(){
...
a;
...
}
Sorry, I meant in reference to the earlier posts that are talking about deciding to do a redirect half way through a script after other output has been made.
SS
For completness, can I just add that http_redirect() is part of the PECL extension to PHP so might not be installed, and as far as I am aware, it would behave the same way as header("Location:..")...
Yes, they should be the same. It is either your script is in the wrong location or the value in open_basedir arent set up for you correctly. Most certainly it is something that your server host could...
What folder is PHPMyAdmin in?
$sqltotaluniqueweek = "select distinct ip from stats where received > date_sub(curdate(), INTERVAL 7 DAY);";
give that a bash. Please note I havent tested this, so no promises.
It appears to be a problem with your PHP configuration rather than MySQL.
open_basedir restriction in effect means that you can only run PHP files in the folders shown in path(s), which your file...
untested, but:

substr(stristr($uptime, 'load average:'), 14, 4);

might work
would be most appreciated.
Glad i could help!
As for SQL, I would just recommend getting a book like the complete reference to SQL...
sorry, should subsr be substr
Try this:
$year = subsr($row['recieved'],0,4);
$month = subsr($row['recieved'],5,2);
$day = subsr($row['recieved'],8,2);
$target = mktime(0,0,0,$month, $day, $year);
echo date("l", $target); ...
$sql = "select count(distinct IP) as hits, recieved FROM $dbsql TABLE group by recieved";
Then use the PHP functions substr, mktime and date, to format $row['recieved'] as you like.
Hope...
pclose (3p)
PROLOG
This manual page is part of the POSIX Programmer's Manual. The Linux implementation of this interface may differ (consult the corresponding Linux manual page for details of Linux behavior), or the interface may not be implemented on Linux.

NAME
pclose — close a pipe stream to or from a process
SYNOPSIS
#include <stdio.h>
int pclose(FILE * stream);
DESCRIPTION
The pclose() function shall close a stream that was opened by popen(), wait for the command to terminate, and return the termination status of the process that was running the command language interpreter.
RETURN VALUE
Upon successful return, pclose() shall return the termination status of the command language interpreter. Otherwise, pclose() shall return −1 and set errno to indicate the error.
ERRORS
The pclose() function shall fail if:
- ECHILD
- The status of the child process could not be obtained, as described above.
EXAMPLES
None.

APPLICATION USAGE
None.

RATIONALE
One way that pclose() could be implemented is shown in the following example, in which waitpid() is restarted if it is interrupted by a signal:
    int pclose(FILE *stream)
    {
        int stat;
        pid_t pid;

        pid = <pid for process created for stream by popen()>
        (void) fclose(stream);
        while (waitpid(pid, &stat, 0) == -1) {
            if (errno != EINTR) {
                stat = -1;
                break;
            }
        }
        return(stat);
    }
I have considerable experience with Allegro on Windows, including teaching a game development class in college using this marvelous library. But for the life of me, I cannot seem to get the configuration right to get even a simple Allegro program to link with Xcode 8. Various other programmers have had the same discrepancy, and I have attempted all of the suggested fixes, to no avail.
I am attempting a simple project as an Xcode Command Line Tool with C++ chosen as the language. After ensuring my main function header is correct and linking with liballegro.5.2.3.dylib and liballegro_main.5.2.3.dylib, I never fail to get an abort during linking with the error:
"dyld: Symbol not found: __al_mangled_main Referenced from: /usr/local/opt/allegro/lib/liballegro_main.5.2.dylib Expected in: flat namespace in /usr/local/opt/allegro/lib/liballegro_main.5.2.dylib"
I have also attempted the fix proposed by SiegeLord in the topic "al_mangled_main() issue OSX while creating a standalone bundle", and that has no effect on the result.
Any suggestions would be very highly appreciated, as I have a considerable set of projects developed over the past ten years in MSVC that I would love to be able to continue work on in the Mac environment. Thanks so much!
UPDATE
There was one provision I did not see in SiegeLord's post: "Make sure not to link allegro_main addon..." After I removed the reference to liballegro_main.5.2.3.dylib in the Build Phases, Link Binary... settings, the program actually ran! (Except for al_clear_to_color not working, but that is only a challenge, not a roadblock.) Hopefully, someone else can learn from my mistake and solution posted here. Thanks for reading!
BBC micro:bit
Touchy Feely
Introduction
The micro:bit can detect touch input on pins 0, 1 and 2. You have to complete a connection between one of these pins and the GND pin using your body. In simple terms, touch both GND and the pin. You don't need any special components for this one.
You can do this directly on the micro:bit, but it's more fun to do it with something else. For the test program on this page, I used banana connectors and simply touched the pins. You could connect the touch pin to something conductive, like tin foil, play-doh, or even a piece of fruit. As long as you touch GND when you touch the other object, you should be able to trigger an event on the micro:bit. Alligator/crocodile clips are excellent for this kind of thing.
Programming
This is pretty simple. The top-left corner pixel lights if pin0 is touched, the top-right if pin1 is touched.
```python
from microbit import *

while True:
    if pin0.is_touched():
        display.set_pixel(0, 0, 9)
    else:
        display.set_pixel(0, 0, 0)
    if pin1.is_touched():
        display.set_pixel(4, 0, 9)
    else:
        display.set_pixel(4, 0, 0)
    sleep(10)
```
Challenges
- Time to get creative. This feature is fun to explore just to see what you can make into a switch.
- Use some home-made switches to control the movement of a pixel on the matrix.
- Make some big switches, dance mat style. Display an image on the matrix indicating which one to stomp.
- Play the dance game but in a smaller way and without the jumping around. You can make it like Simon Says and include buttons and gestures in the mix.
We have a client that would like us to use CSS3 namespaces. However, everything I'm finding indicates that it is specifically used for styling XML and not HTML. Can anyone validate using it for CSS/HTML or clarify how you would do this? What are the negatives of following this method?
@namespace toto "";
toto|Product {
display:block;
}
toto|Code {
color: black
}
Can anyone validate using it for CSS/HTML or clarify how you would do this?
Major browsers use a default namespace of http://www.w3.org/1999/xhtml (XHTML's namespace), even for HTML, and go about their business. Technically though, since HTML isn't XML, there isn't a point to this unless you consider that XML-based languages like SVG and MathML can be embedded within HTML anyway.
If your client wants to make use of CSS namespaces, they'll probably need to provide you with something that's written in a language that has some sort of namespacing mechanism that is compatible with CSS. It is meaningless to try and apply this knowledge to HTML itself because HTML has no concept of namespaces.
More information can be found in this answer.
To answer your question title, the document language does not necessarily have to be XML-based:
Besides terms introduced by this specification, CSS Namespaces uses the terminology defined in Namespaces in XML 1.0. However, the syntax defined here is not restricted to representing XML element and attribute names and may represent other kinds of namespaces as defined by the host language.
The CSS Namespaces spec borrows terminology from XML Namespaces as a convenience simply because CSS is most commonly applied to HTML and XML documents (and even then, more people use XSL(T) with the latter instead).
In 2001, my favorite programming language was Python. In 2008, my favorite programming language was Scheme. In 2014, my favorite programming language is x64 assembly. For some reason, that progression tends to surprise people. Come on a journey with me.
Python
In this article, we’re going to consider a very simple toy problem: recursively summing up a list of numbers1.
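The definition being exercised below isn't included in this extract of the post; a plain recursive version consistent with the output shown would be:

```python
def sum_list(lst):
    # Base case: the sum of an empty list is 0.
    if not lst:
        return 0
    # Recursive case: head plus the sum of the tail. The addition
    # happens *after* the recursive call returns, so this version
    # is not tail-recursive.
    return lst[0] + sum_list(lst[1:])
```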
>>> sum_list(range(101)) 5050
Young Carl Gauss would be proud.
>>> sum_list(range(1001)) RuntimeError: maximum recursion depth exceeded
Oops.
Young programmers often learn from this type of experience that recursion sucks. (Or, as a modern young programmer might say, it doesn’t scale.) If they Google around a bit, they might find the following “solution”:
>>> import sys >>> sys.setrecursionlimit(1500) >>> sum_list(range(1001)) 500500
If they have a good computer science teacher, though, they'll learn that the real solution is to use something called tail recursion. This is a somewhat mysterious, seemingly arbitrary concept. If the result of your recursive call gets returned immediately, without any intervening expressions, then somehow it "doesn't count" toward the equally arbitrary recursion depth limit. Our example above isn't tail-recursive because we add list[0] to sum_list(list[1:]) before returning the result. In order to make sum_list tail-recursive, we have to add an accumulator variable, which represents the sum of those numbers we've looked at already. We'll call this version sum_sublist, and wrap it in a new sum_list function which calls sum_sublist with the initial accumulator 0 (initially, we haven't looked at any numbers yet, so the sum of them is 0).
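The listing itself is missing from this extract; reconstructed from the description above (sum_sublist and sum_list are the names the text uses):

```python
def sum_sublist(accumulator, lst):
    # accumulator holds the sum of the numbers we've already looked at.
    if not lst:
        return accumulator
    # The recursive call is in tail position: its result is returned
    # directly, with no pending addition afterwards.
    return sum_sublist(accumulator + lst[0], lst[1:])

def sum_list(lst):
    # Initially we haven't looked at any numbers, so the accumulator is 0.
    return sum_sublist(0, lst)
```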
>>> sum_list(range(101)) 5050
So far, so good.
>>> sum_list(range(1001)) RuntimeError: maximum recursion depth exceeded
Wait, what?
On Wednesday, April 22, 2009, Guido van Rossum wrote:

> A side remark about not supporting tail recursion elimination (TRE) immediately sparked several comments about what a pity it is that Python doesn't do this, including links to recent blog entries by others trying to "prove" that TRE can be added to Python easily. So let me defend my position (which is that I don't want TRE in the language). If you want a short answer, it's simply unpythonic. Here's the long answer:

[snipped]

> Third, I don't believe in recursion as the basis of all programming. This is a fundamental belief of certain computer scientists, especially those who love Scheme…

[snipped]

> Still, if someone was determined to add TRE to CPython, they could modify the compiler roughly as follows…
In other words, the only reason this doesn’t work is that Guido van Rossum2 prefers it that way. Guido, I respect your right to your opinion, but the reader and I are switching to Scheme.
Scheme
Here’s a line-by-line translation:
guile> (sum_list (iota 1001)) 500500
Phew! Let’s make sure that we aren’t just getting lucky with a bigger recursion limit:
guile> (sum_list (iota 10000001)) 50000005000000
Well, isn’t that neat? If we go much bigger, it’ll take a long time, but as long as the output fits into memory, we’ll get the right answer3.
Named Let
In our last two versions of sum_list, we defined a helper function (sum_sublist), and the rest of the body of sum_list was just a single invocation of that helper function. This is an inelegant pattern4, which Scheme has a construct to address.

Named let creates a function and invokes it (with the provided initial values) in one step. It is decidedly my favorite control structure of all time. You can have your while loops and your for loops, and your do…until loops too5. I'll take named let any day, because it provides the abstraction barrier of recursion without compromising the conciseness and efficiency of iteration. In case you're not sufficiently impressed, I discuss the delightful properties of using recursion instead of non-recursive loops below.
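For comparison (my addition, not from the post), the ordinary loop version of the same computation keeps the accumulator as explicit mutable state:

```python
def sum_list_iter(lst):
    # The loop that "named let" replaces: mutable state, updated in
    # place on every pass, until the list is exhausted.
    total = 0
    while lst:
        total += lst[0]
        lst = lst[1:]
    return total
```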
Assembly
Named let style translates amazingly naturally into assembly.
> sum_list(from(1,100)) 5050 > sum_list(from(1,10000000)) 50000005000000
(Sadly, my assembler doesn’t come with its own REPL; we’re borrowing the LuaJIT REPL instead6.)
In fact, if I weren’t so comfortable with named let, I doubt I’d be an effective assembly coder, because assembly doesn’t really have any other iteration constructs7. But I don’t miss them. What would they look like, anyway?
In the next installment of Python to Scheme to Assembly, we will look at call-with-current-continuation.
Addendum: C
In this addendum, we're going to look at the assembly for iteration, non-tail recursion, and tail recursion, as emitted by gcc, and get to the bottom of what the difference is anyway.
At the top of each C file here, we have the following:
Iteration
If I were solving this problem in the context of a C program, this is how I would do it.
Here’s the generated assembly, translated to
nasm syntax and commented.
This is almost identical to the assembly that I wrote, except that it clobbers one of its inputs (which is perfectly allowed by the C calling convention8), it uses
xor instead of
mov to load
0 (a solid optimization9), it uses
rep ret (less compact and no benefit on Intel chips), and it shuffles the instructions around such that two
tests are needed (almost certainly not helpful with modern branch prediction and loop detection). I haven’t run benchmarks on this, but my guess is that it would come out about even. (Both versions are eight instructions long.) I also think the shuffling makes this “iterative” version more opaque and difficult to reason about (not least because of the duplicated
test) than my “named let”-style code.
Non-Tail Recursion
gcc -O3 can almost completely convert this version to iteration, so let's look at the generated assembly from gcc -O1 to get a better sense of what it might look like in a language implementation for which the necessary optimizations are too complex to be made automatically.

We can see immediately that some new instructions (push, pop, and call) have been introduced. These are all stack manipulation instructions10. If we carefully pretend to be the CPU running this program, we can see that it pushes the address of every number in the linked list, and then dereferences and adds them up as it pops them from the stack. This is not good; if we wanted our entire data structure to be replicated on the stack, we would have passed it by value11! It's generally the amount of memory set aside for the stack that we've actually run out of in the case of a recursion depth exceeded error.
Tail Recursion
What about translating the tail-recursive version into C? Like Scheme and Python, gcc supports nested function definitions (as a GNU extension to C), so this is no problem:

gcc -O1 gives us (translated and commented as before):

In this mode, the tail call is not being eliminated – although we're no longer pushing rbx, we're still pushing rip to stack with every call, and eventually we'll run out of stack that way. The only way to get around this is to replace each call with jmp: since we're just going to take the return value of the next recursive invocation and then immediately ret back to the previous caller on the stack, there's no point in even inserting our own address on the stack (as call does); we can just set up the next guy to pass the return value straight back to the previous guy, and quietly disappear.

gcc -O3 does this. In fact, somewhat surprisingly, it generates exactly the same assembly, line for line, for this version as for the purely iterative version above. That's "tail call optimization" (TCO) or "tail recursion elimination" (TRE) in its most aggressive form: it literally just gets rid of all calls and recursions and replaces them with an equivalent iteration (complete with duplicate test).

The upshot of all this is that not only does Scheme's "named let" recursion form translate neatly into assembly, it provides – penalty-free – a better abstraction than either iteration (while-loop imitation) or stack-driven recursion, the two options gcc appears to pick from when dealing with various ways to code a list traversal.
Actually, the real point I’m trying to make here is that, unlike in C, I can naturally do named let directly in assembly, and that’s one of the many reasons working in assembly makes me happy.
Appendix: What’s so great about recursion, anyway?
For me, the most important point in favor of a recursive representation of loops is that I find it easier to reason about correctness that way.
Any function we define ought to implement some ideal mathematical function that maps inputs to outputs.[12] If our code truly does implement that ideal function, we say that the code is correct. Generally, we can break down the body of a function as a composition of smaller functions; even in imperative languages, we can think of every statement as pulling in a state of the world, making well-defined changes, and passing the new state of the world into the next statement.[13] At each step, we ask ourselves, “are the outputs of this function going to be what I want them to be?” For loops, though, this gets tricky.
What recursion does for us as aspiring writers of correct functions is automatic translation of the loop verification problem into the much nicer problem of function verification. Intuitively, you can simply assume that all invocations of a recursive function within its own body are going to Do The Right Thing, ensure that the function as a whole Does The Right Thing under that assumption, and then conclude that the function Does The Right Thing in general. If this sounds like circular reasoning, it does;[14] but it turns out to be valid anyway.
There are many ways to justify this procedure formally, all of which are truly mind-bending.[15] But once you’ve justified this procedure once, you never have to do it again (unlike ad-hoc reasoning about loops). I’ve determined that the most elegant way to explain it is by expanding our named let example into a non-recursive function, which just happens to accept as a parameter a correct[16] version of itself.
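The expanded definition is not shown in this copy of the post; the original is in Scheme, but its shape, rendered as a hypothetical Python sketch (the names are taken from the surrounding prose), is:

```python
def sum_sublist_nonrec(f_correct, acc, lst):
    # Non-recursive: it never calls itself, only the function it is handed.
    if not lst:
        return acc
    return f_correct(f_correct, acc + lst[0], lst[1:])

def sum_list(lst):
    # Hand sum_sublist_nonrec a correct version of itself (namely, itself),
    # an accumulator of 0, and the list to be summed.
    return sum_sublist_nonrec(sum_sublist_nonrec, 0, lst)

print(sum_list([1, 2, 3]))  # prints 6
```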
Now, sum_sublist_nonrec is an honest-to-goodness non-recursive function, and we can check that it is correct. Given a correct function f_correct (which takes as inputs a correct version of itself, a number, and a list, and correctly returns the sum of all the elements in the list plus the number), a number, and a list, does sum_sublist_nonrec correctly return the sum of all elements in the list plus the number? Why yes, it does. (Constructing a formal proof tree for this claim is left as an exercise for the self-punishing reader.) Note that since f_correct is assumed to already be correct, the correct version of it is still just f_correct, so we can safely pass it to itself without violating our assumptions or introducing new ones. So, sum_sublist_nonrec is correct.
Now let’s consider the correctness of sum_list. It’s supposed to add up all the numbers in list. What it actually does is to apply the (correct) function sum_sublist_nonrec, passing in a correct version of itself (check! it’s already correct), a number to add the sum of the list to (check! adding zero to the sum of the list won’t change it), and the list (check! that’s what we’re supposed to sum up).
We’ve just proved our program correct! The magic of named let is that it generates this clumsy form with a bunch of f_corrects from a compact and elegant form. In so doing, it lets us get away with much less formal reasoning while still having the confidence that it can be converted into something like what we just slogged through. Rest assured that no matter what you do with named let, no matter how complicated the construct you create, this “assume it does the right thing” technique still applies!
With one tiny caveat. We haven’t proved that the program terminates. If this technique proved termination, then we could just write
and it would be totally correct, no matter what thing we want it to do.
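The definition elided above is presumably nothing more than a function that immediately calls itself; as a hypothetical Python rendering (the post itself uses Scheme, where the name would be do-the-right-thing):

```python
def do_the_right_thing():
    # Allegedly computes whatever you want it to; in reality it never returns.
    return do_the_right_thing()
```

In a Python implementation without tail call optimization, invoking it simply exhausts the stack – the very “recursion depth exceeded” failure discussed at the top of this section.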
Technically, everywhere I’ve said “correct”, what I mean is partially correct: if it terminates, then the output is correct. (Equivalently, it definitely won’t return something incorrect.)
do-the-right-thing is, in fact, partially correct: it never returns at all, so it won’t give you any incorrect outputs!
Termination proofs of recursive functions can usually be handled by structural induction on possible inputs: you establish that it terminates for minimal elements (e.g. the empty list) and that termination for any non-minimal element is dependent only on termination for some set of smaller elements (e.g. the tail of the list). The structure that you need in order to think about termination this way is also much clearer with recursion than with iteration constructs.
If you doubt my ability to productively use assembly for more complicated toy problems, I direct you to my previous blog post.↩
Guido van Rossum is the author of Python, and the “Benevolent Dictator for Life” of its development process.↩
Unlike most language implementations, guile natively supports arbitrarily large integers.↩
Although at least it’s not as inelegant as defining the helper function outside the body of the actual function, thereby polluting the global namespace. Take advantage of nested functions!↩
You can even keep your for-each loops, which are no substitute for map and filter.↩
If you’re curious how this works, click here. But I haven’t settled on an ASM REPL solution I’m happy with – this is just a one-off hack. A more legitimate ASM REPL may be the subject of a future blog post.↩
Except for rep prefixes, which can iterate certain single instructions. I think it’s fair to say those don’t really count.↩
I find calling conventions distasteful in general. The calling convention is like a shadow API (in fact, it’s often referred to as the ABI, for application binary interface) that nobody has any control over (except the people at AMD, Intel, and Microsoft who are in a position to decide on such things) and that applies to every function, every component on every computer everywhere. What if we let people define their ABI as part of their API? Would the world come crashing down? I doubt it. You can already cause quite a bit of trouble by misusing APIs; really, both API and ABI usage ought to be formally verified, and as such ought to have much more room for flexibility than they do now. </soapbox>↩
I would have applied this xor optimization too if I weren’t trying to literally translate Scheme code as an illustration.↩
“The stack” is not merely a region of memory managed by the OS (like “the heap”, its common counterpart). The stack is a hardware-accelerated mechanism deeply embedded in the CPU. There is a hardware register rsp (a.k.a. the stack pointer). A push instruction decrements rsp (usually by 8 at a time, in 64-bit mode, since pointers are expressed as numbers of 8-bit bytes, and 64/8=8) and then stores a value to [rsp]. A pop instruction retrieves a value from [rsp] and then increments rsp. A call instruction pushes the current value of rip (a.k.a. the instruction pointer, or the program counter), and then executes an unconditional jump (jmp). Finally, a ret instruction pops from the stack into rip, returning to wherever the matching call left off.↩
You may point out here that C doesn’t actually let you pass entire linked lists by value. Maybe that’s because it’s a bad idea.↩
If your function cannot be fully specified by an abstract mapping from inputs to outputs, then it is nondeterministic, which is a fancy word for “unpredictable”: there must exist some circumstances under which you cannot predict the behavior of the function, even knowing every input. Intuitively, I’m sure you can see how unpredictable software is a nightmare to debug. Controlling nondeterminism is an active field of computer science research, which is not the subject of this article. However, I hope you are at least convinced that nondeterminism is something you should avoid if possible, and that therefore you should try to design every function in your program as a proper mathematical function.
Note that I’m not talking about “purity” here – it’s fine for “outputs” to include side effects as of function exit, and for “inputs” to include states of the external world as of function entry. What’s important is that the state at function exit of anything the function modifies be uniquely determined by the state at function entry of anything that can affect its execution.↩
Unless we’re dealing with hairy scope issues like hoisting, in which case you should get rid of those first.↩
Pun intended. The sentence within which this footnote is referenced isn’t circular reasoning; it’s a tautology. Therefore, it’s an example of something that sounds like circular reasoning but is valid anyway. Of course, you shouldn’t take the existence of this cute example as evidence that the circular-sounding reasoning preceding it is not, in fact, circular. (That would be a fallacy of inappropriate generalization, which neither is nor sounds like circular reasoning.)↩
Trying to explain it for the purposes of this blog post – while making sure that I’m not missing something – took me over four hours.↩
Technically, I mean “partially correct”. This will be addressed in due time. Be patient, pedantic reader. This argument is hard enough to understand already.↩
http://davidad.github.io/blog/2014/02/28/python-to-scheme-to-assembly-1/
A cross-engine UI automation framework.
Unity3D/
cocos2dx-*/
Android native APP/(Other engines SDK)/...
First you should connect your Android phone, for example, via usb cable and enable the ADB DEBUG MODE.
To retrieve the UI hierarchy of the game, please use our AirtestIDE (an IDE for writing test scripts) or the standalone PocoHierarchyViewer (to view the hierarchy and attributes only, but lightweight)! The following shows how to create a poco instance for Unity3D:
from poco.drivers.unity3d import UnityPoco

poco = UnityPoco()
# for windows
# poco = UnityPoco(('localhost', 5001), unity_editor=True)

ui = poco('...')
ui.click()
from poco.drivers.android.uiautomation import AndroidUiautomationPoco

poco = AndroidUiautomationPoco()
poco.device.wake()
poco(text='Clock').click()
from poco.drivers.netease.internal import NeteasePoco
from airtest.core.api import connect_device

# connect the Android device first
connect_device('Android:///')
# on Windows, do this instead (a regex matching the window title bar; no urlencode needed)
# connect_device('Windows:///?title_re=^.*errors and.*$')
poco = NeteasePoco('g37')  # the project code name on hunter

ui = poco('...')
ui.click()
If you are using multiple devices at the same time, please refer to Poco drivers.
When the selection by node name/node type is ambiguous, or the object cannot be selected directly, the relative selector tries to select the element by hierarchy in the following manner:
# select by direct child/offspring
poco('main_node').child('list_item').offspring('item')
Tree indexing and traversing is performed by default from top to bottom or from left to right. If 'not-yet-traversed' nodes are removed from the screen, an exception is raised. No exception is raised if 'already-traversed' nodes are removed from the screen.
The following code snippet shows how to iterate over a collection of UI objects:
# traverse through every item
items = poco('main_node').child('list_item').offspring('item')
for item in items:
    item.child('icn_item')
This section describes operations related to object proxies.
The anchorPoint of the UI element is attached to the click point by default. When the first argument (the relative click position) is passed to the function, the coordinates of the top-left corner of the bounding box become [0, 0] and the bottom-right corner coordinates become [1, 1]. The click position can be less than 0 or larger than 1; a position outside the interval [0, 1] means the click point is beyond the bounding box.
The following example demonstrates how to use the click function:
poco('bg_mission').click()
poco('bg_mission').click('center')
poco('bg_mission').click([0.5, 0.5])  # equivalent to center
poco('bg_mission').focus([0.5, 0.5]).click()  # equivalent to the expression above
The anchorPoint of the UI element is taken as the origin, and the swipe action is performed in the given direction over the given distance.
The following example shows how to use the swipe function:
joystick = poco('movetouch_panel').child('point_img')
joystick.swipe('up')
joystick.swipe([0.2, -0.2])  # swipe sqrt(0.08) unit distance at a 45 degree angle, up and to the right
joystick.swipe([0.2, -0.2], duration=0.5)
Drag from current UI element to the target UI element.
The following example shows how to use the drag_to function:
poco(text='突破芯片').drag_to(poco(text='岩石司康饼'))
The anchorPoint is taken as the origin when conducting operations related to node coordinates. If a local click position is needed, the focus function can be used. The coordinate system is similar to screen coordinates – the origin is at the top-left corner of the bounding box, with unit length 1, i.e. the coordinates of the center are [0.5, 0.5] and the bottom-right corner is [1, 1].
poco('bg_mission').focus('center').click() # click the center
The focus function can also be used for internal positioning within objects. The following example demonstrates the implementation of a scroll operation in a ScrollView:
scrollView = poco(type='ScollView')
scrollView.focus([0.5, 0.8]).drag_to(scrollView.focus([0.5, 0.2]))
Wait for the target objects to appear on the screen and return the object proxy itself. If the object exists, return immediately.
poco('bg_mission').wait(5).click()   # wait at most 5 seconds; click once the object appears
poco('bg_mission').wait(5).exists()  # wait at most 5 seconds; return whether the object exists
The Poco framework also allows performing operations without any UI element selected. These are called global operations.
poco.click([0.5, 0.5])  # click the center of the screen
poco.long_click([0.5, 0.5], duration=3)
# swipe from A to B
point_a = [0.1, 0.1]
center = [0.5, 0.5]
poco.swipe(point_a, center)

# swipe from A in a given direction
direction = [0.1, 0]
poco.swipe(point_a, direction=direction)
This section describes the Poco framework errors and exceptions.
from poco.exceptions import PocoTargetTimeout

try:
    poco('guide_panel', type='ImageView').wait_for_appearance()
except PocoTargetTimeout:
    # bug here, as the panel is not shown
    raise
from poco.exceptions import PocoNoSuchNodeException

img = poco('guide_panel', type='ImageView')
try:
    if not img.exists():
        img.click()
except PocoNoSuchNodeException:
    # attempting to operate on a nonexistent node raises an exception
    pass
Poco is an automation test framework. For unit testing, please refer to the PocoUnit section. PocoUnit provides a full set of assertion methods and, furthermore, it is also compatible with the unittest framework in the Python standard library.
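To illustrate that compatibility, here is a sketch of a Poco-style check inside an ordinary unittest case. The FakePoco/FakeProxy classes are invented stand-ins for a real driver and device connection, used only so the example is self-contained:

```python
import unittest

class FakeProxy:
    """Stand-in for a Poco UI proxy; a real one would query the device."""
    def __init__(self, name):
        self.name = name

    def exists(self):
        # Pretend that only this one node is currently on screen.
        return self.name == 'bg_mission'

class FakePoco:
    """Stand-in for a Poco driver instance such as UnityPoco()."""
    def __call__(self, name):
        return FakeProxy(name)

class MissionPanelTest(unittest.TestCase):
    def setUp(self):
        self.poco = FakePoco()

    def test_mission_panel_is_on_screen(self):
        self.assertTrue(self.poco('bg_mission').exists())

# Run the case programmatically (equivalent to `python -m unittest`).
suite = unittest.TestLoader().loadTestsFromTestCase(MissionPanelTest)
result = unittest.TextTestRunner(verbosity=0).run(suite)
print(result.wasSuccessful())  # prints True
```

In a real suite, the setUp method would create an actual driver instance instead of the fake one.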
This section describes some basic concepts of Poco. Basic terminology used in the following sections includes the Selector class.
Following images show the UI hierarchy represented in Poco.
https://gitee.com/AirtestProject/Poco/blob/master/README.rst
import java.util.Random;

public class PartA {
    //loop 10 times
    private static void loop2() {
        for(;;) System.out.println("In loop");
        //for(int i = 0; i < 10; i++) System.out.println("In loop");
        System.out.println("Out of loop");
    }

    private static void runLoops() {
        loop2();
        System.out.println();
    }

    public static void main(String[] args) {
        runLoops();
    }
}
this is an infinite loop
Here I get a different error: Unable to execute program; could not compile!
/tmp/java_kmr4s7/PartA.jav
System.out.println("Out of loop");
^
1 error
So is this an infinite loop or something else? And if it is an infinite loop, why? What is the stop condition that is not being met?
thanks
I'm not sure why you're using compilejava.net but it isn't helping you. Why? Because you're not able to distinguish between compilation and execution, which you would be able to do if you were using standard tools (javac+java). If something is able to be killed, it must be running, which means it must have been able to be, and was, compiled.
The differences you're seeing are due to cases where the compiler knows a loop won't ever finish, such that the next statement is unreachable, and cases where it doesn't know, but the loop turns out to be infinite anyway.
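That distinction can be shown with a small (hypothetical) example: a constant-condition loop makes the following statement unreachable at compile time, while a loop whose condition is a plain variable compiles even if it happens to run forever – javac only rejects what it can prove.

```java
public class LoopDemo {
    // while (true) { ... } with no break would make any statement after it
    // an "unreachable statement" compile error; adding a break keeps it legal.
    static int constantCondition() {
        while (true) {
            break;
        }
        return 1;  // reachable, so this compiles
    }

    // The compiler does not track the value of a non-final local variable,
    // so the code after the loop is considered reachable either way.
    static int variableCondition() {
        boolean run = true;
        while (run) {
            run = false;  // terminates at runtime; javac neither knows nor cares
        }
        return 2;
    }

    public static void main(String[] args) {
        System.out.println(constantCondition() + variableCondition());  // prints 3
    }
}
```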
https://www.experts-exchange.com/questions/29078200/What-is-the-code-doing-exactly-and-what-is-the-theoretical-reason-why-this-code-is-not-compiling-part-1.html
Var patch
I ran over my project recently with the VarScoper tool, and it found a few missed vars.
Patch attached.
Dear Aaron Conran,
In version v0.1 of DirectCFM, when I run the code on a ColdFusion 9 server, it returns the following exception:
18:08:23.023 - java.lang.NegativeArraySizeException - in C: \ wamp \ www \ cfdirect \ ServiceBus \ Direct.cfc: line 13
The line in question is the following:
var byteArray = CreateObject("java", "java.lang.reflect.Array").newInstance(byteClass, size);
Could DirectCFM be incompatible with ColdFusion 9?
Sincerely,
Vitor Rodrigues S
Negative Array Size
Vitor,
I'm not sure what is causing your issue, but it isn't CF 9 itself. I wrote the Ext Direct chapter of Learning Ext JS 3.2 using ColdFusion 9 for the sample code, and all worked well. You may want to look deeper, like did you add the custom attributes to your component and methods?--
Steve "Cutter" Blades
Adobe Community Professional - ColdFusion
Adobe Certified Professional - Advanced Macromedia ColdFusion MX 7 Developer
_____________________________
Blog: Cutter's Crossing
Co-Author "Learning Ext JS 3.2"
CutterBi,
Excuse me, the problem was lack of attention. The path was wrong.
It worked perfectly now, but I noticed that not all of Aaron Conran's original files run directly in the browser.
When you run the link, the browser returns:
Code:
Ext.ns('Ext.ss');
Ext.ss.APIDesc = {"url":"servicebus\/Router.cfm","namespace":"Ext.ss","type":"remoting","actions":{"echo":[{"len":1,"name":"send"}],"gridExample":[{"len":1,"name":"addGame"},{"len":2,"name":"updateGame"},{"len":1,"name":"deleteGame"},{"len":4,"name":"getGames"}]}};
And when running the link, I get the following message:
My development environment is as follows:
- 2.40GHz Core I3
- 4GB DDR3
- 1 TB SATA HD
- Windows 7 64 Bit
- Coldfusion 9 64 Bits
- Apache 2.2.21
Is there any problem with this environment, or can Router.cfm really not be accessed directly?
Is this plugin still good?
I am fairly new to ExtJs but want to get a better understanding of Ext.direct for CFM use. I downloaded the zip file and followed the instructions but I'm getting an error. Code:
Uncaught TypeError: Cannot read property 'APIDesc' of undefined
Thanks!
I am using Sencha with CF 9. Works FINE
I tweaked some work that was done with others. Give me your email and I will be happy to zip up a folder with all that you need (including example CFCs). It's Easy-Peasy. Just have to add one file in a resource (assume you are using Architect.)
Bruce
https://www.sencha.com/forum/showthread.php?67983-DirectCFM-A-ColdFusion-Server-side-Stack/page4
I am writing a program for my Java class that takes the length of two sides of a right triangle, and calculates the hypotenuse, as well as the sine, cosine, and tangent of each non-right angle of the right triangle. The calculations for the hypotenuse, sine, cosine, and tangent should be placed in separate methods.
I keep getting two errors when I try to declare my methods, 'class' expected and ')' expected. I cannot figure out what I am doing wrong, any help would be appreciated.
Code Java:
import java.util.*;
import static java.lang.Math.*;

public class RightTriangle
{
    public static void main(String[] args)
    {
        double a, b, c;

        System.out.println("Please enter first length" + " of side of the triangle", a);
        a = console.nextDouble();
        System.out.println("The opposite side of the" + "triangle is ", a);

        System.out.println("Please enter second length" + " of side of the triangle", b);
        b = console.nextDouble();
        System.out.println("The adjacent side of the" + "trianlge is ", b);

        c = ((a*a) + (b*b));
        hypotenuse = c*c;
        System.out.println("The hypotenuse of the triangle" + "is ", hypotenuse);

        sine(double a, double c);
        cosine(double b, double c);
        tangent(double b, double c);
    }

    public static double getSine(double a, double c)
    {
        double opp;
        double hyp;
        opp = a.getNum();
        hyp = c.getNum();
        sine = opp/hyp;
        System.out.println("The sine of the triangle" + "is ", sine);
    }

    public static double getCosine(double b, double c)
    {
        double adj;
        double hyp;
        adj = b.getNum();
        hyp = c.getNum();
        cosine = adj/hyp;
        System.out.println("The cosine of the triangle" + "is ", cosine);
    }

    public static double getTangent(double a, double b)
    {
        double opp;
        double adj;
        opp = a.getNum();
        adj = b.getNum();
        tangent = a/b;
        System.out.println("The tangent of the triangle" + "is ", tangent);
    }
}
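For comparison, here is a hypothetical corrected sketch (not the course's official solution). The "'class' expected" / "')' expected" errors come from writing type names at the call sites – sine(double a, double c) must be a call like getSine(a, c). The other fixes: declare the Scanner, take the square root for the hypotenuse instead of squaring the sum, concatenate values into println's single string argument, and drop the invalid a.getNum() calls on primitives:

```java
import java.util.Scanner;

public class RightTriangleFixed {
    public static double getHypotenuse(double a, double b) {
        return Math.sqrt(a * a + b * b);  // sqrt, not (a*a + b*b) squared
    }

    public static double getSine(double opp, double hyp) {
        return opp / hyp;
    }

    public static double getCosine(double adj, double hyp) {
        return adj / hyp;
    }

    public static double getTangent(double opp, double adj) {
        return opp / adj;
    }

    public static void main(String[] args) {
        Scanner console = new Scanner(System.in);
        // Fall back to a 3-4-5 triangle if no input is piped in.
        System.out.println("Please enter first length of side of the triangle");
        double a = console.hasNextDouble() ? console.nextDouble() : 3;
        System.out.println("Please enter second length of side of the triangle");
        double b = console.hasNextDouble() ? console.nextDouble() : 4;

        double c = getHypotenuse(a, b);
        System.out.println("The hypotenuse of the triangle is " + c);
        System.out.println("The sine of the angle opposite the first side is " + getSine(a, c));
        System.out.println("The cosine of that angle is " + getCosine(b, c));
        System.out.println("The tangent of that angle is " + getTangent(a, b));
    }
}
```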
http://www.javaprogrammingforums.com/%20whats-wrong-my-code/10084-user-defined-methods-printingthethread.html
Mock?
1: [TestMethod]
2: public void GetProductsTest()
3: {.
State versus Behavior Verification:
Using Rhino Mocks
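The explanatory paragraph here is truncated in this copy. The gist: state verification asserts on the values an object ends up holding, while behavior verification asserts that particular calls were made on a collaborator – which is exactly what a mock records. A minimal hand-rolled illustration (in Java purely for brevity; the article's own listings are C#, and these names echo its later Customer/Logger example):

```java
import java.util.ArrayList;
import java.util.List;

public class VerificationDemo {
    // A hand-rolled "mock": it records every call made to it.
    static class RecordingLogger {
        final List<Integer> calls = new ArrayList<>();
        void log(int id) { calls.add(id); }
    }

    static class Customer {
        final int id;
        final RecordingLogger logger;
        Customer(int id, RecordingLogger logger) { this.id = id; this.logger = logger; }
        void save() { logger.log(id); }  // the interaction under test
    }

    public static void main(String[] args) {
        RecordingLogger logger = new RecordingLogger();
        new Customer(27, logger).save();
        // Behavior verification: did save() log the customer's id exactly once?
        System.out.println(logger.calls);  // prints [27]
    }
}
```

A mocking framework such as Rhino Mocks generates this kind of recording object for you and lets you declare the expected calls up front.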
State Verification with Rhino Mocks
1: using System;
2:
3: namespace RhinoMockProject
4: {
1: using System;
2:
3: public interface IProduct
4: {
5: string Name { get; set; }
6:
7: decimal Price { get; set; }
8: }
Imagine that you are adding a new feature to your store. You are creating a method that doubles the price of any product. The method is contained in Listing 8.
Listing 8 – ProductManager.cs
1: using System;
2:
3: namespace RhinoMockProject
4: {
5: public class ProductManager
6: {Interface()
18: {.
Setting Up Return Values from Stub Methods
1: using System;
2: using System.Collections.Generic;
3:
4: public interface IProductRepository
5: {
6: IProduct Get(int ProductId);
7:
8: IEnumerable<IProduct> Select();
9:
10: bool Save(IProduct product);
11: }>();
28:MultipleReturn()
18: {
19: MockRepository mocks = new MockRepository();
20: IProductRepository products = mocks.Stub<IProductRepository>();
21:
22: using (mocks.Record())
23: {
24: SetupResult
25: .For(products.Get(2))
26: .Return(new Product {Name="Beer", Price=12.99m });
27: });
31: }
32:
33: // Test
34: IProduct product1 = products.Get(2);
35: Assert.AreEqual("Beer", product1.Name);
36:
37: IProduct product2 = products.Get(12);
38: Assert.AreEqual("Beer", product2.Name);
39: }
40:
41: }
42: }
Behavior Verification with Rhino Mocks.
Expectations versus Reality:
16: [TestMethod]
17: public void LogTest()
18: {
19: MockRepository mocks = new MockRepository();
20: Logger logger = mocks.CreateMock<Logger>();
21: using (mocks.Record())
22: {
23: logger.Log(27);
24: }
25: using (mocks.Playback())
26: {
27: Customer newCustomer = new Customer(27, logger);
28: newCustomer.Name = "Stephen Walther";
29: newCustomer.Save();
30: }
31: }
32: }
33: }
1: using System;
2:
3: namespace RhinoMockProject
4: {
5: public class Customer
6: {
7: public int Id { get; private set; }
8: public string Name { get; set; }
9: private Logger _logger;
10:
11: public Customer(int Id, Logger logger)
12: {
13: this.Id = Id;
14: _logger = logger;
15: }
16:
17: public void Save()
18: {
19: _logger.Log(this.Id);
20: }
21:
22: }
23: }
The Logger class is contained in Listing 16. This class isn’t really implemented since the Log() method is never really called. The Log() method raises an exception. Since we are mocking the Log() method, this exception is never raised.
Listing 16 – Logger.cs
1: using System;
2:
3: public class Logger
4: {
5: public virtual void Log(int ProductId)
6: {
7: throw new Exception("eeeks!");
8: }
9:
10: }
Strict, Non-Strict, and Partial Mocking
1: using System;
2:
3: namespace RhinoMockProject
4: {
5: public class Rover
6: {
7:
8: public virtual void Bark(int loudness)
9: {
10: // Make loud noise
11: }
12: RoverTest
14: {).
Creating Testable Web Applications
1: using System;
2:
3: namespace RhinoMockProject
4: {
5: public class Logger
6: {)
1: using System;
2: using System.Collections.Generic;
3:
4: namespace RhinoMockProject
5: {
6:
7: public class DataProvider
8: {
9: public static IEnumerable<Product> GetProducts()
10: {
11: Logger.Write("Getting products");
12:
13: // Get products from database
14: return null;
15: }
16:
17:
18: public static bool SaveProduct(Product product)
19: {
20: Logger.Write("Saving new product");
21:
22: // Save product to database
23: return true;
24: }
25:)
1: using System;
2:
3: namespace RhinoMockProject
4: {
5: public class Logger
6: {
7: public virtual void Write(string message)
8: {
9: // Log message to file system
10: }
11: }
12: }
The DataProvider class in Listing 22 has been revised to support Dependency Injection. The new DataProvider class accepts an instance of the Logger class in its constructor.
Listing 22 – DataProvider.cs (Second Iteration)
1: using System;
2: using System.Collections.Generic;
3:
4: namespace RhinoMockProject
5: {
6: public class DataProvider
7: {
8: private Logger _logger;
9:
10: public DataProvider(Logger logger)
11: {
12: _logger = logger;
13: }
14:
15: public virtual IEnumerable<Product> GetProducts()
16: {
17: _logger.Write("Getting products");
18:
19: // Get products from database
20: return null;
21: }
22:
23: public virtual bool SaveProduct(Product product)
24: {
25: _logger.Write("Saving new product");
26:: [TestMethod]
16: public void WriteTest()
17: {
18: MockRepository mocks = new MockRepository();
19: Logger logger = mocks.CreateMock<Logger>();
20: DataProvider dp = new DataProvider(logger);
21: using (mocks.Record())
22: {
23: logger.Write("Saving new product");
24: logger.Write("Getting products");
25: }
26: using (mocks.Playback())
27: {
28: dp.SaveProduct(null);
29: dp.GetProducts();
30: }
31: }
32:.
Summary.
thanks for this, I find it very useful
great primer. finally someone goes beyond testing a method that adds two numbers. thanks!
This is a great tutorial 🙂 Thanks Stephen
Great tutorial. Clear and very thorough. Thanks for doing this.
Good explanations. Really enjoyed it.
Awesome, thanks.
Very good explained!
Best mock intro to date. Great job!
Great tutorial. I found it very useful.
Excellent Introduction to Mock Object Frameworks. Thank you!.
Great post. I had no clue what a mock was before this article. Now I see the benefits. I am confused as to what Mock Object Framework to use though. I like creating static facade classes that sit on top of my data access layer. These facades are always static, because there is no need to create an instance of these. Why create new instances when you don’t have to. Just takes up time and memory. Minimal but it adds up in large systems. These need to be tested and I like the idea of testing with DI. Does it just come down to choice? What are the pros and cons?
I just want to echo the sentiments of the other comments – this is a really great introduction to mocks and rhino mocks!
This is an awesome article!
Great article. It really helped clarify the practicality of implementing TDD principles in code with many external dependecies.
Great Tutorial ‘Product’
Spent a couple of days trying to get my head around this area and this is simply THE best starter tutorial that exists!!
Great summary – one of the best I have seen so far introducing Rhino mocking.
Thx for writing.
TDD made simple. Good job..
Is there something I’m missing? On listing 11, I’m getting a compile error on line 36 on a Count property on the results which is declared as a var.
Error 1: 'System.Collections.Generic.IEnumerable' does not contain a definition for 'Count' and no extension method 'Count' accepting a first argument of type 'System.Collections.Generic.IEnumerable'.
Your post is really good! Exceptional *****
The problem i see here is that using too much of mock seperates you from the real implementation.
I am used to use Code Coverage to see if a fair amount of code is covered.
From my understanding, mocks are use to remove the depencies.
But let's say I do not want to depend on a database, and I want to test whether the Insert method of my service layer is working well by returning the id of the inserted row.
If I set up the expectation that it returns an Id of 1 and use mocks, none of the real methods would be executed and my test will pass. Well, what kind of value does this add to my tests?
There should be a guideline somewhere on when to mock and when not to mock! Otherwise my test will all pass but no real code will be executed!
I would appreciate replies / comments on this!
thanks
I was getting the same error as Peter Betlinski, for the Listing 11 and Listing 12:
Error 103 The type or namespace name ‘Product’ could not be found (are you missing a using directive or an assembly reference?)
Do we need to create the Product class that implements the IProduct interface ?
Thanks,?
Thanks, it was good explanation.
DotNetGuts
Good Information, Its really helped me, Thanks.
I have few more Information on Rihno Mock, it might be of Interest dotnetguts.blogspot.com/…/…l-unit-testing.html
Thx for this article.
I did not understan behaviour testing before reading this and I have read quite a few articles on unit testing.
thanks,this is a good article and tutorial too..
Hi..
great and very useful posts..
I see a lot of code here.. 🙂
Thanks for sharing..
hello…
great topic…thanks for sharing…
now….i can see more clearly on rhino…..thanks a lot.
Hi..
great introduction about Rhino..
i learn it..
Thanks admin..
Hello
Great article…
I am slightly confused about the use of static methods.
You say that static methods cannot be mocked, and yet in Listing 8 there is this line:
public static void DoublePrice(IProduct product)
which listing 9 subsequently calls like this:
ProductManager.DoublePrice(product);
Is the static method in listing 8 allowed because it’s object is not actually being mocked, but rather just called by listing 9 please?
thanks
Stu
Long and really great explaination.. thanks
Thanks for giving the brief about Rhino Mocks.
Thanks for sharing very significant blog.
Very useful, Rhino Mocks this is the first I heard this.
Thanks for the info
Great article, though – thanks!
good post
I tried to mock a call to a Linq to SQL query, but I am struggling.
I think that it is really great post, thanks.
I believe it is promising (currently version 4.0), so I would stick with it. Thanks.
I wish I could learn more. Thanks.
very long and detailed article..
love this..tqvm 🙂
This is a great introduction to the subject of mocking using Moq framework. I believe it is a promising. I have checked the javascript which doesn’t seem to have any error in it.
But, for some reason, i cannot hit the controller action while on onChange of the first dropdown list. I checked the javascript which doesnt seem to have any typos or errors in it.
great primer. finally someone goes beyond testing a method that adds two numbers. thanks!
Thank you for sharing this articles
Long and really great explaination..
Really interesting articles.I enjoyed reading it. I need to read more on this topic..
Easy option to get useful information as well as share good stuff with good ideas and concepts
Really enjoyed reading this blog, please keep posting new info, have bookmarked your page.
Wow you went a long way to post this for us to use. I will try to implement this in one of my programs or something.
Spent a couple of days trying to get my head around this area and this is simply THE best starter tutorial that exists!!
http://stephenwalther.com/archive/2008/03/23/tdd-introduction-to-rhino-mocks
Package Details: python2-odict 1.5.1-1
Dependencies (2)
- python2 (pypy19, python26, stackless-python2)
- python2-distribute (python2-setuptools) (make)
Sources (1)
Latest Comments
fab31 commented on 2014-04-13 13:50
Hum, on my computer I have correct permissions set... if you take a look at the PKGBUILD file there is nothing done about permissions...
samuellittley commented on 2014-04-13 13:21
This PKGBUILD seems to be removing group and world read permissions from odict's egg-info folder.
ls -al /usr/lib/python2.7/site-packages/odict-1.5.1-py2.7.egg-info
drwxr-xr-x 2 root root 4096 10. Feb 02:07 .
drwxr-xr-x 178 root root 20480 13. Apr 02:53 ..
-rw------- 1 root root 1 10. Feb 02:07 dependency_links.txt
-rw------- 1 root root 1 10. Feb 02:07 namespace_packages.txt
-rw------- 1 root root 11955 10. Feb 02:07 PKG-INFO
-rw------- 1 root root 28 10. Feb 02:07 requires.txt
-rw------- 1 root root 368 10. Feb 02:07 SOURCES.txt
-rw------- 1 root root 6 10. Feb 02:07 top_level.txt
-rw------- 1 root root 1 9. Nov 2011 zip-safe
The files in every other egg-info folder on my machine have group and world read permissions, this package when installed prevents building the python2-whoosh package because it can't read the /usr/lib/python2.7/site-packages/odict-1.5.1-py2.7.egg-info/namespace_packages.txt file (although don't ask me why it has to!)
https://aur.archlinux.org/packages/python2-odict/
Part 4: Introduction to XAML
- Posted: Jun 25, 2013 at 5:21 PM
- 122,175 Views
- 62:
My aim is by the end of this lesson you'll have enough knowledge that you can look at the XAML we write in the remainder of this series and be able to take a pretty good guess at what it's doing before I even try to explain it.
In the previous lesson, I made a passing remark about XAML and how it looks similar to HTML. That's no accident. XAML is really just XML, the eXtensible Markup Language. I'll explain that relationship in a moment, but at a higher level, XML looks like HTML insomuch that they share a common ancestry. Whereas HTML is specific to structuring a web page document, XML is more generic. By "generic" I mean that you can use it for any purpose you devise and you can define the names of the elements and attributes to suit your needs. In the past, developers have used XML for things like storing application settings, or using it as a means of transferring data between two systems that were never meant to work together. To use XML, you define a schema, which declares the proper names of elements and their attributes. A schema is like a contract. Everyone agrees—both the producer of the XML and the consumer of the XML abide by that contract in order to communicate with each other. So, a schema is an important part of XML. Keep that in mind … we’ll come back to that in a moment.
XAML is a special usage of XML. Obviously, we see that, at least in this case, XAML has something to do with defining a user interface in our Phone's interface. So in that regard, it feels very much like HTML. But there’s a big difference … XAML is actually used to create instances of classes and set the values of the properties. So, for example, in the previous lesson we defined a Button control in XAML:
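The snippet itself did not survive extraction; a representative Button declaration (the exact attribute values are assumptions based on the surrounding discussion) would be:

```xml
<Button x:Name="myButton"
        Content="Hello World!"
        HorizontalAlignment="Left"
        Background="Red" />
```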
... that line of code is roughly equivalent to this in C#:
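The C# listing was also lost in extraction; a sketch of the equivalent follows. Note that ContentPanel (the layout Grid from the default page template) and the exact property values are assumptions, not taken from the original:

```csharp
// Rough C# equivalent of the one-line XAML Button declaration.
Button myButton = new Button();
myButton.Content = "Hello World!";
myButton.HorizontalAlignment = HorizontalAlignment.Left;
myButton.Background = new SolidColorBrush(Colors.Red);

// Add it to the page's layout container (name assumed from the template).
ContentPanel.Children.Add(myButton);
```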
I've added this C# code in the constructor of my MainPage class. I'll talk about the relationship between the MainPage.xaml and MainPage.xaml.cs in just a moment, but we've already seen how we can define behavior by writing procedural C# code in the MainPage.xaml.cs file. Here, I'm merely writing code that will execute as soon as a new instance of the MainPage class is created, by placing the code in the constructor of that class.
The important takeaway is this: XAML is simply a way to create instances of classes and set those objects' properties in a much more simplified, succinct syntax. What took us 10 lines of C# code we were able to accomplish in just one line of XAML (even if I did separate it onto different lines in my editor, it's still MUCH SHORTER than it would have been had I used C# to create my objects).
Furthermore, using XAML I have this immediate feedback in the Phone preview pane. I can see the impact of my changes instantly. In the case of the procedural C# code I wrote, I would have to run the app each time I wanted to see how my tweaks to the code actually worked.
If you have a keen eye, you might notice the difference in the XAML and C# versions when it comes to the HorizontalAlignment attribute / property … If you tried:
myButton.HorizontalAlignment = “Left”;
… you would get a compilation error. The XAML parser will perform a conversion to turn the string value "Left" into the enumeration value System.Windows.HorizontalAlignment.Left through the use of a Type Converter. A Type Converter is a class that can translate from a string value into a strong type—there are several of these built into the Windows 8 API that we’ll use throughout this series. In this example, the HorizontalAlignment property, when it was developed by Microsoft’s developers, was marked with a special attribute in the source code which signals to the XAML parser to run the string value through a type converter method to try and match the literal string "Left" with the corresponding enumeration value. Elsewhere in the XAML, elements and attributes that belong to a non-default namespace will use a name / colon combination. So, just to be clear ... the x: or phone: is the NAMESPACE, which is associated with a SCHEMA (what we've called a contract). Each element and attribute in the rest of this MainPage.xaml MUST ADHERE TO AT LEAST ONE OF THESE SCHEMAS, otherwise the document is said to be invalid. In other words, if there's an element or attribute expressed in this XAML file that is not defined in one of these namespaces, then the compiler—the program that will parse through our source code and create an executable that will run on the Phone—will not be able to understand how to carry out that particular instruction.
So, in this example:
<Grid x:Name="LayoutRoot" Background="Transparent">
We would expect the element Grid and attribute Background to be part of the default schema corresponding with the default namespace defined at the location in line 3.
However, x:Name is part of the schema corresponding with the x: namespace defined at the location in line 4.
I have a bright idea ... let's try to navigate to the default namespace URI to learn more about what makes up a namespace in the first place. The schema—and therefore, the namespace in our XAML—keeps class names sorted out, kind of like a last name or surname. This URL, or more properly, we should refer to it as a URI (Uniform Resource IDENTIFIER … rather than LOCATOR) is used as a namespace identifier. The XML namespaces are instructions to the various applications that will parse through the XAML … the Windows Runtime XAML parser will be seeking to turn it into executable code, while the Visual Studio and Blend designers will be seeking to turn it into a design-time experience.
So, the second XML Namespace defines a mapping, x: as belonging to this schema:
Therefore, any elements or attribute names that are preceded by the x: prefix means that they adhere to this second schema. … we could spend a lot of time talking about the specifics, but the main takeaway is that this code at the very top of each XAML file you add to your phone project does have a purpose, it defines the rules that your XAML code must follow. You'll almost never need to modify this code, but if you remove it, you could potentially break your application. So, I would encourage you to not fiddle around with it unless you have a good reason to. There are a few additional attributes in lines 10 through 14 ... we may talk about them later in this series.
In Visual Studio’s Solution Explorer, you can see that the XAML files have an arrow, which means that we can expand them to reveal a C# file by the same name; the only difference is that it has a .cs file name extension appended to the end. If you open that file, you'll see that MainPage is declared as a partial class, which hints at the relationship between the two.
Why is this important? This relationship means that the compiler will combine the output of the MainPage.xaml and the MainPage.xaml.cs files into a SINGLE CLASS. This means that they are two parts of a whole. That’s an important concept … that the XAML gets compiled into Intermediate Language just like the C# gets compiled into Intermediate Language, and they are both partial implementations of a single class. This allows you to create an instance of a class in one file then use it in the other file, so to speak. This is what allows me to create an instance of the Button in XAML and then set one of its properties in C#: myButton.Background = new SolidColorBrush(Colors.Red);
... we have to create a new instance of a SolidColorBrush and pass in an enumerated Colors value. This is another great example of a property type converter that we learned about earlier in this lesson. But some property values are simply too complex to be represented as attributes.
When a property is not easily represented as a XAML attribute, it's referred to as a "complex property". To demonstrate this, first I'm going to remove the Background="Red" attribute from the Button, remove "Hello World!" as the default content, and add it back with a Content="Hello World!" attribute:
Next, in the Properties pane, I'm going to set the Background property to a linear gradient brush.
I should now see the following in the Phone Preview pane:
... but more importantly, let's look at the XAML that was generated by the Brush editor:
The XAML required to create that background cannot be easily set in a simple literal string like before when we simply used the word "Red". Instead, notice how the Background property is broken out into its own element:
<Button ... >
<Button.Background>
...
</Button.Background>
</Button>
This is called "property element" syntax and is in the form <Control.Property>.
A good example is the LinearGradientBrush. The term “brush” means that we’re working with an object that represents a color or colors. Think of “brush” like a paint brush … this particular paint brush will create a gradient that is linear—the color will change from top to bottom or left to right. Now, admittedly, you would NEVER want to do what I’m doing in this code example because it goes against the aesthetic of all Windows Phone 8 applications. But, let’s pretend for now that we’re expressing our individuality by using a gradient color as the background color for a Button.
As you can see (below), if we want to define a LinearGradientBrush, we have to supply a lot of information in order to render the brush correctly ... the colors, at what point that color should break into the next color, etc. The LinearGradientBrush has a collection of GradientStop objects which define the colors and their positions in the gradient (i.e., their "Offset").
However, the XAML representing the LinearGradientBrush in the code snippet above is actually SHORTENED automatically by Visual Studio. Here's what it should be:
<Button.Background>
<LinearGradientBrush EndPoint="0.5,1" StartPoint="0.5,0">
<LinearGradientBrush.GradientStops>
<GradientStopCollection>
<GradientStop Color="Red" Offset="1" />
<GradientStop Color="Black" Offset="0" />
</GradientStopCollection>
</LinearGradientBrush.GradientStops>
</LinearGradientBrush>
</Button.Background>
Notice how the <LinearGradientBrush.GradientStops> and <GradientStopCollection> elements are omitted? This is done for conciseness and is made possible by an intelligent XAML parser. First of all, the GradientStops property is the default property for the LinearGradientBrush. Next, GradientStops is of type GradientStopCollection, which implements IList<T>; the T in this case would be of type GradientStop. Given that, it is possible for the XAML parser to deduce that the only thing that could be nested inside the <LinearGradientBrush ... /> is one or more instances of GradientStop, each being implicitly .Add()'ed to the GradientStopCollection.
So the moral of the story is that XAML allows us to create instances of classes declaratively, and we have fine-grained control to design user interface elements. Even so, the XAML parser is intelligent and doesn’t require us to include redundant code—as long as it has enough information to create the object graph correctly. We'll pick things back up in the next lesson.
http://channel9.msdn.com/Series/Windows-Phone-8-Development-for-Absolute-Beginners/Part-4-Introduction-to-XAML?format=smooth
NAME¶
modf, modff, modfl - extract signed integral and fractional values from floating-point number
SYNOPSIS¶
#include <math.h>
double modf(double x, double *iptr);
float modff(float x, float *iptr);
long double modfl(long double x, long double *iptr);
Link with -lm.
modff(), modfl():
|| /* Since glibc 2.19: */ _DEFAULT_SOURCE
|| /* Glibc versions <= 2.19: */ _BSD_SOURCE || _SVID_SOURCE
DESCRIPTION¶
These functions break the argument x into an integral part and a fractional part, each of which has the same sign as x. The integral part is stored in the location pointed to by iptr.
RETURN VALUE¶
These functions return the fractional part of x.
If x is a NaN, a NaN is returned, and *iptr is set to a NaN.
If x is positive infinity (negative infinity), +0 (-0) is returned, and *iptr is set to positive infinity (negative infinity).
COLOPHON¶
This page is part of release 5.10 of the Linux man-pages project. A description of the project, information about reporting bugs, and the latest version of this page, can be found at.
https://manpages.debian.org/unstable/manpages-dev/modff.3.en.html
In this post I would like to show you a minimal example of Token Authentication with Claims and ASP.NET Web API.
The sense behind this is:
- We ask the Server for a token
- We receive the token, store it client side and…
- …send it in the header on every request
The “problem” is that we want to use all the built-in things ASP.NET Web API provides us. Microsoft serves us everything we need. So let's do this :)
First of all we configure our WebAPI to create a “controller” which is taking our requests. Here is the first unusual thing: The controller we create is kind of a virtual controller. We only provide it as a string.
OAuthOptions = new OAuthAuthorizationServerOptions
{
    TokenEndpointPath = new PathString("/Token"),
    Provider = new ApplicationOAuthProvider(),
    AuthorizeEndpointPath = new PathString("/api/Account/ExternalLogin"),
    AccessTokenExpireTimeSpan = TimeSpan.FromDays(14),
    // ONLY FOR DEVELOPING: ALLOW INSECURE HTTP!
    AllowInsecureHttp = true
};

// Enable the application to use bearer tokens to authenticate users
app.UseOAuthBearerTokens(OAuthOptions);
The “TokenEndpointPath” can be treated like a controller without really having one in your project. You will not find such a class there, so stop looking ;-) Other Properties speak for themselves. Well, now we have to take a look at the ApplicationOAuthProvider, we mentioned in the code, because this is a class which consumes the token request and gives us the token in the end.
Lets have a look at this.
public class ApplicationOAuthProvider : OAuthAuthorizationServerProvider
{
    public override async Task GrantResourceOwnerCredentials(OAuthGrantResourceOwnerCredentialsContext context)
    {
        context.OwinContext.Response.Headers.Add("Access-Control-Allow-Origin", new[] { "*" });

        if (context.UserName != context.Password)
        {
            context.SetError("invalid_grant", "The user name or password is incorrect.");
            return;
        }

        var identity = new ClaimsIdentity(context.Options.AuthenticationType);
        identity.AddClaim(new Claim("sub", context.UserName));
        identity.AddClaim(new Claim(ClaimTypes.Role, "user"));
        context.Validated(identity);
    }
}
The first line is a CORS-Line. You can get information about CORS looking here or here.
ATTENTION: I am only comparing username and password here for equality. Normally you would use your own user repository or ASP.NET Identity.
If everything is alright we can create a new identity and add claims to it.
Thats it! For server side.
But how to consume it?
So we have created the endpoint…lets request it with a POST-Request. (I am using Postman here)
So send a POST request to the token endpoint we created. Take a look at the “x-www-form-urlencoded”, which is very important! Also see the “grant_type”, which is set to “password”. Without this you will not reach the token endpoint. Username and password are equal due to the fact that we check them for equality in our OAuthProvider introduced before.
Also check that in the Headers section we set the Content-Type to “application/x-www-form-urlencoded”. Firing this request reaches the endpoint and gives us a valid token:
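The same request can be sketched in JavaScript as well (the host name is a placeholder, and the function name is mine, not from the post):

```javascript
// Build the token request the same way the Postman request above does.
function buildTokenRequest(username, password) {
  var body = new URLSearchParams({
    grant_type: "password",   // required, or the token endpoint won't be reached
    username: username,
    password: password
  }).toString();
  return {
    method: "POST",
    headers: { "Content-Type": "application/x-www-form-urlencoded" },
    body: body
  };
}

// usage (placeholder host):
// fetch("https://localhost:44300/Token", buildTokenRequest("user", "user"))
//   .then(function (r) { return r.json(); })
//   .then(function (t) { console.log(t.access_token); });
```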
There you go. if we now copy this token and send it to a controller we tagged with the [authorize]-Attribute like this:
[Authorize]
public class ValuesController : ApiController
{
    // GET api/<controller>
    public IHttpActionResult Get()
    {
        ClaimsIdentity claimsIdentity = User.Identity as ClaimsIdentity;
        var claims = claimsIdentity.Claims.Select(x => new { type = x.Type, value = x.Value });
        return Ok(claims);
    }
}
Note that we added the “Authorization” header with “Bearer” and the token we received. We can send it and receive the protected resource.
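Attaching the token client-side can be sketched like this (a small helper of my own, not from the post):

```javascript
// Merge a Bearer Authorization header into a fetch() init object.
function withBearer(token, init) {
  init = init || {};
  init.headers = Object.assign({}, init.headers, {
    Authorization: "Bearer " + token
  });
  return init;
}

// usage (placeholder host):
// fetch("https://localhost:44300/api/values", withBearer(token))
//   .then(function (r) { return r.json(); });
```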
Thats it :)
You can also check the roles you added in the claims by mentioning the roles in your Autorize-Attribute:
[Authorize(Roles = "user")] public class ValuesController : ApiController { // GET api/<controller> public IHttpActionResult Get() { ClaimsIdentity claimsIdentity = User.Identity as ClaimsIdentity; var claims = claimsIdentity.Claims.Select(x => new { type = x.Type, value = x.Value }); return Ok(claims); } }
The roles are added via claims in your OAuthProvider.
Hope this helps anybody.
Happy coding :)
Fabian
https://offering.solutions/blog/articles/2015/10/03/token-authentication-with-claims-and-asp.net-webapi/
Lossless Fraps 'FPS1' decoder. More...
#include "avcodec.h"
#include "get_bits.h"
#include "huffman.h"
#include "bytestream.h"
#include "bswapdsp.h"
#include "internal.h"
#include "thread.h"
Go to the source code of this file.
Lossless Fraps 'FPS1' decoder.
Codec algorithm for version 0 is taken from Transcode <>
Version 2 files support by Konstantin Shishkov
Definition in file fraps.c.
Definition at line 42 of file fraps.c.
Referenced by decode_frame().
Definition at line 43 of file fraps.c.
Referenced by fraps2_decode_plane().
Comparator - our nodes should ascend by count but with preserved symbol order.
Definition at line 77 of file fraps.c.
Referenced by fraps2_decode_plane().
decode Fraps v2 packed plane
Definition at line 86 of file fraps.c.
Referenced by decode_frame().
Definition at line 341 of file fraps.c.
https://ffmpeg.org/doxygen/trunk/fraps_8c.html
- Extract particle information
- Maya 2013 Node Editor not working
- MEL - Angle of View
- Python - selecting geoConnector from emitter
- Possible to MEL import skin weights?
- imgcvt method of converting image sequence
- turning off render stats on selection
- Display > Wireframe color > Custom
- callback select change
- How to view or list script nodes?
- Can't figure out the problem
- MG_pathSpine : fast and flexible spine
- Run a script on multiple nodes?
- How to find tip joint name automatically
- [Python] Toggle button BG color on/off
- [Python]Efficient way to group points for delaunay triangulation
- scripted traffic - freelance
- would it be feasible to create a node that parses python ?
- Newbie needs help!
- problem with script due to poly count change
- Storing Mesh Orientation Info
- Selecting random objects in selection MEL script
- Find spec/reflect angle
- Programming Custom Materials
- naming with add_single VRayDisplacement ? (mel)
- custom context menu in specified View
- Query DG Evaluation Order??
- joints to string array
- Excluding objects from being saved with the scene?
- MEL Beginner Problem
- pass textScrollList to proc
- getAttr returning unpredictable data types - Very Puzzled
- Scripting a maze
- How to get vertex color between vertexes?
- QT and visual studio 2010
- MEL newbie - select shaders
- Get coordinate of any point on a polygon
- Instanced Particles Aimed Along Normals?
- Placing a deformer oriented to two joints
- QtDesigner/Eclipse for maya
- tool for selecting multiple objects
- Smooth skin weights flood
- incrementing uValue
- Python: passing arguments from a button to a function
- Python Dynamic popuMenu - menuItem crash
- How could I change emitter's type in my UI??
- Simple If State - Noob
- Rotate a Locator
- Get variable defined in a function out of the function
- How to delete a RunTimeCommand?
- How can I parent multiple objects by using a loop?
- Applying a Texture in Python
- selection list for edges is buggered
- Dealing with Namespaces & Xform giving incorrect values
- How to deliberately break the renderview?
- error adding expression through script editor
- Replace a view port in Maya
- Spawning curves at center of Polygon
- Can update Particle during mel runnig?
- Random Smooth Transition Opacity PP
- Python issues
- Maya Graph Editor in PyQt Gui
- Control HIK Effectors using MEL
- Shelf icon label Problem .
- ntCopy - Copy SOP for Maya
- Force Selection List instead of Selection Range
- importing python
- return value
- wxPython in Maya
- Odd "while loop" issue
- Procedural UI element creation: v2
- polyEditUV eats all memory
- Maya File somehow losing it's Name
- Need some help with shader exporting with mayapy
- Control HIK Effectors using MEL
- get references namespaces
- node names in callbacks
- listConnections returns NONE?????
- Trax editor clip
- printing something in note pad like UI
- Python: querying an optionMenu when a button is pressed
- Python match translation problem
- cotrolling edit points with a controller
- commandPort result size
- MFnMesh gets slow
- python commandPort in __main__ with valid result
- Creating a lambert for each new object.
- Change attributes of objects inside/intersecting with a mesh
- Red9 MetaData Api - Vimeo Part1!
- De-Select percentage of objects in list?
- Sourcing and envoking script over the server...again
- Center pivot, but preserve Y and Z
- Cpu usage low when running scripts
- playblast specific window
- yellow cursor script?
- Freezing transforms doesn't freeze pivot?
- Reorder buttons by drag and drop
- Maya's binary format
- lighterpro
- converting network path to local for playblast playback
- Python Workshop
- Batch script python
- Maya API - Read texture attribute in deform() method?
- explicitly name a wrap deformer
- OCEAN spray on waves and collision
- fileDialog2 2D window coordinates
- Group by type
- Emiting particles from geo on the frame just before its visibilty is turned off
- Matching component selection pivot to object pivot
- projectCurve on hemisphere result: segmented curve
- Help with a little python script
- Having trouble making Python communicate with MEL
- running a script on startup
- xyz coordinate updates during iteration
- changing/deleting sub menuItems command
- MEL Rendering script - global vatiables issue
- Building Qt in Maya plug-in
- Get constraint child node? [python]
- Selecting a Vertex using python
- MDGModifier return values
- help: query current character set ?
- Plugin Id's
- Intersecting Curves
- AETemplates embedded in a plugin.
- Python - Query Rotation Order
- Get CV normals.
- Finding available codecs with mel
- Assistance with RGBPP animation
- Bake animation mel
- Python - Maya and After Effects Interaction?
- Extract velocity field from fluidShape?
- MEL: String syntax errors giving me a headache
- texMoveContext bugged?
- Shorten expression
- setFaceVertexColor question (Maya Python API 2.0)
- Having trouble with the polySplitRing command
- Custom Node Output Behavior
- Dispaly message when hover mouse
- Maya possibilities
- select objects in selected animlayer mel
- setAttr function style
- MayaCloudCache Problem crashes!
- Trapping window-close events
- Python - Fractal Terrain Generator
- cmds.button (python, callbacks, maya 2013) , takes no arguments, one given
- python - xform command
- Dockable Areas, UI
- [Python] How to pass array instead obj by obj list?
- Python - Issues passing along dictionaries to a function.
- please help maya dynamic expression
- How to convert a single curve to dynamic hair curve using mel/python and get its name
- Terrain Generator
- outliner isolate node types
- Gravity in Python???HELP PLEASE
- Error: An array expression element must be a scalar value
- Polygon from intersected curves
- scriptJob attached to a file
- Raytrace with Mel/Python ?
- Qt Designer GUI.... .ui > .py ???? PLEASE HELP ASAP!
- Viewport 2.0 + Custom Locator + Windows
- MEL Script
- instancer.inputpoints
- UI in Python, New window everytime I run script ??? :(
- MEL script running too fast, causing problems
- [PyMEL] Callbacks and functional arguments
- Extracting text file data to position locators
- Orient Object to Normals Average
- External Python-Based UI accessing an open Maya Session
- Algorithm for Jiggling Object?
- openEXR 2.0 ?
- Get position of particles in pymel
- Script for mapping 2d ramp to particles in 3d
- [C++] good comprehension of dirty propagation
- GUI undo problem
- undo
- Help with Python For loop?
- Python Assign list values to an object's translateXYZ?
- Mel passing textFieldGrp text to proc
- User defined colors in UI
- Running Maya from the command line
- QT UI not playing nice with MEL
- How to get a list of the index numbers of edges that make a face
- Need plans for a nuclear bomb! (MEL explode all groups)
- IK elbow rotation value....
- Need basic python help :P
- textScrollList question
- Texture Path Genie 1.0.1 (free script)
- Convert Material type
- Get a list of newly created vertices
- Delete all keyframes from objects?
- python for maya training?
- Detect keyframe if it is a decimal number.
- Is it worth properly finishing this?
- newer version of metaball
- [MEL] apply polyColorPerVertex w/o adding input ?
- Light Rig Python script for Maya 2013
- Python - UI building
- get all maya nodes with new scene
- [Script] Converting curve deformation in world space to local space
- MEL: Script prints values, no print command.
- Set maya Hardware 2.0 render settings
- [Mel] Selecting reference editor nodes
- Check for 2 Objects Touching?
- Maya : Unable to Activate Viewport 2.0 by script.
- Python Question
- PyMel autocomplete not complete?
- help with script to create a shading network for mia x
- Need help with grouping objects
- Get position relative to locator coordinate system
- Online image search integration into maya
- Return Selection orderd by edge loop (Maya API)
- Maya python api tutorial : dealing with "pointers"
- Break Connections with translation and rotation
- Need help with basic one click OBJ exporter to project folder
- Get length of object's animation in frames (not number of total keyframes)
- local rotation axis - py
- setAttr stops my for loop?
- What IDE do you use for PyMEL development?
- Automate 'set keyframe' code in MEL?
- Help with polySeparate command [Python]
- A noob's Python/PyMel/Eclipse thread
- Toggle w, e, r via MEL-Script?
- Grey out options in UI
- Qt in Maya
- MFnMesh deformer with CUDA
- RigHelper Beta
- Can anyone help me with some programming? I'm new to this
- Remove Passive collisions
- make chain script
- Python loop question
- Android Maya Plugin
- Getting joint animation data for skeletal animation(Pymel)
- Proximity detection over Timeline [Python]
- Writing a Hypershade utility node? How to get started?
- Is multithreading possible?
- Compound attr names break in Attribute Editor
- Help with importing .mb relative path
- Remote debugging pyMel via Eclipse
- scriptjob triggerd by position change
- Drawing maya particles as something like a + or *
- Question about Maya's Lasso Select Tool
- modify hardware render file output
- How to draw EP cuve along each joint in hierarchy
- pymel question (setting attrs)
- Disable one UI button with another [Python]
- Storing User Data in Maya Node
- finding the root joint with child joint selected
- checker size tool for maya 2012?
- maya UI question, tab layout problem
- How to get Mental Ray render information in a custom Maya api node
http://forums.cgsociety.org/archive/index.php?f-89-p-48.html
This document contains rules useful when you are porting a KDE library to win32. Most of these rules are also valid for porting external libraries code, like application's libraries and even application's private code.
Look for '/' and "/" and change every piece of code like:
if (path[0]=='/')
or:
if (path.startsWith('/'))
with:
if (!QDir::isRelativePath(path))
(or QDir::isRelativePath(path) if the original test was path[0] != '/').
Note that qglobal.h is C++-only, so instead use
#ifdef _WIN32 .... #endif.
MSVC 6's template support was heavily broken. Since then the situation has improved a lot. However, some specific cases do exist, and these are explained in this section.
This won't work with a forward declaration:
class QColor; #include <QList> QList<QColor> foo; // error
You need to include full QColor declaration too:
#include <QColor> #include <QList> QList<QColor> foo; // ok
So this is different when compared to GCC.
Windows keeps icon data within .exe binaries. For KDE applications, use CMake's KDE4_ADD_APP_ICON(appsources pattern) macro to automatically add .png images to the generated .exe files. More information on KDE4_ADD_APP_ICON() macro...
https://techbase.kde.org/index.php?title=Projects/KDE_on_Windows/Porting_Guidelines&diff=67672&oldid=59538
I am currently using the revealing module pattern in a project and I have a page that needs some JS code that will be quite large. I'd like to add this page's JS as a module, but it will only work if it can access the main module's private functions. Is this possible?
If you're attempting to access private modules, that is almost certainly a bad-code smell that indicates that something else should be done instead, for which well-known patterns and solutions are all but guaranteed to exist.
Can you give us further information on what you're wanting to achieve, and what you're considering on doing?
Thanks for the reply. Essentially I have a main script that is using the revealing module pattern. I am now creating a new page that has some heavy JS on it, which I'd rather have in a separate file and only include on this page. The problem I have is that the main JS file has some private functions that I'd like to use in the second file. I was hoping there would be a way to write it so the second file could be added to the main one.
So if the first file has a namespace of mod, then in the second file I can either use mod.second, for example, or still use mod but add new functions to it.
The main file has functions for Ajax calls and loaders and generic items I need to use in the second file. Hope that helps.
It's normally more beneficial to keep modules separate, so from what I've heard so far, it seems that it would be a good idea to either extract those private functions, or to provide a public interface for them so that they can be used.
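That advice can be sketched with the "loose augmentation" variant of the module pattern, where each file receives the shared namespace, adds to it, and returns it. All names here (mod, ajaxGet, second) are placeholders, not from the actual project:

```javascript
// file 1: main module — deliberately exposes the helper other files need
var mod = (function (ns) {
  function ajaxGet(url, cb) {          // formerly "private"; now on the namespace
    cb(null, "response from " + url);  // stand-in for a real XMLHttpRequest call
  }
  ns.ajaxGet = ajaxGet;
  return ns;
}(mod || {}));

// file 2: loaded only on the heavy page — augments the same namespace
var mod = (function (ns) {
  ns.second = {
    init: function (done) {
      ns.ajaxGet("/data", done);       // reuses the main module's helper
    }
  };
  return ns;
}(mod || {}));
```

The trade-off: anything file 2 needs must be attached to the shared namespace; functions that stay closed over inside file 1's IIFE remain genuinely unreachable.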
I found a SitePoint article where they use these functions, but I'm not sure if this is good or not:
//protected utility functions
utils =
{
//add properties to an object
extend : function(root, props)
{
for(var key in props)
{
if(props.hasOwnProperty(key))
{
root[key] = props[key];
}
}
return root;
},
//copy an object property then delete the original
privatise : function(root, prop)
{
var data = root[prop];
try { delete root[prop]; }
catch(ex) { root[prop] = null; }
return data;
}
};
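A quick usage sketch of those two helpers (the utils object is repeated here so the example is self-contained):

```javascript
var utils = {
  // add properties to an object
  extend: function (root, props) {
    for (var key in props) {
      if (props.hasOwnProperty(key)) { root[key] = props[key]; }
    }
    return root;
  },
  // copy an object property then delete the original
  privatise: function (root, prop) {
    var data = root[prop];
    try { delete root[prop]; }
    catch (ex) { root[prop] = null; } // old IE can't delete some properties
    return data;
  }
};

var obj = utils.extend({ a: 1 }, { b: 2 });  // obj is { a: 1, b: 2 }
var b = utils.privatise(obj, "b");           // b === 2, and obj no longer has "b"
```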
It is not a good idea to chuck your code in to another module, just because you want to use some private parts from within there. There are better solutions to the problem.
Can you give us more information about your specific situation?
If you're after a nice modular approach that gives you a dependency management system as well, I would recommend using RequireJS. As an added advantage, it loads your scripts asynchronously.
<script src="//cdnjs.cloudflare.com/ajax/libs/require.js/2.1.6/require.min.js" data-main="scripts/main"></script>
Note that the <script> has a data-main attribute that should point to your RequireJS configuration/bootstrap.
For the purposes of this demo we'll assume we have the following file structure:
/scripts/main.js
/scripts/app/app.js
/scripts/app/some-module.js
/scripts/lib/util.js
/index.html
We only need a really basic configuration file called "main.js":
// Define the baseUrl if your scripts live in a different directory than the
// "main.js" that you indicated as the "main" script on the RequireJS script tag
require.config({
    baseUrl: "scripts"
});
// Define this module with its only dependency being the app.js module.
define(["app/app"], function(App){
"use strict";
// we can do whatever here - I expose the
// main App to the window because it helps with debugging
window.App = App;
});
A module is defined as follows:
define(["some/dependency/module1", "another/module2"], function(Module1, Module2) {
// RequireJS expects that an object of some sort is returned.
// This can be an object literal, an instantiated class, etc.
return {
something: "someValue"
}
});
I've set up a demo on (which includes a link to a ZIP file to download that example).
Hope this helps
Thanks AussieJohn, I have used RequireJS before, but that doesn't help with my current problem, which is using the module pattern so that two modules can use the main module's methods. Thank you for your detailed help.
Paul: I currently have one script.js file that, using the module pattern, has many private methods and only 2 public methods. On a particular page I need to write bespoke code that will be quite large but is only for this one page, so I really do not want to load it on any other page. I want to be able to use 2 or 3 of the private methods on this page. I don't want to make them public, and I'd rather not have to copy them over, as maintaining them will be a pain. One of the main private methods is for Ajax calls, and some of the others I'd like to be able to use are for modal windows.
Those sound like perfect methods to expose publicly so all other modules can see them. As Paul suggested earlier you could either put them in their own library/utility module or create a public accessor for them on their current module.
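One common way to do exactly what the original poster describes is "loose augmentation": each file wraps itself in an IIFE that receives the namespace (or creates it) and adds to it. The sketch below is illustrative only; the names `mod`, `ajax`, `buildQuery` and `loadWidget` are placeholders, not code from the thread:

```javascript
// --- main.js (loaded on every page) ---
var mod = (function (ns) {
  // stays private to this file
  function buildQuery(params) {
    return Object.keys(params)
      .map(function (k) { return k + "=" + encodeURIComponent(params[k]); })
      .join("&");
  }

  // exposed publicly so other files (and sub-modules) can reuse it;
  // a real implementation would fire an XHR here instead of returning a string
  ns.ajax = function (url, params) {
    return url + "?" + buildQuery(params);
  };

  return ns;
}(mod || {}));

// --- page-specific.js (only loaded on the heavy page) ---
var mod = (function (ns) {
  ns.second = {
    loadWidget: function () {
      // reuses the shared helper via the public interface
      return ns.ajax("/widget", { id: 7 });
    }
  };
  return ns;
}(mod || {}));
```

Because each IIFE is passed `mod || {}`, the files can be loaded independently, and `mod.second` can call anything `main.js` chose to expose.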
Is it not better to have your Ajax function private?
I'm sure from an (ideal) application design point of view there might be some benefits to having those methods private, however I tend to go with whichever approach is more pragmatic. Privacy is a question of "do I want to expose or hide this property?" - you'll need to weigh up the benefits of one over the other. In your case it makes perfect sense to expose these private methods publicly so your entire application can take advantage of them. It is of course also useful to create appropriate abstractions of those methods so they can be reused across your app with ease.
If your application design is making your job hard, change the design
thanks for the advice
In this article by Pascal Bugnion, the author of Scala for Data Science, we will look at the ways of parallelizing computation and data processing over a single computer. Virtually, all new computers have more than one processing unit, and distributing a calculation over these cores can be an effective way of hastening medium-sized calculations.
(For more resources related to this topic, see here.)
Data science often involves the processing of medium or large amounts of data. As the previously exponential growth in CPU speed has stalled while the amount of data continues to increase, leveraging computers effectively must entail parallel computation.
Parallelizing calculations over a single chip is suitable for calculations involving gigabytes or a few terabytes of data. For larger data flows, we must resort to distributing the computation over several computers in parallel.
Parallel collections
Parallel collections offer an extremely easy way to parallelize independent tasks. The reader, being familiar with Scala, will know that many tasks can be phrased as operations on collections, such as map, reduce, filter, or groupBy. Parallel collections are an implementation of the Scala collections that perform these operations in parallel over several threads.
Let's start with an example. We want to calculate the frequency of an occurrence of each letter in a sentence:
scala> val sentence = "The quick brown fox jumped over the lazy dog"
scala> val characters = sentence.toCharArray.toVector
Vector[Char] = Vector(T, h, e, , q, u, i, c, k, ...)
Note that we converted characters to a Scala Vector rather than keeping it as an array so as to guarantee immutability. All the examples in this section would work equally well with an array but using Vector is a good practice when we do not explicitly need a mutable iterable.
Let's convert the characters to a parallel vector, ParVector. To do this, we will use the following par method:
scala> val charactersPar = characters.par
ParVector[Char] = ParVector(T, h, e, , q, u, i, c, k, , ...)
The ParVector objects support the same operations as a regular vector but they perform the operations in parallel over several threads.
Let's start by filtering out the spaces in charactersPar:
scala> val lettersPar = charactersPar.filter { _ != ' ' }
ParVector[Char] = ParVector(T, h, e, q, u, i, c, k, ...)
Notice how Scala hides the execution details. The interface and behavior of a parallel vector is identical to its serial counterpart, save for a few details that we will explore in the next section.
Let's use the toLower function to make all the letters lowercase in our sentence:
scala> val lowerLettersPar = lettersPar.map { _.toLower }
ParVector[Char] = ParVector(t, h, e, q, u, i, c, k, ...)
To find the frequency of the occurrence of each letter, we will use the groupBy method to group the characters in vectors containing all the occurrences of that character:
scala> val intermediateMap = lowerLettersPar.groupBy(identity)
ParMap[Char,ParVector[Char]] = ParMap(e -> ParVector(e, e, e, e), ...)
Note how the groupBy method has created ParMap, the parallel equivalent of an immutable map. To get the number of the occurrences of each letter, we will do a mapValues call on intermediateMap, replacing each vector by its length:
scala> val occurenceNumber = intermediateMap.mapValues { _.length }
ParMap[Char,Int] = ParMap(e -> 4, x -> 1, n -> 1, j -> 1, ...)
Parallel collections make it very easy to parallelize some operation pipelines: all we had to do was call .par on the characters vector. All the subsequent operations were parallelized. This makes switching from a serial to a parallel implementation very easy.
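For reference, the steps above can be chained into a single expression. This condensed form is my addition, not from the original article:

```scala
val occurrences = "The quick brown fox jumped over the lazy dog"
  .toCharArray.toVector.par   // parallel vector of characters
  .filter(_ != ' ')           // drop the spaces
  .map(_.toLower)             // normalise case
  .groupBy(identity)          // Char -> ParVector of occurrences
  .mapValues(_.length)        // Char -> count
```

Every stage after `.par` runs over the thread pool, so the whole pipeline is parallelized by that single call.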
Limitations of parallel collections
A part of the power and appeal of parallel collections is that they present the same interface as their serial counterparts: they have a map method, a foreach method, a filter method, and so on. By and large, these methods work in the same way in parallel collections as they do in serial. There are, however, some notable caveats. The most important one has to do with side effects. If an operation in a parallel collection has a side effect, this may result in a race condition: a situation where the final result depends on the order in which the threads perform their operations.
Side effects in collections arise most commonly when we update a variable defined outside of the collection. To give a trivial example of an unexpected behavior, let's define a count variable and increment it a thousand times using a range:
scala> var count = 0
scala> (0 until 1000).par.foreach { i => count += 1 }
scala> count
Int = 874 // not 1000!
What happened here? The function passed to foreach has a side effect: it increments count, a variable outside the scope of the function. This is a problem because the += operator is a sequence of the following two operations:
- Retrieve the value of count and add 1 to it.
- Assign the result back to count.
Let's imagine that the foreach loop has been parallelized over two threads. Thread A might read a count of 832 and add 1 to it to give 833. Before it has time to reassign 833 to count, thread B reads count, still at 832, and adds 1 to give 833. Thread A then assigns 833 to count. Thread B then assigns 833 to count. We've run through two updates but only incremented the count by 1. The problem arises because += can be separated into two instructions, which leaves room for the threads to interweave their operations: the operation is said to be nonatomic.

This is the anatomy of a race condition: both threads, A and B, try to update count concurrently, and one of the updates is overwritten. The final value of count is 833 instead of 834.
To give a somewhat more realistic example, let's look back at the example described in the previous section. To count the number of the occurrences of each letter, we could, conceivably, define a mutable Char -> Int hash map outside of the loop and increment the values as they arise, as follows:
scala> val occurenceNumber = scala.collection.mutable.Map.empty[Char, Int]
scala> lowerLettersPar.foreach {
     |   c => occurenceNumber(c) = occurenceNumber.getOrElse(c, 0) + 1
     | }
scala> occurenceNumber('e') // Should be 4
Int = 2
Again, the discrepancy occurs because of the nonatomicity of the operations in the foreach loop.
In general, it is good practice to avoid side effects when using collections. They make the code harder to understand and preclude switching from serial to parallel collections.
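When a shared counter genuinely is needed, one possible fix (not discussed in the article) is an atomic primitive from `java.util.concurrent.atomic`, which performs the read-modify-write as a single indivisible step:

```scala
import java.util.concurrent.atomic.AtomicInteger

val count = new AtomicInteger(0)
// incrementAndGet is one atomic operation, so parallel threads can no
// longer interleave between the read and the write
(0 until 1000).par.foreach { _ => count.incrementAndGet() }
println(count.get)  // 1000
```

This restores correctness, but the pure, side-effect-free style shown earlier (groupBy plus mapValues) remains the more idiomatic choice.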
Another limitation occurs in the reduction (or folding) operations. The function used to combine the items together must be associative. For instance:
scala> (0 until 1000).par.reduce(_ - _) // should be -499500
Int = 63620
As this seems like a rare use case, we will not dwell on it.
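By contrast, an associative operation such as addition gives the same answer however the threads partition the work:

```scala
// Addition is associative, so the parallel reduction is deterministic
(0 until 1000).par.reduce(_ + _)  // always 499500, same as the serial version
```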
Error handling
In single-threaded programs, exception handling is relatively straightforward: if an exception occurs, the function can either handle it or escalate it. This is not nearly as obvious when parallelism is introduced; a single thread might fail, but the others might return successfully.
A parallel collection will throw an exception if any of its threads fail, as follows:
scala> Vector(2, 1, 3).par.map {
     |   case(1) => throw new Exception("error")
     |   case(x) => x
     | }
java.lang.Exception: error
...
There are cases when this isn't the behavior that we want. For instance, we might be using a parallel collection to retrieve a large number of web pages in parallel. We might not mind if a few of the pages cannot be fetched.
Scala's Try type was designed for this purpose. It is similar to Option in that it is a one element container:
scala> scala.util.Try(2)
Try[Int] = Success(2)
Unlike the Option type, which indicates whether an expression has a useful value, the Try type indicates whether an expression can execute without throwing an exception. Try(expression) will have a Success(expression) value if an expression evaluates without throwing an exception. If an exception occurs, it will have a Failure(exception) value.
This will make more sense with an example. Let's start by importing scala.util to avoid unnecessary typing:
scala> import scala.util._
To see the Try type in action, let's use it to wrap a call to Source.fromURL. Source.fromURL fetches a web page, opening a stream to the page's content if it executes successfully. If it fails, it throws an error:
scala> import scala.io.{Source, BufferedSource}
scala> val html = Source.fromURL("")
scala.io.BufferedSource = non-empty iterator
scala> val html = Source.fromURL("garbage")
java.net.MalformedURLException: no protocol: garbage
...
Instead of letting the expression propagate and crash the rest of our code, we will wrap the call to Source.fromURL in Try:
scala> def fetchURL(url:String):Try[BufferedSource] =
     |   Try(Source.fromURL(url))
scala> fetchURL("")
Try[BufferedSource] = Success(non-empty iterator)
scala> fetchURL("garbage")
Try[BufferedSource] = Failure(java.net.MalformedURLException: no protocol: garbage)
All we need to do to retrieve URLs in a fault tolerant manner is to map fetchURL over a vector of URLs. If this vector is parallel, URLs will be fetched concurrently:
scala> val URLs = Vector("",
     |   "",
     |   "not-a-url"
     | )
scala> val pages = URLs.par.map(fetchURL)
ParVector[Try[BufferedSource]] = ParVector(
  Success(non-empty iterator),
  Success(non-empty iterator),
  Failure(java.net.MalformedURLException: no protocol: not-a-url))
We can then use a collect statement to act on the pages that we could fetch successfully:
scala> pages.collect { case(Success(it)) => it.size }
ParVector[Int] = ParVector(17880, 102968)
By making good use of Scala's built-in Try classes and parallel collections, we built a fault tolerant, multithreaded URL retriever in a few lines of code.
The Try type versus the try/catch statements:
The programmers with imperative or object-oriented backgrounds will be more familiar with the try/catch method to handle exceptions. We could have accomplished a similar functionality here by wrapping the code to fetch URLs in a try block, returning null if the call raises an exception.
However, besides being more verbose, returning null is less satisfactory: we lose all the information about the exception and null is less expressive than Failure(exception). Furthermore, returning a Try[T] type forces the caller to consider the possibility that the function might fail. By contrast, just returning T and coding failure with a null value allows the caller to ignore failure, raising the possibility of a confusing NullPointerException being thrown at a completely different point in the program.
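As a side note not covered in the article, Try also offers combinators such as map, recover and getOrElse, which let the caller handle failure declaratively rather than with nested try/catch blocks:

```scala
import scala.util.Try

val parsed = Try("123".toInt)  // Success(123)
val broken = Try("abc".toInt)  // Failure(java.lang.NumberFormatException: ...)

broken.getOrElse(0)                                    // 0: fall back to a default
broken.recover { case _: NumberFormatException => -1 } // Success(-1)
parsed.map(_ * 2)                                      // Success(246)
```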
Setting the parallelism level
So far, we considered parallel collections as black boxes: add par to a normal collection and all the operations are performed in parallel. Often, we will want more control over how the tasks are executed.
Internally, parallel collections work by distributing an operation over multiple threads. As the threads share the memory, parallel collections do not need to copy any data. Changing the number of threads available to the parallel collection will change the number of CPUs that are used to perform the tasks.
Parallel collections have a tasksupport attribute that controls the task execution, as follows:
scala> val parRange = (0 to 100).par
scala> parRange.tasksupport
TaskSupport = scala.collection.parallel.ExecutionContextTaskSupport@311a0b3e
scala> parRange.tasksupport.parallelismLevel
Int: 8 // Number of threads to be used
The task support object of a collection is an execution context and an abstraction capable of executing a Scala expression in parallel.
By default, the execution context in Scala 2.11 is a work-stealing thread pool. When a parallel collection submits the tasks, the context allocates the tasks to its threads. If a thread finds that it has finished its queued tasks, it will try and steal the outstanding tasks from the other threads. The default execution context maintains a thread pool with a number of threads that are equal to the number of CPUs.
The number of threads over which the parallel collection distributes the work can be changed by changing the task support. For instance, parallelizing the operations performed by a range over four threads can be done in the following way:
scala> val parRange = (0 to 1000).par
scala> parRange.tasksupport = new ForkJoinTaskSupport(
     |   new scala.concurrent.forkjoin.ForkJoinPool(4)
     | )
parRange.tasksupport: scala.collection.parallel.TaskSupport = scala.collection.parallel.ForkJoinTaskSupport@6e1134e1
scala> parRange.tasksupport.parallelismLevel
Int: 4
Futures
Parallel collections offer a simple yet powerful framework for parallel operations. However, they are limited in one respect: the total amount of work must be known in advance. This limitation is prohibitive for some problems.
Imagine that we want to write, for instance, a web crawler. The crawler is given an initial list of pages to crawl. It reads each page that it fetches, looking for additional URLs and adds these URLs to the set of pages to crawl. We could not use parallel collections for this problem as we will build the list of URLs to crawl on the fly.
We can, instead, use futures. A future represents the result of a unit of work that is being executed in a non-blocking manner. For instance, let's create a function that simulates a long calculation:
scala> def calculation(x:Int):Int = { Thread.sleep(10000) ; 2*x }
If we run calculation in the shell, execution is blocked for ten seconds while the calculation gets completed. Instead, let's run it in a different thread using a future so that we can carry on working while the calculation runs. We start by importing the concurrent package:
scala> import scala.concurrent._
To use futures, we need an execution context that will manage the tasks submitted to the threads. We have already come across execution contexts when discussing parallel collections: execution contexts are an abstraction to control the concurrent execution of the programming tasks. Futures expect an implicit execution context to be defined. For now, let's just use the default execution context. Let's bring it in the namespace:
scala> import scala.concurrent.ExecutionContext.Implicits.global
We are now ready to define the future:
scala> val f = Future { calculation(10) }
Future[Int] = scala.concurrent.impl.Promise$DefaultPromise@156b88f5
Note that the shell doesn't block: it returns instantly. The calculation runs in a background thread. We can check whether the calculation has finished by checking the future's isCompleted attribute:
scala> f.isCompleted
Boolean = true
Let's see what the function returned:
scala> f.value
Option[scala.util.Try[Int]] = Some(Success(20))
The value attribute of a future has an Option[Try[T]] type. We have already seen how to use the Try type to handle exceptions gracefully in the context of parallel collections. It is used in much the same way here. A future's value is None until the future is completed, then it is set to Some(Success(value)) if the future is completed successfully or Some(Failure(error)) if an exception is thrown.
Repeatedly calling f.value until the future is completed works well in the shell but it does not generalize to the more complex programs. Instead, we want to tell the computer to do something once the future is complete. In the context of our web crawler, we might wrap the function that fetches a web page in a future. Once the web page has been fetched, the program should run a function to read the source code and extract a list of the links that it finds.
We can do this by setting the future's onComplete attribute:
scala> val f = Future { calculation(10) }
scala> f.onComplete(processWebPage)
scala> // wait 10 seconds
Success(20)
The function passed to onComplete is run when the future is finished. It takes a single argument of the Try[T] type, containing the result of the future.
Futures let us very easily wrap a functionality to make it execute asynchronously, abstracting away much of the complexity of multithreading. They provide a very versatile alternative to parallel collections when the total amount of work is not known in advance or when we want to access the intermediate results as soon as they appear.
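Futures also compose without blocking: map transforms the eventual result, and Future.sequence turns a collection of futures into a single future of a collection. A sketch (the pageSize helper is hypothetical, not from the article):

```scala
import scala.concurrent._
import scala.concurrent.ExecutionContext.Implicits.global

// Hypothetical helper: pretend this does some slow, page-related work
def pageSize(page: String): Future[Int] = Future { page.length }

// Transform the eventual result without blocking the current thread
val doubled: Future[Int] = pageSize("<html></html>").map(_ * 2)

// Gather several asynchronous results into one future that completes
// when they all do
val sizes: Future[Seq[Int]] =
  Future.sequence(Seq("<a>", "<b></b>").map(pageSize))
```

In the web crawler scenario, this is how fetched pages could feed further fetches without ever polling for completion.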
Failure is normal: how to build resilient applications
By wrapping the output of the code in a Try type, futures force the client code to consider the possibility that the code might fail. The client can isolate the effect of the failure to avoid crashing the whole application. He might, for instance, log the exception. In the case of the web crawler example, he might also re-add the offending website to be scraped again at a later date. In the case of a database failure, he might roll back the transaction.
By treating the failure as a first-class citizen, rather than through an exceptional control flow bolted on at the end, we can build applications that are much more resilient.
Summary
By providing very high-level abstractions, Scala makes the writing of the parallel code intuitive and straightforward. Parallel collections and futures form an invaluable part of a data scientist's toolbox, allowing them to parallelize their code with minimal effort. However, while these high-level abstractions obviate the need to deal directly with threads, an understanding of the internals of Scala's concurrency model is necessary to avoid the race conditions.
Resources for Article:
Further resources on this subject:
- Differences in style between Java and Scala code [Article]
- Integrating Scala, Groovy, and Flex Development with Apache Maven [Article]
- Data Modeling and Scalability in Google App [Article]
31 January 2011 23:25 [Source: ICIS news]
HOUSTON (ICIS)--
Petroquim increased homopolymer prices in early January to a minimum of $1,670/tonne (€1,236/tonne) and raised them again in the second half of January to $1,720/tonne to keep up with the rising cost of feedstock propylene.
For February, the price for high-volume PP customers goes up to $1,790/tonne for homopolymers. Export prices were already at $1,770 during January and at $1,790/tonne effective 1 February.
The implementation of the increases in
Transformers in the
The gradual increases in
($1 = €0.74)
In this series, we will develop a Twitter client using the jQuery Mobile and PhoneGap frameworks. This project will be deployed to Android and iOS environments as a native application.
Also available in this series:
- Build a Cross-Platform Twitter Client: Overview
- Build a Cross-Platform Twitter Client: Twitter API & Code Review
- Build a Cross-Platform Twitter Client: Completing the Code Review
- Build a Cross-Platform Twitter Client: Deployment
Organization Of Part IV Of This Series
In Part III, we continued inspecting the "Core Business Logic Functions", picking up where we left off in Part II, and finished the code review of the Tweets application by looking at "Event Handlers" and "Page Display Functions".
In this the final part of our tutorial, we will start with the section named "Files In Download Archive", where we will describe the contents of the archive file accompanying this article. In "Environment Specific Topics", we will explain how to import the application into Eclipse for the Android platform and into Xcode for the iOS platform. In that section, we will also give screen images of the application in an Android phone, an iPod touch, and an iPad 2. Finally, we will give concluding remarks in the aptly named "Conclusions" section.
Files In Download Archive
In this section, we will describe the contents of the archive file accompanying this article.
The Android folder consists of a single file, Tweets.zip. This is an Eclipse project that has all the required files to create the Tweets application on an Android 2.2 device. See also "Android Development Environment" below.

The iOS folder consists of three sub-folders: www, icons, and splash. Those folders store various files that need to be copied to the Xcode project for creating the Tweets application on iOS devices. For details, see the "iOS Development Environment" section below.
Please note that all the icons and splash images in the download archive have been created based on an icon set provided by. As stated on the web site, "You can use the set for all of your projects for free and without any restrictions. However, it is forbidden to sell them."
An example of a startup icon for the Tweets application:
Figure: A Start-up Icon
An example of a splash screen image for the Tweets application:
Figure: A Splash Image
Environment Specific Topics
In this section, we will discuss topics regarding the development environments for Android and iOS versions of the sample application.
For both Android and iOS environments, the latest version of jQuery Mobile can be downloaded from and related documentation is available via the Documentation link in.
As a prerequisite for both Android and iOS environments, PhoneGap must be installed in the development machine. PhoneGap documentation can be accessed from. From that page, follow the links to tutorials for Android and iOS specific installation instructions and other environment specific documentation. The most recent version of PhoneGap can be downloaded from the download link on.
Android Development Environment
For the Android platform, the sample application in this article has been developed based on the following configuration.
- Development machine: Windows Operating System
- Development tool: Android SDK Tools, revision 8, Eclipse 3.5, Java SDK version 1.5
- PhoneGap version: 1.1.0
- jQuery Mobile version: 1.0rc1
- Android version (tested): Android 2.2
Importing The Eclipse Project
Before importing the project into your Eclipse environment, make sure that the Eclipse ADT plugin points to the correct location of the Android SDK on your local system. To check this, in the Eclipse menu go to Window -> Preferences -> Android. The SDK Location window must be set to the location of the Android SDK. Once set up correctly, you should see something similar to the image below:

Figure: Eclipse Preferences

The project files are provided in an archive file named Tweets.zip. To import the project, in the Eclipse menu go to File -> Import and then in the file import wizard select General -> Existing Projects into Workspace (see below).
Figure: Project Import
On the next page of the wizard, choose Select archive file: and browse to where Tweets.zip is located in your file system. The Projects window will be automatically populated with the Tweets project already selected. This is shown below. Press the Finish button to complete the import.
Figure: Project File Selection
Eclipse will build the application automatically after import. Now, you should see the Tweets project in the project explorer, as shown below:
Figure: Project Explorer
This project has been built and tested for the Android 2.2 platform. To verify this, select the Tweets project in the project explorer and from the right-click menu choose Properties. In the left-hand listing of properties, select Android. The available build targets are displayed on the right, as shown below. You should see that Android 2.2 has been selected.
File Listing
A list of the files in the project is given below:
Figure: File Listing
The src folder stores the Java code. The only Java file in our application is named App.java and is under the com.jquerymobile.tweets package. This Java file is a typical example of an Android application built on the PhoneGap framework.
package com.jquerymobile.tweets;

import com.phonegap.DroidGap;
import android.os.Bundle;

public class App extends DroidGap {
    /** Called when the activity is first created. */
    @Override
    public void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        super.setIntegerProperty("splashscreen", R.drawable.app_splash);
        super.loadUrl("");
    }
}
Observe that the Java class extends DroidGap, which is provided by the PhoneGap framework. The loadUrl() method loads the index.html file that has the actual application code implemented in JavaScript.
Let us continue our review of the files in the Eclipse project.
- The gen folder contains various files automatically generated by the Eclipse ADT plugin.
- The assets folder stores index.html, the jQuery Mobile/jQuery JavaScript libraries, the jquery-mobile-960 grid library, and the PhoneGap JavaScript library. It also contains an images folder, which consists of wait.gif, the spinning wheel image, and various icon images that are needed by the jQuery Mobile style sheets.
- The lib folder stores the PhoneGap Java library.
- The res folder stores various resources needed by the application, specifically the icon images and the configuration files strings.xml, main.xml and plugins.xml.
- default.properties is a system-generated file that defines the API version for the Android application.
- The proguard.cfg file is automatically created by the development environment and is used by the ProGuard tool. Details can be found in the ProGuard documentation.
- AndroidManifest.xml is the application manifest file.
iOS Development Environment
For the iOS platform, the sample application in this article has been developed based on the following configuration.
- Development machine: MacOS Operating System
- Development tool: Xcode
- PhoneGap version: 1.1.0
- jQuery Mobile version: 1.0rc1
- iOS version (tested): 4.3
Developing The Application in Xcode
Let us review the steps to develop the Tweets application in Xcode.
From the Xcode File menu, select New Project, as shown below:
Figure: Launch New Xcode Project
Under User Templates, select PhoneGap. As mentioned previously, as a prerequisite, PhoneGap must be installed on your development machine; otherwise, it will not show up in the Xcode environment as a template! You should see something similar to the image below. Click Choose.
Figure: PhoneGap-based Application
Save the project. In the example below, we name the project TweetsTest.
Figure: Save the Xcode Project
Let us assume that TweetsTest is the root folder for your Xcode project. From the download archive accompanying this article, copy the css-js folder and the index.html file under TweetsTest/www, as shown below:

Figure: Copy Files Under www
Because we created the Xcode project based on a PhoneGap template, index.html already exists under www. As a result, you will be prompted whether the existing file should be replaced or not. Click Replace, as shown below:

Figure: Replace index.html
Similarly, replace the following 4 files under TweetsTest/Resources/splash with the ones in the download archive accompanying this article: Default-Landscape.png, Default-Portrait.png, Default.png, and Default@2x.png. Also, replace the following 3 files under TweetsTest/Resources/icons with the ones in the download archive: icon-72.png, icon.png, and icon@2x.png.
Open TweetsTest/PhoneGap.plist with the Property List Editor and add to the key named ExternalHosts an array consisting of a single element of type String. The value of the element corresponding to Item 0 should be *. This is shown below:

Figure: Edit PhoneGap.plist

Now, when you look at the source of PhoneGap.plist, e.g. via Dashcode, it should look like this:

Figure: PhoneGap.plist
Open TweetsTest/TweetsTest-Info.plist with the Property List Editor and change the bundle display name to Tweets, as shown below.
Figure: Change Bundle Display Name
Finally, edit TweetsTest/Classes/AppDelegate.m to replace the shouldStartLoadWithRequest method as follows:
/**
 * Intercept link loads before the web view handles them.
 */
- (BOOL)webView:(UIWebView *)theWebView shouldStartLoadWithRequest:(NSURLRequest *)request navigationType:(UIWebViewNavigationType)navigationType
{
    NSURL *url = [request URL];

    // Intercept the external http/https requests and forward them to Safari
    if ([[url scheme] isEqualToString:@"http"] || [[url scheme] isEqualToString:@"https"]) {
        [[UIApplication sharedApplication] openURL:url];
        return NO;
    } else {
        return [super webView:theWebView shouldStartLoadWithRequest:request navigationType:navigationType];
    }
}
We are making this change to ensure that, when the user presses on a link on the list of results associated with a user timeline or search query, a new Safari browser window opens to display the web page. Had we not made that change, the web page would be displayed in the Tweets application itself and it would be impossible to go back to the application. For details, see.
After The Installation
In this section, we will provide various screen images of the application after it has been installed on Android and iOS devices.
The following is the launch icon of the Tweets application on an Android phone.
Figure: Android Home Screen
The below image shows the Inputs and Results pages of the Tweets application on an Android phone.
Figure: Android Screens
The following is the launch icon of the Tweets application on an iPod touch device.
Figure: iPod Touch Home Screen
The below image shows the Inputs and Results pages of the Tweets application on an iPod touch device.
Figure: iPod Touch Screens
The following is the launch icon of the Tweets application on an iPad 2 device.
Figure: iPad 2 Home Screen
The following is the main screen of the Tweets application on an iPad 2 device.
Figure: iPad 2 Screen
Conclusions
In this article, we developed a Twitter client using the jQuery Mobile and PhoneGap frameworks and deployed it to the Android and iOS platforms as native applications. We illustrated how jQuery Mobile and PhoneGap can work together when creating cross-platform native applications. In particular:
- jQuery Mobile is used to develop the user interface functions.
- PhoneGap is used to create the native application and to access device specific APIs.
- There is a single code set for the application, which is written in HTML & JavaScript using jQuery, jQuery Mobile, and the PhoneGap libraries.
We also discussed several pros/cons of using jQuery Mobile and the PhoneGap frameworks when creating cross-platform native applications. Because there is a single code set for user interface and device functions, the "write once, deploy anywhere" approach saves development time. A user interface developer experienced with JavaScript and jQuery can quickly develop cross platform native applications because there is no need to learn device specific programming languages and APIs. On the other hand, the user interface will have a distinctive web application look and feel and will not fully leverage the user interface features available on the device. Similarly, the application code will be limited to the subset of the native device API as "bridged" by PhoneGap. It is the author's opinion that, for many real life applications, the advantages mentioned above will outweigh the disadvantages.
In this article, we additionally discussed the Twitter API and explained how to obtain a user's timeline, e.g. the most recent tweets posted by the user, and how to find tweets by any user where the tweet content matches a search query. We showed how to access those API methods using the jQuery
ajax() function. Finally, we demonstrated jquery-mobile-960, a grid implementation for jQuery Mobile, which is very useful to define layout in wide-screen devices, e.g. an iPad tablet.
http://code.tutsplus.com/tutorials/build-a-cross-platform-twitter-client-deployment--pre-23717
Discussion board where members can learn more about Integration, Extensions and API’s for Qlik Sense.
Hi all,
I have to admit, I'm not really very confident about how requirejs works - particularly as I am trying to create my own javascript modules to organize my script. Here's the specific issue.
I am building an extension. I would like to keep my controller in a separate js file from the main js file. I need the "qlik" object in the controller. So the main js file will look something like this:
import myController from './myController.js';
export default window.define( [
    'qlik'
],
function (qlik) {
    'use strict';
    return {
        definition: {},
        initialProperties: {},
        controller: myController
    };
});
This all works fine and imports successfully from myController, but I don't know how to get 'qlik' into myController so that I can use its functionality there. For example, if this is the body of myController.js, qlik isn't available.
export default ['$scope', '$element', function($scope, $element){
    $scope.hello = "hi";
    console.log("hello", qlik); // hello displays, but qlik is empty
}];
This doesn't work either:
export default ['$scope', '$element', 'qlik', function($scope, $element, qlik){
Nor does using the ES6 import; it can't find the module:
import {qlik} from 'qlik';
I am using the AxisGroup's qext extension boilerplate code (uses Webpack to transpile ES6)
Jon, see my response to your question on GitHub here.
Hello Jonathan,
is it possible to export a "creator"-function from controller.js? E.g:
controller.js:
export default function(qlikEngine){
return ['$scope', '$element', function($scope, $element) {
qlikEngine.theme.apply('abc');
}]
};
extension-template.js:
...
import createController from 'controller.js';
export default window.define(['qlik'], function(qlik) {
return {
initialProperties: initialProperties,
template: template,
definition: definition,
controller: createController(qlik),
paint: paint,
resize: resize
}
});
Cheers,
Mathias
Thanks, John,
That worked, and thanks for being responsive over on GitHub; sometimes you never hear back from folks over there.
John's answer is the most straightforward, but yours may come in handy at some point as well. Thank you for taking the time.
If you are still around here (I don't want to clutter up your Github issues section), I have another question.
I am trying to use the async / await functionality in ES6. I have a class with an async method and an await keyword on a function that returns a Promise. Seems to work fine in my up-to-date chrome; however, in Qlik Sense Desktop, which presumably is not using the latest Chrome, I get an error (unexpected identifier for async).
Do you know if there is a way to get some kind of polyfill into the webpack pipeline so that we can use this syntax, or is it better to just avoid it for now?
Jonathan
https://community.qlik.com/t5/Qlik-Sense-Integration-Extensions-APIs/Injecting-qlik-into-external-controller-for-extension/m-p/14403
It is a common practice for big players in the cloud market to allow their users to have more than one method to access their data. With Google, for example, you can have one single account and easy access to a bunch of free services like Gmail and Drive.
Google also provides public APIs so that developers can access data from other applications. The whole process happens through the usual OAuth flow plus an application registration provided by the player.
With Microsoft, there’s Microsoft Graph. It provides some REST endpoints that allow access to their services like Office 365 (which is in the cloud, unlike the older versions), Azure and Outlook.
Here’s a single example for you to get a taste. A common URL endpoint to retrieve personal information about a specific account is https://graph.microsoft.com/v1.0/me. Here’s what you’ll get if you try it out in a browser tab:
That’s when OAuth takes place. The URL, in turn, follows some pattern rules that you’ll see right away.
Microsoft Graph API Structure
The access to each resource must follow a pattern. Here’s the request construction: {HTTP method} https://graph.microsoft.com/{version}/{resource}?{query-parameters}
First, as usual, you define the HTTP method for the operation. Note that it’s not up to you to decide that; you must go to the docs and check which method is available for that operation. Plus, the endpoints follow the RESTful principles, so make sure to refer to it for better comprehension.
Then comes the version of the Graph API you’re targeting. As of the writing of this article, the only available versions are v1.0 and beta. As you may have noticed, for production purposes, only the v1.0 version should be used. The beta version brings some new stuff, which might include breaking changes, so be careful when using it.
The resource defines the entity (and its properties) you’re willing to access. These resources usually come alone like in the previous example (/me), so they’re called top-level entity resources. However, you may want additional inheriting data, like /me/messages, for example. You can refer to the list of available resources here.
Remember that each resource is secure information, so it requires permission to be accessed. You’ll see more about how to do it soon.
Finally, you have the parameters. Like the other REST API endpoints, it’s necessary to provide the filters for the endpoints that demand that, like when retrieving all of your email messages filtered by the sender, for example.
Microsoft Graph API makes use of the OData (Open Data Protocol) namespace, which defines a set of best practices for building and consuming RESTful APIs. You can read more detailed information about the whole API architecture here.
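To make the four parts concrete, here is a small sketch that assembles a request URL in Python. The $select and $top query options are standard OData parameters supported by Graph; the resource and values are chosen purely for illustration:

```python
# Assemble a Graph request URL from its parts: version, resource, parameters.
# "$select" and "$top" are standard OData query options; the values below
# are made up for illustration.
version = "v1.0"
resource = "me/messages"
parameters = "$select=subject,from&$top=5"

url = f"https://graph.microsoft.com/{version}/{resource}?{parameters}"
print(url)
```

Sending this URL with a GET request (and a valid bearer token) would be expected to return the five most recent messages, with only the subject and sender fields populated.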
Seeing it in Action
Microsoft Graph also provides a great tool to easily explore the endpoints, and it’s the Graph Explorer. Open it, check out at the top of the screen if you’re already logged in, otherwise log in to your Microsoft account. Figure 1 shows how it looks.
Figure 1. Microsoft Graph Explorer view.
In the top bar of this screen, you’ll see a few combo boxes and a text field to customize your search. Number 1 shows the option to select which HTTP method you want this search to be run. Number 2 states the API version (v1.0 or beta).
Number 3 represents the full URL of the necessary resource. In this example, the /me was filled by the first option available at Sample queries tab (number 6). It helps you to figure out the available options, pre-filling the required values and making the Explorer ready to run the request. The History tab lists the searches executed until now.
Number 5 brings four other options:
- The request body with the request data your search may need.
- The request headers.
- Modify permissions: once you’ve authorized the Explorer to have access to your account data, you can customize each permission individually.
- Access token: the access token needed to perform each request. Explorer automatically generates it. In case you need to search in another environment other than here, the token must be generated each time.
Number 7, finally, deals with the tabs of the API responses. Here you’ll have the response body preview (once it’s completed) and the response headers.
The Adaptive cards functionality is just a nice feature in which Microsoft tries to adapt the returned information into cards, like a business card.
The Code snippets tab is very interesting. It auto-generates the code for the current request in four different languages: CSharp (Figure 2), JavaScript, Java and Objective-C:
Figure 2. Code snippets for 4 languages.
Besides all that, you still need to sign in to Graph Explorer with your Microsoft account to fetch its data. For this, click the Sign in to Graph Explorer button in the left side panel. Then, you’ll be redirected to the access page to permit the Graph Explorer app (Figure 3). Click the Yes button.
Figure 3. Giving access to the Graph Explorer app.
Once you’re logged, run the /me request. Listing 1 shows the response content that’ll be shown in the Response preview tab.
Listing 1. Response content for /me request.
Sometimes, depending on the operation you’re performing, you’ll be prompted to consent to the specific permissions, in the Modify permissions tab.
Run another sample query. Go to the Sample queries tab and search for my mails from an address, then click the GET option shown in Figure 4:
Figure 4. Searching for a sample query.
When you try to run this query, you’ll get the following error message: Forbidden – 403 – 260ms. You need to consent to the permissions on the Modify permissions tab.
Opening the referred tab, you’ll see the Consent buttons in the Status column, like in Figure 5. If you’re in a small screen, pay attention that the buttons are “hidden”, so you have to scroll to see them horizontally.
Figure 5. Consenting to the permissions.
If the operation goes on successfully, you’ll see the status changing to Consented. Now, run the query. Don’t forget to change the email parameter to one of yours at the query text field.
The result should be similar to Figure 6.
Figure 6. Querying email messages by sender email.
Now to move to an example that creates data in the server, using the API to send an email. For this, go again to the Sample queries tab and search for send an email, then click the button. You’ll notice that Graph Explorer fills in the Request body area with a pre-generated JSON. It is pretty much what’s necessary to send a basic email. Obviously, you may change its contents to your preferences. Listing 2 shows what the JSON looks.
Listing 2. JSON example to send an email.
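A representative payload for this request, with field names following the documented shape of the Graph message resource and values invented for illustration, looks roughly like this (expressed as a Python dictionary for readability):

```python
import json

# Representative sendMail payload; the field names follow the Graph
# "message" resource, but the values here are invented for illustration.
payload = {
    "message": {
        "subject": "Meet for lunch?",
        "body": {"contentType": "Text", "content": "The new cafeteria is open."},
        "toRecipients": [
            {"emailAddress": {"address": "someone@example.com"}}
        ],
    },
    "saveToSentItems": "false",
}

print(json.dumps(payload, indent=2))
```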
Before submitting the query, remember that you have to authorize Graph Explorer with the right permissions. Go to the Modify permissions tab again and allow the required accesses.
Run it. If everything went right, you’ll see the success message with an HTTP 202 – Accepted code. It means that the request was accepted and something was created on the server: in this case, the sent email.
Go to your email inbox and check that the email was sent (Figure 7).
Figure 7. Email sent successfully.
Integrating .NET Core with Microsoft Graph
Here’s another example. This time, you’ll integrate a simple .NET Core application with Microsoft Graph to retrieve user’s data and send an email as well.
First, create a new app by running the following command:
This command creates a Console app. Then, add the required NuGet dependencies:
To enable the use of Graph API within .NET applications, you’ll need to set up an Azure AD application. For this, go to the Azure Admin Center and log in to your Microsoft account.
In the home page, click the All resources > Manage Azure Active Directory option and, finally, go to the App registrations option. Click the New registration button. In the next screen, give the app a name (simple-talk-graph-app, for example) and fill the options like shown in Figure 8.
Figure 8. Registering a new application.
The next screen you will see shows the app details, including the Application id, which is going to be used soon in the C# code. It’s also necessary that this app is treated as a public client, so you need to toggle it going to the Authentication > Advanced Settings > Default client type option. Toggle it and click Save.
Next, you need to initialize the .NET development secret store. Since it’s necessary to generate new OAuth tokens for every new request, the automatic way to do it is by Azure AD:
After it’s initialized, you can add the credentials related to the client id and the scopes (permissions):
Remember to replace the
appId with yours. Now, open the created project into Visual Studio and create two new folders: Auth (to deal with the auth flow) and Graph (to store the graph helpers).
Start with the authentication process. Create a new class into the Auth folder called DeviceCodeAuthProvider.cs. The name already suggests that this is the flow this example uses to authenticate users. Listing 3 shows the code.
Listing 3. DeviceCodeAuthProvider code.
The code is designed under the MSAL patterns. It injects the scopes and credentials you’ve previously set in the command line. There are two main methods: one to generate new access tokens and another one to authenticate each of the requests, feeding them with the proper bearer tokens.
Listing 4 shows the code to be placed in the GraphHelper.cs file (please, create it under the Graph folder).
Listing 4. GraphHelper code.
The code of these methods was extracted from the auto-generated ones shown in the Graph Explorer before. Once you have the auth provider, you can instantiate the graph client and call the respective Graph operation.
Finally, move on to the code of Program.cs file, which calls these methods. Listing 5 shows the content.
Listing 5. Program class code.
First, you need to load the app settings where the credentials and scopes were placed. After extracting the app id and scopes array, you may retrieve a valid access token (which is going to be printed to make sure it works), initialize the graph helper (that will, in turn, create the graph client from the auth provider) and, finally, call the /me and /sendMail operations.
When you run the app, a new console window opens and asks you to access the URL and enter a printed code to authenticate. Copy the code, access the URL and paste it. Click Next, then authenticate to your Microsoft account.
The final screen shows what type of information the app is trying to access and ask for your permission (Figure 9):
Figure 9. Giving access to the Graph app.
Click Yes and close the window. When you get back to the console, the username is printed, and the email was sent as shown in Figure 10.
Figure 10. Second email sent via SDK.
Note: Be aware that, due to occasional instabilities, sometimes this service can fail or time out. If you face an error, wait a minute and try again.
Conclusion
From here, the best place you can follow up for more accurate information, especially regarding the latest updates, is the official docs.
Microsoft Graph is continuously working to add more and more resources to the API. The possibility to manage the Outlook Calendar and Cloud communications, for example, are very recent due to that constant upgrade process. In the end, it is a powerful API if you want to embody your own projects with Microsoft product’s data. Best of luck!
Load comments
https://www.red-gate.com/simple-talk/development/dotnet-development/getting-started-with-microsoft-graph-api/
Forum:
Web Services
Call a Servlet from a JAX-WS Web service
Gaby Bag
Greenhorn
Posts: 1
posted 4 years ago
Hello all,
As part of a research project, we use a Web application implemented in JSF 1.2 to realize some computations for us. In order to be able to invoke methods from this application, which is deployed on an Apache Tomcat 6 server contained in the application itself, we implemented a new servlet inside it to call what is needed. This servlet is executed from the "outside" world through a Java program run from the terminal, and everything works perfectly. Here is the code of this program:
import java.io.*;
import java.net.*;
import java.util.Scanner;

public class ContactBemServlet {

    public static void main(String[] args) {
        Scanner input = new Scanner(System.in);
        System.out.println("Please give the CaseID to suspend :");
        String caseNbr = input.nextLine();
        String arg = "caseID=" + caseNbr;
        System.out.println(arg);
        try {
            URL url = new URL("");
            URLConnection conn = url.openConnection();
            conn.setDoOutput(true);
            BufferedWriter out = new BufferedWriter(new OutputStreamWriter(conn.getOutputStream()));
            out.write(arg);
            out.flush();
            out.close();
        } catch (MalformedURLException ex) {
            // a real program would need to handle this exception
        } catch (IOException ex) {
            // a real program would need to handle this exception
        }
    }
}
As our project will also be deployed on a Tomcat server, we must port this program into a Web service, and because the SOA is written in Java, we chose JAX-WS. I'm not a specialist in this kind of technology so, please, be compassionate with me if my stuff is a little bit stupid or naive. Here is the code of the implementation class:
package org.bem.ws.impl;

import org.bem.ws.ContactBem;
import java.io.BufferedReader;
import java.io.BufferedWriter;
import java.io.IOException;
import java.io.InputStreamReader;
import java.io.OutputStreamWriter;
import java.net.MalformedURLException;
import java.net.URL;
import java.net.URLConnection;
import javax.jws.WebService;

@WebService(endpointInterface = "org.bem.ws.ContactBem")
public class ContactBemImpl implements ContactBem {

    @Override
    public String CallBem(String caseID) {
        String response = "tete";
        String param = "caseID=" + caseID;
        try {
            URL url = new URL("");
            URLConnection conn = url.openConnection();
            conn.setDoOutput(true);
            BufferedWriter out = new BufferedWriter(new OutputStreamWriter(conn.getOutputStream()));
            out.write(param);
            out.flush();
            out.close();
            response = "toto";
        } catch (MalformedURLException ex) {
            response = "titi";
        } catch (IOException ex) {
            response = "tata";
        }
        return (response);
    }
}
As you can see, I added some stuff (the toto, titi and tata strings) to the code to see where the problem might be. I use soapUI to run tests and see if this Web service works.
Unfortunately, it does not work at all. There is no exception in the try block, as I received "toto" as the response, but nothing is done by the contacted servlet. To be honest, I don't see where the problem could be. I searched the Internet and forums but found nothing related to my problem.
Could someone give me the reason for this failure or explain why it cannot work? Am I missing the point? Is it forbidden to do that? Is there a way to do this particular kind of thing?
Thanks in advance for any answer or advice :-)
Cheers
Gaby
https://coderanch.com/t/555366/Web-Services/java/Call-Servlet-JAX-WS-Web
Good morning friends,
I am having a weird little issue with python that I am certain stems from me failing to understand wtf is going on.
I will give you the general disclaimer that I am a total programming beginner so I'll need some latitude :)
The following snippet is a function I am working on for a larger project that I can't seem to figure out.
The idea is to have python grab all the names of the files in the directory, open them in turn and print the results on screen.
import OS

path = "/home/blahblah/Desktop/projData/"
projData = os.listdir(path)

while (i <= 300):
    for file in projData[i]:
        openedfile = open("/home/blahblah/Desktop/projData/" + file)
        i + 1
        print openedfile
I figured this was pretty straight forward but I was getting I/O errors that the files that it pulled didn't exist.
I've isolated it down to this: when the contents of "file" are appended to the string, something funny happens that Python doesn't like.
For example, let's say the first file in the directory is named '1', if I run the script I get an I/O error that says '/home/blahblah/Desktop/projData/1' doesn't exist, but if I type the one by hand into '/home/blahblah/Desktop/projData/1' then the file opens just fine.
Obviously there's something I am not understanding! Any help\constructive criticism would be greatly appreciated.
Thanks for your time.
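For what it's worth, here is a sketch of what the loop appears to be trying to do. Note that in the posted code, projData[i] picks out a single filename, so the inner for loop iterates over its characters rather than over the directory listing, and i is never actually incremented. Looping over the list directly and joining paths with os.path.join sidesteps both issues (Python 3 syntax; the real path is a placeholder):

```python
import os

def read_all(path):
    """Open every file directly under *path* and return their contents."""
    contents = []
    for name in sorted(os.listdir(path)):     # loop over the filenames directly
        full = os.path.join(path, name)       # safer than manual string concatenation
        if os.path.isfile(full):
            with open(full) as f:             # closes the file automatically
                contents.append(f.read())
    return contents
```

Using a with-statement also guarantees each file is closed, which matters once the directory holds hundreds of files.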
https://www.daniweb.com/programming/software-development/threads/347717/stumped-on-something-simple-help
Let me start this tutorial by taking some theoretical jargon out of your way. When we talk about image enhancement, this basically means that we want a new version of the image that is more suitable than the original one.
For instance, when you scan a document, the output image might have a lower quality than the original input image. We thus need a way to improve the quality of output images so they can be visually more expressive for the viewer, and this is where image enhancement comes into play. When we enhance an image, what we are doing is sharpening the image features such as its contrast and edges.
It is important to note that image enhancement does not increase the information content of the image, but rather increases the dynamic range of the chosen features, eventually increasing the image's quality. So here we actually don't know what the output image would look like, but we should be able to tell (subjectively) whether there were any improvements or not, like observing more details in the output image, for instance.
Image enhancement is usually used as a preprocessing step in the fundamental steps involved in digital image processing (i.e. segmentation, representation). There are many techniques for image enhancement, but I will be covering two techniques in this tutorial: image inverse and power law transformation. We'll have a look at how we can implement them in Python. So, let's get started!
As you might have guessed from the title of this section (which can also be referred to as image negation), image inverse aims to transform the dark intensities in the input image to bright intensities in the output image, and bright intensities in the input image to dark intensities in the output image. In other words, the dark areas become lighter, and the light areas become darker.
Say that I(i,j) refers to the intensity value of the pixel located at (i,j). To clarify a bit here, the intensity values in the grayscale image fall in the range [0,255], and (i,j) refers to the row and column values, respectively. When we apply the image inverse operator on a grayscale image, the output pixel O(i,j) value will be:
O(i,j) = 255 - I(i,j)
Nowadays, most of our images are color images. Those images contain three channels, red, green, and blue, referred to as RGB images. In this case, as opposed to the above formula, we need to subtract the intensity of each channel from 255. So the output image will have the following values at pixel (i,j):
O_R(i,j) = 255 - R(i,j)
O_G(i,j) = 255 - G(i,j)
O_B(i,j) = 255 - B(i,j)
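As a quick sanity check of the formulas above, here is a plain-Python sketch of the per-pixel inverse (no imaging library assumed; intensities are plain integers in [0, 255]):

```python
def invert_pixel(r, g, b):
    """Image-inverse operator for one RGB pixel, intensities in [0, 255]."""
    return 255 - r, 255 - g, 255 - b

# Pure black becomes pure white, and vice versa.
print(invert_pixel(180, 168, 178))
print(invert_pixel(0, 0, 0))
```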
After this introduction, let's see how we can implement the image inverse operator in Python. I would like to mention that for the sake of simplicity, I will run the operator on a grayscale image. But I will give you some thoughts about applying the operator on a color image, and I will leave the full program for you as an exercise.
The first thing you need to do for a color image is extract each pixel channel (i.e. RGB) intensity value. For this purpose, you can use the Python Imaging Library (PIL). Go ahead and download a sample baboon image from baboon.png. The size of the image is 500x500. Let's say you want to extract the red, green, and blue intensity values located at the pixel location (325, 432). This can be done as follows:
from PIL import Image
im = Image.open('baboon.png')
print im.getpixel((325,432))
Based on the documentation, what the method getpixel() does is:
Returns the pixel value at a given position.
After running the above script, you will notice that you only get the following result: 138! But where are the three channels' (RGB) intensity values? The issue seems to be with the mode of the image being read. Check the mode by running the following statement:
print im.mode
You will get the output P, meaning that the image was read in a palette mode. One thing you can do is convert the image to RGB mode before returning the intensity values of the different channels. To do that, you can use the convert() method, as follows:
rgb_im = im.convert('RGB')
In this case, you would get the following value returned: (180, 168, 178). This means that the intensity values for the red, green, and blue channels are 180, 168, and 178, respectively.
To put together everything we have described so far, the Python script which would return the RGB values of an image looks as follows:
from PIL import Image
im = Image.open('baboon.png')
rgb_im = im.convert('RGB')
print rgb_im.getpixel((325,432))
There is one point left before you move forward to the image inverse operator. The above example shows how to retrieve the RGB value of one pixel only, but when performing the inverse operator, you need to perform that on all the pixels.
To print out all the intensity values for the different channels of each pixel, you can do the following:
from PIL import Image
im = Image.open('baboon.png')
rgb_im = im.convert('RGB')
width, height = im.size
for w in range(width):
for h in range(height):
print rgb_im.getpixel((w,h))
At this point, I will leave it as an exercise for you to figure out how to apply the image inverse operator on all the color image channels (i.e. RGB) of each pixel.
Let's have a look at an example that applies the image inverse operator on a grayscale image. Go ahead and download boat.tiff, which will serve as our test image in this section. This is what it looks like:
I'm going to use the scipy library for this task. The Python script for applying the image inverse operator on the above image should look as follows:
import scipy.misc
from scipy import misc
from scipy.misc.pilutil import Image
im = Image.open('boat.tiff')
im_array = scipy.misc.fromimage(im)
im_inverse = 255 - im_array
im_result = scipy.misc.toimage(im_inverse)
misc.imsave('result.tiff',im_result)
The first thing we did after reading the image is to convert it to an ndarray in order to apply the image inverse operator on it. After applying the operator, we simply convert the ndarray back to an image and save that image as result.tiff. The figure below displays the result of applying image inverse to the above image (the original image is on the left, and the result of applying the image inverse operator is on the right):
Notice that some features of the image became clearer after applying the operator. Look, for instance, at the clouds and the lighthouse in the right image.
This operator, also called gamma correction, is another operator we can use to enhance an image. Let's see the operator's equation. At the pixel (i,j), the operator looks as follows:
p(i,j) = kI(i,j)^gamma
I(i,j) is the intensity value at the image location (i,j); and k and gamma are positive constants. I will not go into mathematical details here, but I believe that you can find thorough explanations of this topic in image processing books. However, it is important to note that in most cases, k=1, so we will mainly be changing the value of gamma. The above equation can thus be reduced to:
p(i,j) = I(i,j)^gamma
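Before moving to OpenCV, the reduced formula is easy to check with plain Python on a single normalized intensity in [0, 1]:

```python
def power_law(intensity, gamma):
    """Apply p = I ** gamma to a normalized intensity in [0, 1]."""
    return intensity ** gamma

# gamma < 1 brightens a mid-tone, gamma > 1 darkens it.
print(round(power_law(0.5, 0.6), 3))
print(round(power_law(0.5, 1.5), 3))
```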
I'm going to use the OpenCV and NumPy libraries here. You can kindly check my tutorial Introducing NumPy should you need to learn more about the library. Our test image will again be boat.tiff (go ahead and download it).
The Python script to perform the Power Law Transformation operator looks as follows:
import cv2
import numpy as np
im = cv2.imread('boat.tiff')
im = im/255.0
im_power_law_transformation = cv2.pow(im,0.6)
cv2.imshow('Original Image',im)
cv2.imshow('Power Law Transformation',im_power_law_transformation)
cv2.waitKey(0)
Notice that the gamma value we chose is 0.6. The figure below shows the original image and the result of applying the Power Law Transformation operator on that image (the left image shows the original image, and the right image shows the result after applying the power law transformation operator).
The result above was when gamma = 0.6. Let's see what happens when we increase gamma to 1.5, for instance:
Notice that as we increase the value of gamma, the image becomes darker, and vice versa.
One might be asking what the use of the power law transformation could be. In fact, the different devices used for image acquisition, printing, and display respond according to the power law transformation operator. This is due to the fact that the human brain uses gamma correction to process an image. For instance, gamma correction is considered important when we want an image to be displayed correctly (the best image contrast is displayed in all the images) on a computer monitor or television screens.
In this tutorial, you have learned how to enhance images using Python. You have seen how to highlight features using the image inverse operator, and how the power law transformation is considered a crucial operator for displaying images correctly on computer monitors and television screens.
Furthermore, don’t hesitate to see what we have available for sale and for study in the Envato Market, and please ask any questions and provide your valuable feedback using the feed below.
BINDRESVPORT(3) OpenBSD Programmer's Manual BINDRESVPORT(3)
NAME
bindresvport - bind a socket to a privileged IP port
SYNOPSIS
#include <sys/types.h>
#include <netinet/in.h>
int
bindresvport(int sd, struct sockaddr_in *sin);
DESCRIPTION
bindresvport() is used to bind a socket descriptor to a privileged IP
port, that is, a port number in the range 0-1023. sd is a socket
descriptor that was returned by a call to socket(2).
Only root can bind to a privileged port; this call will fail for any
other users.
If the value of sin->sin_port is non-zero, bindresvport() attempts to use
the specified port. If that fails, it chooses another privileged port
number automatically.
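The port-selection behavior described above can be sketched as a retry loop over the reserved range. The following is a hypothetical illustration of the idea only (the function name and starting port are invented for the example; this is not the actual libc source):

```c
#include <errno.h>
#include <netinet/in.h>
#include <string.h>
#include <sys/socket.h>
#include <sys/types.h>

/* Hypothetical sketch: when sin->sin_port is zero, walk the reserved
 * range until bind(2) succeeds, mimicking bindresvport()'s retry. */
int bindresvport_sketch(int sd, struct sockaddr_in *sin)
{
    int port;

    sin->sin_family = AF_INET;
    for (port = 600; port < IPPORT_RESERVED; port++) {
        sin->sin_port = htons(port);
        if (bind(sd, (struct sockaddr *)sin, sizeof(*sin)) == 0)
            return 0;                 /* got a privileged port */
        if (errno != EADDRINUSE)
            return -1;                /* e.g. EACCES, EBADF, ENOTSOCK */
    }
    errno = EADDRINUSE;               /* every reserved port was busy */
    return -1;
}
```

Note that unless the caller is root, the very first bind(2) attempt fails with EACCES, which is why the man page says only root can use this call successfully.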
RETURN VALUES
bindresvport() returns 0 if it is successful, otherwise -1 is returned
and errno set to reflect the cause of the error.
ERRORS
The bindresvport() function fails if:
[EBADF] sd is not a valid descriptor.
[ENOTSOCK] sd is not a socket.
[EPFNOSUPPORT] The protocol family has not been configured into the
system or no implementation for it exists.
SEE ALSO
bind(2), socket(2), rresvport(3)
OpenBSD 2.6 August 9, 1997 1
Ok, I'm kinda new to C++.
How can I increment a string by adding an integer value onto the end?
I am converting a decimal number to binary using the remainder method;
in other words, I divide x by 2 and take x mod 2, so I'm grabbing the remainder each time.
12 in binary is 1100
When I run the program, if I include "cout << modx" in the loop, I get a result of
0011
which is the reverse of what I need.
When I implemented this in VB6 I was able to have a string and increment it by adding the result of the Mod onto the end. After that I was able to reverse the string using another algorithm.
Here is my code using C++
People on other forums have been getting confused about what I am trying to achieve, so I have implemented this in VB6 as well (see below).

Code:
#include <iostream>
#include <string>
using namespace std;

int main (){
    int x;
    int modx;
    string Reslt;

    cout << "Enter a Decimal";
    cin >> x;

    while (x > 0){
        modx = x % 2;
        Reslt = Reslt + modx;    // <----- This is my problem area
        x = x / 2;
    }

    cout << Reslt;
    return 0;
}
VB6 snippet
Code:
Do Until x = 0
    intMod = x Mod 2
    Result = Result & intMod
    x = x / 2
    x = x - 0.49
    x = CLng(x)
Loop

and I could then reverse the string, and my result will then be k:

y = Len(Result)
Do Until y = 0
    Rev = Mid(Result, y, 1)
    k = k + Rev
    y = y - 1
Loop
Opened 14 months ago
Closed 6 months ago
#13604 closed bug (fixed)
ghci no longer loads dynamic .o files by default if they were built with -O
Description
In 8.2.1-rc1 loading a file compiled with -O2 into ghci results in ghci recompiling the file into interpreted byte code. In 8.0.2 it simply loads the compiled object file.
8.2.1
ghc -dynamic -O2 eh2.hs
[1 of 1] Compiling Main             ( eh2.hs, eh2.o )
Linking eh2 ...
bash-3.2$ ghci -ignore-dot-ghci
GHCi, version 8.2.0.20170404:  :? for help
Prelude> :load eh2
[1 of 1] Compiling Main             ( eh2.hs, interpreted ) [flags changed]
Ok, modules loaded: Main.
8.0.2
ghc --version
The Glorious Glasgow Haskell Compilation System, version 8.0.2
bash-3.2$ pwd
/Users/gcolpitts/haskell
bash-3.2$ ghc -dynamic -O2 eh2.hs
[1 of 1] Compiling Main             ( eh2.hs, eh2.o )
Linking eh2 ...
bash-3.2$ ghci -ignore-dot-ghci
GHCi, version 8.0.2:  :? for help
Prelude> :load eh2
Ok, modules loaded: Main (eh2.o).
Change History (52)
comment:1 Changed 14 months ago by
comment:2 Changed 14 months ago by
comment:3 Changed 14 months ago by
comment:4 Changed 14 months ago by
comment:5 Changed 14 months ago by
Ah, never mind. I tried using GHC HEAD with Phab:D3398 (the proposed fix for #13500) applied, but this bug was still present.
comment:6 Changed 14 months ago by
I've identified commit 818760d68c0e5e4479a4f64fc863303ff5f23a3a (Fix #10923 by fingerprinting optimization level) as the culprit. cc'ing ezyang, the author of that commit.
comment:7 Changed 14 months ago by
Makes sense: GHCi asks for no optimization, but the object files are optimized, so GHCi has no choice but to reinterpret the files.
It would be an easy matter to revert this patch, but if we also want to keep #10923 fixed, we will need to have a discussion about what the intended semantics here are.
comment:8 Changed 14 months ago by
Suppose I wanted ghci to load optimized files, specifically -O2, how would I do that? See #13002 as mentioned above. Assuming I could specify it then ghci would simply load the compiled file, right?
Currently there is no way to have a file compiled with -O2 loaded, or compiled and loaded, into ghci, right? That is a deficiency and a regression, right?
comment:9 Changed 14 months ago by
In principle, one ought to be able to pass -O2 to GHCi to make this happen. However, it turns out you also must pass -fobject-code, at which point the desired behavior is seen.
ezyang@sabre:~$ ghc-8.2 -O2 -c A.hs
ezyang@sabre:~$ ghc-8.2 -O2 -c A.hs -dynamic
ezyang@sabre:~$ ghc-8.2 --interactive -O2
when making flags consistent: warning: -O conflicts with --interactive; -O ignored.
GHCi, version 8.2.0.20170413:  :? for help
Prelude> :load A.hs
[1 of 1] Compiling A                ( A.hs, interpreted ) [flags changed]
Ok, modules loaded: A.
*A> Leaving GHCi.
ezyang@sabre:~$ ghc-8.2 --interactive -O2 -fobject-code
GHCi, version 8.2.0.20170413:  :? for help
Prelude> :load A.hs
Ok, modules loaded: A (A.o).
Prelude A>
comment:10 Changed 14 months ago by
Thanks, that answers my question, you can do it as you described. Also you can specify those options in your .ghci file and that works as would be expected.
So the question seems to be: if you have an object file compiled with flags different than the ghci flags (possibly the default ones), should ghci load it (as it did in 8.0.2), or should it compile the source into interpreted byte code and load that (as it does now)?
I think I prefer the old behavior, if you want interpreted byte code, remove the object file otherwise load will load your object file as is. However if we want to change this behavior to what it is currently I could live with that as long as the documentation is clear about the change and the current behavior.
comment:11 Changed 14 months ago by
So, the motivating principle for the change in behavior for ghc --make is that, if you run a ghc command, the end result should always be the same as if you had run GHC on a clean project. This means that, yes, if the optimization level changed, you better recompile your files.

I don't think this is necessarily what GHCi users are looking for. Without -fobject-code, optimization level doesn't matter at all and I can definitely see an argument for the semantics, "Please ignore my flags and use the on-disk compiled products as much as possible, so long as they accurately reflect the source code of my program." This interpretation is supported by the fact that the -O flag doesn't do anything right now. (But, it is probable that -fobject-code should shunt us back into the --make style semantics.)
comment:12 Changed 14 months ago by
Whether or not it is the "right" behaviour, it would be good if the user manual documented the behaviour and the underlying principles. And describes how to work around it if you want something different.
comment:13 Changed 14 months ago by
I don't think a proper fix can make it for 8.2, so I've put up a revert for review.
comment:14 Changed 14 months ago by
comment:15 Changed 13 months ago by
Frankly, I've been surprised by the converse of this bug: GHC doesn't recompile when I request different optimization options. I can see the argument for a "go fast" mode, which uses any existing build products to avoid recompilation if at all possible, but it's not clear to me that this should be the default behavior.
comment:16 Changed 13 months ago by
I think I will be happy with whatever people come up with here. I filed the bug originally as I was surprised by the change in behavior. As Simon wrote above: it would be good if the user manual documented the behaviour and the underlying principles. And describes how to work around it if you want something different.
As Ben wrote above ideally the final fix will also address the converse of this bug.
comment:17 Changed 11 months ago by
By 8.4.1, as Simon says in comment 12: it would be good if [at least] the user manual documented the behaviour and the underlying principles. And describes how to work around it if you want something different.
comment:18 Changed 11 months ago by
I just ran into this issue myself. The main thing I have to add is to note that this issue also affects -fhpc, so whatever solution should take that into account too. I took a look through FlagChecker.hs for other candidates, but didn't see anything else.
A simple fix would be for ghci to note that -fhpc or -O are being ignored, but still include them in its flag hash when it does the recompilation check. I didn't notice that suggestion in the discussion above.
comment:19 Changed 11 months ago by
By the way, if the fix is really put off until 8.4.1, it means I'll have to skip a whole major version! This issue is a show stopper for my program, and I'd expect for any program that uses the GHC API for a REPL (maybe not that many, but still, it's a compelling GHC feature for me). It also makes interactive testing really slow, which for me makes it a show stopper for my style of development as well.
If people like the "ghci allows but ignores -fhpc and -O" idea, I'd be happy to give the implementation a shot.
comment:20 Changed 11 months ago by
I like the "ghci allows but ignores -fhpc and -O" idea and I'd like to see this in 8.2.2. As my original summary wrote, this is a regression and I'd like to see it resolved sooner rather than later. However, see also: I'm not sure if "ghci allows but ignores -fhpc and -O" means that #13002 would not get resolved.
comment:21 Changed 11 months ago by
We can fix the checker for 8.2.2.
comment:22 Changed 9 months ago by
comment:23 Changed 9 months ago by
elaforge, George, can you describe precisely what you would propose that you help with this? I'm not sure teaching ghci to ignore -O and -fhpc is a great idea since there may be users that want to use these flags from within an interactive session.
comment:24 Changed 9 months ago by
I'll settle for what Simon suggested: "if the user manual documented the behaviour and the underlying principles. And describes how to work around it if you want something different." Whatever the solution, I'd like to be able to specify the optimization level of compilation, e.g. -O or -O2, in a .ghci file as well as by an argument to ghci, so that it will take effect when I compile inside of emacs; in other words, I'd like #13002 fixed too, if possible. The use case here is working with optimized compiled code in emacs/ghci, making changes and measuring the performance; thus you want to be able to compile those changes in emacs at a given optimization level.
I think elaforge may want something more specific but I'll let him speak for himself.
comment:25 Changed 9 months ago by
David is taking care of this one.
comment:26 Changed 9 months ago by
Sorry about the delay, I guess trac doesn't email me when tickets are updated.
The desired end result is that I have a bunch of .o files compiled with -O, and I need to load them into the GHC API. Similarly, I have a set of .o files compiled with -fhpc, and I need to load them into ghci. Any solution that reaches that result will probably work for me!
[ bgamari ]
I'm not sure teaching ghci to ignore -O and -fhpc is a great idea since there may be users that want to use these flags from within an interactive session.
I don't understand this, ghci already ignores -O and -fhpc, and as far as I know always has, whether or not people want to use them. So the request is to continue ignoring those flags as always, but to be able to load files compiled with them... which presumably means include them in the hash.
comment:27 Changed 8 months ago by
What we really want, I think, is for users to be able to specify (globally, and perhaps also for individual loaded files) whether they want extra-aggressive recompilation avoidance. That would include optimization level and HPC (is that covered under the prof bit?), and perhaps other profiling options.
comment:28 Changed 8 months ago by
I guess this probably means teasing apart fingerprints that are currently merged, recording these options separately.
comment:29 Changed 8 months ago by
comment:30 Changed 8 months ago by
Some summary
The essence of this ticket is explained quite well by Edward in comment:11. I don't think I agree with Edward about how the interpretation of -fobject-code should play into everything, but his essential points stand:

- We want to be able to run ghc --make and be sure that we get compilation products entirely equivalent to compiling from scratch.
- We want ghci (especially) to be able to load -dynamic-compiled modules even if those modules were compiled with slightly different options.

The question of what "slightly different" means is really up to the user.

The solution

Fortunately, it looks like this is probably not hard! Currently, we use fingerprintDynFlags to calculate a fingerprint of all the dflags that it believes can affect the compilation result and (in addFingerprints) record that fingerprint in the ModIface. When we are compiling with flags that don't match, we recompile the dependencies. What we want to do, I believe, is record not only the fingerprint but also information about some of the individual options.

Some thoughts:

- A change in whether cost center profiling is enabled (gopt Opt_SccProfilingOn dflags) absolutely mandates recompilation.
- I believe we want users to be able to (selectively) ignore changes to -O, -fhpc, -fignore-asserts, and automatic cost-center insertion (-fprof-...). I think we can do this by using one fingerprint for each of these options, or, even simpler, for each option and each module, either "This module and its dependencies use value X" or "At least one dependency uses a different value than this module".
- I believe we're currently somewhat too conservative about language flags in general. For example, I wouldn't expect enabling any of DataKinds, AllowAmbiguousTypes, ExplicitNamespaces, ConstraintKinds, MultiParamTypeClasses, FunctionalDependencies, FlexibleInstances, FlexibleContexts, UndecidableInstances, TupleSections, TypeSynonymInstances, StandaloneDeriving, DefaultSignatures, NullaryTypeClasses, EmptyCase, MultiWayIf, or ConstrainedClassMethods to be able to change the result of compiling any module that was successfully compiled before. For these options, we're really only interested if one of them is turned off that was previously turned on. For these, rather than a proper fingerprint, we want to record, for each option and each module, whether the module or at least one of its dependencies was compiled with that flag.
- We should consider fingerprinting the result of running the preprocessor(s) over the source. If the -D, -U, or -I options change, or an #included file changes, we only need to recompile if the results of preprocessing have actually changed.
comment:31 Changed 8 months ago by
I trust that this will be a good solution but I think it would be worthwhile to provide a draft of how this will be documented in the GHC user's guide so that end users can understand, at that level, what they will be getting with this fix.
comment:32 Changed 8 months ago by
Yes, having a description of what this will look like from the users' perspective would help us ascertain whether or not this will address the issue.
Once we have that perhaps elaforge could also comment.
comment:33 Changed 8 months ago by
comment:34 Changed 8 months ago by
I'm working on the separate optimization level tracking. I'll probably have that done by the end of the day. I'll likely need a bit of help to get the user interface sorted. I'm not sure how that should look. Maybe a separate -frecompile-for-opt-level-change? By the way, as far as I can tell, we don't track changes in individual optimization flags, like -ffull-laziness. I imagine we should do something about that.
comment:35 Changed 8 months ago by
One flag we should ideally treat specially (at some point) is -fignore-interface-pragmas. If the importing module uses that pragma, we can be much more aggressive about recompilation avoidance. In particular, if we don't already do this, we should really produce two interface hashes, one of which ignores interface pragmas. That way we won't recompile a module just because the pieces it's explicitly said it doesn't care about have changed.
comment:36 Changed 8 months ago by
comment:37 Changed 8 months ago by
It took me a bit to understand enough about how the fingerprinting process worked to do this right. I think the differential I just put up should fix the optimization issue. If others agree that's the right approach, it can easily be applied to -fhpc as well.
comment:38 follow-up: 39 Changed 8 months ago by
So after this fix if I load a file compiled with -O2 into ghci will ghci just load it without recompiling it?
comment:39 follow-up: 42 Changed 8 months ago by
So after this fix if I load a file compiled with -O2 into ghci will ghci just load it without recompiling it?
After this fix, you'll be able to load a compiled module (including one compiled with -O) without recompiling; changes to several other flags are ignored as well. The one that might be most surprising is -fignore-asserts. If we need to add additional flags in the future to refine the way we handle such, we can consider it.
Will this let you do what you need?
I intend to do something similar for HPC, but I haven't yet.
comment:40 Changed 8 months ago by
It feels like an odd approach. The situation implied by -fignore-optim-changes is that I'm not passing the same flags, but I want to load '.o's anyway. But from my point of view, I *am* passing the same flags, the problem is that ghci is filtering them out. So with that new flag, it becomes: pass the same flags, ghci filters out some making them not the same, pass another flag that says ignore when some flags are not the same. We'll need another flag that does the same thing for -fhpc (as you mention) and then possibly in the future some more to ignore any other symptoms of ghci filtering out flags. Doesn't it seem a bit convoluted? If I weren't following this thread, and ran into this problem, I'm not sure I'd be able to find all proper flags to get it to work.
Compare that to making ghci no longer change the flags you pass, even if it can't implement them: it just works, no flags needed. You could add one to suppress the warning about "ignoring some of your flags" but we have some general verbosity level stuff already.
That said, I am following this thread, so I will know about the flags, so they (once you put in one for -fhpc of course) will fix my problem. So aside from my worry that it seems overcomplicated, I'm in favor.
comment:41 Changed 8 months ago by
elaforge, so you want GHCi to load everything with the optimization level you specify? The downside I see is that if you have a bunch of object code (for dependencies) but you want to be able to set breakpoints and such in the specific module you're working on right now, you're stuck; you'll have to load everything to get that module in interpreted mode. Or do you want to load the modules you list on the command line in interpreted mode and load object code for the dependencies? Or something else? I'm not sure exactly what else you want.
comment:42 Changed 8 months ago by
So after this fix if I load a file compiled with -O2 into ghci will ghci just load it without recompiling it?
After this fix, you'll be able to load a compiled module (including one compiled with -O) without recompiling; changes to several other flags are ignored as well. The one that might be most surprising is -fignore-asserts. If we need to add additional flags in the future to refine the way we handle such, we can consider it.
Will this let you do what you need?
Works perfectly for me. Thanks!
I intend to do something similar for HPC, but I haven't yet.
comment:43 Changed 8 months ago by
I don't totally understand the point about the debugger. I thought ghci always loads binary if it can? I usually use the * syntax for :load, so the current module is always interpreted, so I can see private names if there is an export list. Hasn't it always been true that to set a breakpoint you have to force the module to load as bytecode, either with * or by touching it so it thinks the binary is out of date? I don't really use the debugger so I might be missing some detail.
For context, I'm loading modules in two situations: one is from command line ghci, where I'm loading test modules, which were compiled with -fhpc. I also link the modules into a test binary, which I do want -fhpc for so I can get coverage, but when testing from ghci I don't care about that stuff, I just want it to load the binary modules, not recompile everything as bytecode every time.
The other situation is that I use the GHC API to load modules into a running application. Those modules are compiled with -O, and I use GHC.setSessionDynFlags to get the same flags used to compile them when compiling the application itself so I can load them. But the GHCI API then goes and filters out -O, making the flags different... if I'm remembering the results of my research correctly. After that, I'll give it some expressions to evaluate, or maybe reload some modules as bytecode, just like you might do in ghci. Similar to the -fhpc case, I don't actually care that the interpreted code is not optimized, I just want to load the binary modules.
My suggestion was to turn off the thing that filters the flags. Of course even if it retains -O it doesn't mean the bytecode interpreter can magically do optimizations, so it would be a bit of a lie. But it seems like the lie is not so bad. It would be optimizing if it could, and it will act as if the flag is set for the purposes of loading modules, but by its nature bytecode is not optimized, so it just doesn't apply when it compiles bytecode.
comment:44 Changed 8 months ago by
elaforge, I think I understand kind of what you're asking for now, but there are some tricky questions about the UI. The biggest question is probably how to make interaction with -fobject-code sensible. In particular, we don't currently produce object code (ever) when -fobject-code is off. So if I type ghci -O2 A, where A depends on B, what should happen if B was not compiled -O2? Should we generate object code for B anyway to obey -O2? That seems a bit surprising. Should we load it interpreted? That seems inconsistent. Or perhaps we should change the interpretation of -fobject-code in a slightly different direction. What if we make :load *A guarantee that it loads A interpreted whether or not A has been compiled already and whether or not GHCi was run with -fobject-code?
comment:45 Changed 8 months ago by
elaforge, thoughts? I would really like to wrap this up soon so we can get 8.2.2 out. If it's not done by Monday I'm afraid we'll need to punt this to 8.4.
comment:46 Changed 8 months ago by
For the ghci -O2 A B example, I think it should load B as bytecode if the flags don't match. It doesn't seem inconsistent to me; here are the rules:

With -fobject-code, always load binary, which means recompile (as binary) if the flags don't match.

With -fbyte-code, load binary if there already is one, and the flags match, otherwise load as bytecode. Flags that don't apply to bytecode (namely -O and -fhpc) are ignored, but do affect whether or not the flags match when loading binary.
Can you expand on how it seems inconsistent? I'm guessing that you're thinking that -O means "binary and bytecode are optimized" while I'm happy for it to mean "binary is optimized" with no implication for bytecode. I admit the case might be weaker for -fhpc, in that people might not expect that -fhpc means binary only. But I guess that's just an aspect of bytecode that it doesn't support those things, and if there's a warning that says "we're ignoring these for bytecode", as there already currently is, then it seems fine to me.
I think the only change would be to have DynFlags.makeFlagsConsistent emit the warnings, but not mutate the dflags. Of course it might then trigger assertion failures down the line, but presumably they would be easy to fix.
I just did an experiment with -prof, because presumably it's also not supported by bytedcode, but unlike -O it doesn't warn for ghci. But it looks like while it's happy to load binary modules compiled with -prof, even if you don't pass it to ghci, it will then crash trying to run things:
ghc: panic! (the 'impossible' happened)
  (GHC version 8.0.2 for x86_64-apple-darwin):
        Loading temp shared object failed:
dlopen(/var/folders/9p/tb878hlx67sdym1sndy4sxf40000gn/T/ghc93652_0/libghc_1.dylib, 5): Symbol not found: _CCS_DONT_CARE
  Referenced from: /var/folders/9p/tb878hlx67sdym1sndy4sxf40000gn/T/ghc93652_0/libghc_1.dylib
  Expected in: flat namespace
 in /var/folders/9p/tb878hlx67sdym1sndy4sxf40000gn/T/ghc93652_0/libghc_1.dylib

Please report this as a GHC bug:
Maybe I should file this as a separate bug.
comment:47 Changed 8 months ago by
Maybe I should file this as a separate bug.
elaforge, yes, please do.
comment:48 Changed 8 months ago by
elaforge, I have an idea that feels like it provides a reasonably consistent UI, but I'd like to see what you think.
- Optimization flags (including -O0) imply -fobject-code. This ensures that GHC respects optimization flags regardless of --interactive.
- Even when -fobject-code is on, :load *M will load M as bytecode. This provides the "escape hatch" from -fobject-code that you need to use debugging features, etc.
- We add -fignore-optim-changes and -fignore-hpc-changes (Phab:D4123), enabling users to put together object code and bytecode with diverse optimization levels and HPC info while still updating automatically based on source changes and whether profiling is enabled.
comment:49 Changed 8 months ago by
Alright, I'm afraid we are going to have to punt on this for 8.2.2. Sorry elaforge!
comment:50 Changed 8 months ago by
bgamari: No problem, I understand about release schedules. I'm sorry to drag it out a bit, but on the other hand it's good to be careful about flag design since it's one of those APIs that is hard to fix later.
I'll copy paste this in the mailing list thread, just so there's a record in both places.
I still don't feel like 1 is necessary, I'd rather flags cause other flags to be ignored with a warning, rather than turn on other flags. But that's just a vague preference, with no strong evidence for it. Maybe it could emit a warning if you didn't put -fobject-code in explicitly, e.g. "-O implies -fobject-code, adding that flag." So as long as we accept 1, then 2 and 3 follow naturally. Given that, I support this UI.
Thanks for looking into it!
comment:51 Changed 6 months ago by
In 708ed9c/ghc:
This was on a Mac with 8.2.1-rc1 compiled from source. Setting OS and architecture to Unknown/Multiple as I don't see any reason why this would be Mac specific. See also #13002
Here's the homework problem:
Write a program that displays the status of an order. The program should have a function that asks for the following data:
1. The number of spools ordered
2. The number of spools in stock
3. If there are special shipping and handling charges
(Shipping and handling is normally $10 per spool.) If there are special charges, it should ask for the special charge per spool.
The gathered data should be passed as arguments to another function that displays:
1. The number of spools ready to ship from current stock.
2. The number of spools on backorder (if the number ordered is greater than what is in stock)
3. Subtotal of the portion ready to ship (the number of spools ready to ship times $100)
4. Total shipping and handling charges on the portion ready to ship
5. Total of the order ready to ship
The shipping and handling parameter in the second function should have the default arguments 10.00.
Input validation:
1. Do not accept numbers less than 1 for spools ordered
2. Do not accept number less than 0 for spools in stock or shipping and handling charges.
===============================
I get an error "Void getInfo(void)" : overloaded function differs only by return type from "double getInfo(void)"
I have several return commands but I do not know how to write the code in sections for the first function. I think that may be the issue. Here is what I got, if anyone can lend a hand. Remember, I am a BEGINNER, so please explain any changes you make so I understand their purpose, etc.
Note: I have not learned arrays yet so please omit. Thanks!
#include <iostream>
#include <iomanip>
using namespace std;

void getInfo(double &, double &, double = 10.0);
void display(double backOrder, double spoolsOrdered, double subTotal, double shipping, double total);

int main()
{
    double getInfo(), display();

    void getInfo()
    {
        char s&h;
        double spools, stock, charges;

        cout << "Enter the amount of spools ordered: ";
        cin >> spools;
        while (spools < 1)
        {
            cout << "Enter a number greater than 0: ";
            cin >> spools;
        }
        return spools;

        cout << "Enter the amount of spools in stock: ";
        cin >> stock;
        while (stock < 0)
        {
            cout << "Enter a number greater than 0: ";
            cin >> stock;
        }
        return stock;

        cout << "Are there any special S&H Charges?: ";
        cin >> s&h;
        if (s&h == 'Y' || s&h == 'y')
        {
            cout << "Please enter in the amount of the special charges: ";
            cin >> charges;
            while (charges < 0)
            {
                cout << "Enter number greater than 0: ";
                cin >> charges;
            }
            return charges;
        }

    void display(double spools, double stock, double charges)
    {
        double backOrder, spoolsOrdered, subTotal, shipping, total;
        backOrder = spools - stock;
        cout << "The amount of spools on back order is: " << backOrder << ".\n";
        spoolsOrdered = spools - backOrder;
        cout << "Spools ready to ship now: " << spoolsOrdered << ".\n";
        subTotal = spoolsOrdered * 100;
        cout << "Your subtotal before Shipping & Handling fees are $" << subTotal << endl;
        shipping = subTotal + charges;
        cout << "Your cost of shipping the spools of copper is $" << shipping << endl;
        total = shipping + subTotal;
        cout << "Your total cost of order is: $" << total << ".\n";
    }
}
Short History Of St. Paul's Epistle to The Corinthians
The Second Epistle to the Corinthians, also known as Second Corinthians, is the 8th book of the New Testament. The book, originally written in Greek, is a letter from Paul of Tarsus to the Christians of Corinth, Greece. The Epistle was purportedly written in the same year as the first, just two months after it. Given this dating, it would have been written a short time before the Apostle's three-month stay in Achaia (Acts xx. 3).
There is general acceptance of the reference in the book itself identifying the place of writing as Macedonia (chaps, vii.5, viii.1, ix. 2), which he reached after traveling through Troas (Chap, ii. 12) where he waited for a short time for Titus to return (Chap. Ii. 13).
The Epistle was instigated by information regarding the reception of the first Epistle. These reports were mainly favorable, with the majority of the church returning to their spiritual allegiance to their founder. However, pockets of resistance still remained, and in fact some had become even more entrenched in their opposition to Paul, strenuously denying his claim to apostleship (chap, x. 1-10).
The contents of this Epistle are thus very varied, but may be roughly divided into three parts: 1st, The apostle's account of the character of his spiritual labors (chap, i.—vii.); 2nd, Directions about the collections (chaps, viii., ix.); 3rd, Defense of his own apostolical character (chaps, x.-xiii. 10).
Content of Second Corinthians
From: The Devotional and Practical Pictorial Family Bible, Copyright, by J. R. Jones, 1879.
|
https://hubpages.com/religion-philosophy/Epistle-to-The-Corinthians
|
CC-MAIN-2018-30
|
refinedweb
| 275
| 61.87
|
tensorflow::ops::RestoreV2
#include <io_ops.h>
Restores tensors from a V2 checkpoint.
Summary
For backward compatibility with the V1 format, this Op currently allows restoring from a V1 checkpoint as well:
- This Op first attempts to find the V2 index file pointed to by "prefix", and if found proceed to read it as a V2 checkpoint;
- Otherwise the V1 read path is invoked. Relying on this behavior is not recommended, as the ability to fall back to read V1 might be deprecated and eventually removed.
By default, restores the named tensors in full. If the caller wishes to restore specific slices of stored tensors, "shape_and_slices" should be non-empty strings and correspondingly well-formed.
Callers must ensure all the named tensors are indeed stored in the checkpoint.
Args:
-
|
https://www.tensorflow.org/versions/r2.5/api_docs/cc/class/tensorflow/ops/restore-v2?hl=zh-tw
|
CC-MAIN-2022-21
|
refinedweb
| 128
| 61.77
|
I am new to neural networks and, for sure, PyTorch. I am working on a simple feed-forward NN to predict groundwater level from precipitation and temperature daily data.
I’m facing some problems and seeking help:
First problem: data loader
So I should be using a data loader to feed the input data in batches (for example, I have 300 temperature values and I want a batch size of 4). My understanding is that the dataloader will take the first four data points, feed them forward and then move to the next four. My question is: is there a way for the dataloader to take the first four and then move only one reading ahead, reusing three of the temperature values from the previous batch (e.g. first batch = temperatures 1, 2, 3 and 4; second batch = temperatures 2, 3, 4 and 5; and so on until it gets to the last reading)?
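The stock DataLoader advances in disjoint steps, but the overlapping, advance-by-one batches described above can be built by slicing the series yourself; with PyTorch, the same slicing would live inside a custom Dataset's `__getitem__`. A minimal, framework-free sketch (the readings are made up):

```python
temps = [20.1, 20.4, 20.9, 21.3, 21.0, 20.7]  # illustrative daily readings
window = 4

# One window per starting index: readings 1-4, then 2-5, then 3-6, ...
batches = [temps[i:i + window] for i in range(len(temps) - window + 1)]
```

With this layout, iterating over `batches` (or wrapping them in a Dataset and using `batch_size=1, shuffle=False`) yields exactly the stride-one sequence you describe.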
Second problem: data loader and output (target) data
- Will there be a need to use a dataloader for the output (target) data if there is no batch size, as I want it to take only one reading, just one output?
- If I have many output nodes with a different batch size from the input data, should I construct a separate dataloader for it, or is it possible to combine the input and output within the same dataloader?
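On combining input and output: one dataset object can return the input window and its single target together, so no separate loader is needed for the target data. Below is a plain-Python sketch of the `__len__`/`__getitem__` pair that a `torch.utils.data.Dataset` subclass would implement (the class and variable names are invented):

```python
class WindowedSeries:
    """Pairs each input window with one target value (illustrative sketch)."""

    def __init__(self, inputs, targets, window=4):
        self.inputs, self.targets, self.window = inputs, targets, window

    def __len__(self):
        return len(self.inputs) - self.window + 1

    def __getitem__(self, i):
        x = self.inputs[i:i + self.window]      # e.g. 4 temperature readings
        y = self.targets[i + self.window - 1]   # the matching groundwater level
        return x, y

ds = WindowedSeries(list(range(10)), [v * 10 for v in range(10)])
x0, y0 = ds[0]
```

A single DataLoader over such a dataset then yields `(inputs, target)` pairs batch by batch.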
Third problem: forward function
In defining the forward function within the class (nn.Module), I am struggling with the input data: if I am using a data loader and batches, should I use the dataloader as the input (if so, how?), or should I use the entire data frame?
This is my code, and I'm asking about xin (the x input):
def forward(self, xin):
    xinhi = self.fc1(xin)
    xhi = self.Sigmoid(xinhi)
    xhiout = self.fc2(xhi)
    xout = self.Sigmoid(xhiout)  # note: was "hiout", an undefined name
    return xout
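For context on what `xin` should be: `forward` takes neither the DataLoader nor the whole data frame; PyTorch calls `forward` with one mini-batch at a time (whatever the loader yields) and returns outputs for that batch. A framework-free sketch of that batch-in, batch-out contract, with invented weights:

```python
import math

def sigmoid(v):
    return 1 / (1 + math.exp(-v))

def forward(batch, w_hidden, w_out):
    # batch: a list of input windows; each window is a list of features.
    outs = []
    for window in batch:
        hidden = [sigmoid(sum(x * w for x, w in zip(window, ws)))
                  for ws in w_hidden]
        outs.append(sigmoid(sum(h * w for h, w in zip(hidden, w_out))))
    return outs  # one prediction per sample in the batch

batch = [[0.1, 0.2, 0.3, 0.4], [0.2, 0.3, 0.4, 0.5]]   # made-up mini-batch
w_hidden = [[0.5, -0.5, 0.5, -0.5], [0.1, 0.1, 0.1, 0.1]]
w_out = [0.3, -0.3]
preds = forward(batch, w_hidden, w_out)
```

In your module, `model(xb)` inside a `for xb, yb in loader:` loop plays the role of `forward(batch, ...)` here.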
|
https://discuss.pytorch.org/t/forward-function-and-data-loader/26854
|
CC-MAIN-2022-21
|
refinedweb
| 304
| 53.14
|
Mozilla::DOM::Node
Mozilla::DOM::Node is a wrapper around an instance of Mozilla's nsIDOMNode interface. This class inherits from Supports.
The nsIDOMNode interface is the primary datatype for the entire Document Object Model. It represents a single node in the document tree. For more information on this interface please see the W3C DOM Level 2 Core specification.
Pass this to QueryInterface.
A Mozilla::DOM::NamedNodeMap containing the attributes of this node (if it is an Element) or null otherwise.
In list context, returns a list of Mozilla::DOM::Attr, instead. (I considered returning a hash ($attr->GetName => $attr->GetValue), but then you couldn't set the attributes.)
Returns whether this node (if it is an element) has any attributes.
A Mozilla::DOM::NodeList that contains all children of this node. If there are no children, this is a NodeList containing no nodes.
In list context, this returns a list of Mozilla::DOM::Node, instead.
This is a convenience method to allow easy determination of whether a node has any children.
The first child of this node. If there is no such node, this returns null.
The last child of this node. If there is no such node, this returns null.
The node immediately preceding this node. If there is no such node, this returns null.
The node immediately following this node. If there is no such node, this returns null.
The name of this node, depending on its type:
The name of the attribute
#cdata-section
#comment
#document
#document-fragment
The document type name
The tag name
The entity name
The name of the entity referenced
The name of the notation
The target
#text
Matches one of the following constants, which you can export with
use Mozilla::DOM::Node qw(:types), or export them individually.
The node is a Mozilla::DOM::Attr.
The node is a Mozilla::DOM::CDATASection.
The node is a Mozilla::DOM::Comment.
The node is a Mozilla::DOM::Document.
The node is a Mozilla::DOM::DocumentType.
The node is a Mozilla::DOM::DocumentFragment.
The node is a Mozilla::DOM::Element.
The node is a Mozilla::DOM::EntityReference.
The node is a Mozilla::DOM::Entity.
The node is a Mozilla::DOM::Notation.
The node is a Mozilla::DOM::ProcessingInstruction.
The node is a Mozilla::DOM::Text.
The value of this node, depending on its type:
The value of the attribute
The content of the CDATA section
The content of the comment
[null]
[null]
[null]
[null]
[null]
[null]
[null]
The entire content excluding the target
The content of the text node
- $value (string)
The Mozilla::DOM:.
Returns the local part of the qualified name of this node. For nodes of any type other than ELEMENT_NODE and ATTRIBUTE_NODE and nodes created with a DOM Level 1 method, such as createElement from the Document interface, this is always null..
The namespace prefix of this node, or null if it is unspecified.
For nodes of any type other than ELEMENT_NODE and ATTRIBUTE_NODE and nodes created with a DOM Level 1 method, such as createElement from the Document interface, this is always null.
Note that setting this attribute, when permitted, changes the nodeName attribute, which holds the qualified name, as well as the tagName and name attributes of the Element and Attr interfaces, when applicable.
Note also that changing the prefix of an attribute that is known to have a default value, does not make a new attribute with the default value and the original prefix appear, since the namespaceURI and localName do not change.
- $aPrefix (string)
Tests whether the DOM implementation implements a specific feature and that feature is supported by this node.
- $feature (string)
The name of the feature to test. This is the same name which can be passed to the method hasFeature on DOMImplementation.
- $version (string)
This is the version number of the feature to test. In Level 2, version 1, this is the string "2.0". If the version is not specified, supporting any version of the feature will cause the method to return true.
two string args
Adds the node newChildNode to the end of the list of children of this node. If the newChild is already in the tree, it is first removed.
- $newChild (Mozilla::DOM::Node)
The node to add. If it is a DocumentFragment object, the entire contents of the document fragment are moved into the child list of this node.
Returns a duplicate of this node. (See DOM 1 spec for details.)
- $deep (boolean)
If true, recursively clone the subtree under the specified node; if false, clone only the node itself (and its attributes, if it is an Element).
DOM 2 spec:.
- $newChild (Mozilla::DOM::Node)
The node to insert.
- $refChild (Mozilla::DOM::Node)
The reference node, i.e., the node before which the new node must be inserted.
Removes the child node indicated by oldChild from the list of children, and returns it.
- $oldChild (Mozilla::DOM::Node)
Replaces the child node oldChild with newChild in the list of children, and returns the oldChild node.
If newChild is a DocumentFragment object, oldChild is replaced by all of the DocumentFragment children, which are inserted in the same order. If the newChild is already in the tree, it is first removed.
- $newChild (Mozilla::DOM::Node)
The new node to put in the child list.
- $oldChild (Mozilla::DOM::Node)
The node being replaced in the list..
See DOM 2 spec for details.
This software is licensed under the LGPL. See Mozilla::DOM for a full notice.
|
http://search.cpan.org/dist/Mozilla-DOM/lib/Mozilla/DOM/Node.pod
|
CC-MAIN-2017-09
|
refinedweb
| 920
| 67.15
|
An extensible wiki app for Django with a Git backend
Project description
**Waliki** is an extensible wiki app for Django with a Git backend.
.. attention:: It's in an early development stage. I'll appreciate your feedback and help.
:twitter: `@Waliki_ <>`_ // `@tin_nqn_ <>`_
:license: `BSD <>`_.
Getting started
---------------
.. [1] *wiki* itself is a hawaiian word
.. _moin2git:
.. _`PyPA Code of Conduct`:
Changelog
---------
0.8.1 (2017-03-26)
++++++++++++++++
- Fixed compatibility with Django 1.10 (thanks to `Martí Bosch`_)
- Fixed `#125 <>`__
- Upgraded demo's setting to recent format
- Added missing migration
.. _Martí Bosch:
0.7 (2016-12-19)
++++++++++++++++
- Fix compatibility with Django 1.9.x and Markup 2.x (thanks to `Oleg Girko`_ for the contribution)
.. _Oleg Girko:
0.6 (2015-10-25)
+++++++++++++++++
- Slides view use the cache. Fix `#81 <>`__
- Implemented an RSS feed listing lastest changes. It's part of `#32 <>`__
- Added a `configurable "sanitize" <>`_ function.
- Links to attachments doesn't relay on IDs by default (but it's backaward compatible). `#96 <>`_
- Added an optional "`breadcrumb <>`_ " hierarchical links for pages. `#110 <>`_
- Run git with output to pipe instead of virtual terminal. `#111 <>`_
0.5 (2015-04-12)
++++++++++++++++++
- Per page markup is now fully functional. It allows to
have a mixed rst & markdown wiki. Fixed `#2 <>`__
- Allow save a page without changes in a body.
Fixed `#85 <>`__
- Fixed `#84 <>`__, that marked deleted but no commited after a move
- Allow to choice markup from new page dialog. `#82 <>`__
- Fix wrong encoding for raw of an old revision. `#75 <>`__
0.4.2 (2015-03-31)
++++++++++++++++++
- Fixed conflict with a broken dependecy
0.4.1 (2015-03-31)
++++++++++++++++++
- Marked the release as beta (instead of alpha)
- Improves on setup.py and the README
0.4 (2015-03-31)
++++++++++++++++
- Implemented views to add a new, move and delete pages
- Implemented real-time collaborative editing via together.js
(`#33 <>`__)
- Added pagination in *what changed* page
- Added a way to extend waliki's docutils with directives and transformation for
- A deep docs proofreading by `chuna <>`__
- Edit view redirect to detail if the page doesn't exist
(`#37 <>`__)
- waliki\_box fails with missing slug
`#40 <>`__
- can't view diffs on LMDE
`#60 <>`__
- fix typos in tutorial
`#76 <>`__
(`martenson <>`__)
- Fix build with Markups 0.6.
`#63 <>`__
(`loganchien <>`__)
- fixed roundoff error for whatchanged pagination
`#61 <>`__
(`aszepieniec <>`__)
- Enhance slides `#59 <>`__
(`loganchien <>`__)
- Fix UnicodeDecodeError in waliki.git.view.
`#58 <>`__
(`loganchien <>`__)
0.3.3 (2014-11-24)
++++++++++++++++++
- Tracking page redirections
- fix bugs related to attachments in `sync_waliki`
- The edition form uses crispy forms if it's installed
- many small improvements to help the integration/customization
0.3.2 (2014-11-17)
++++++++++++++++++
- Url pattern is configurable now. By default allow uppercase and underscores
- Added ``moin_migration_cleanup``, a tool to cleanup the result of a moin2git_ import
- Improve git parsers for *page history* and *what changed*
.. _moin2git:
0.3.1 (2014-11-11)
++++++++++++++++++
- Plugin *attachments*
- Implemented *per namespace* ACL rules
- Added the ``waliki_box`` templatetag: use waliki content in any app
- Added ``entry_point`` to extend templates from plugins
- Added a webhook to pull and sync change from a remote repository (Git)
- Fixed a bug in git that left the repo unclean
0.2 (2014-09-29)
++++++++++++++++
- Support concurrent edition
- Added a simple ACL system
- ``i18n`` support (and locales for ``es``)
- Editor based in Codemirror
- Migrated templates to Bootstrap 3
- Added the management command ``waliki_sync``
- Added a basic test suite and setup Travis CI.
- Added "What changed" page (from Git)
- Plugins can register links in the nabvar (``{% navbar_links %}``)
0.1.2 / 0.1.3 (2014-10-02)
++++++++++++++++++++++++++
* "Get as PDF" plugin
* rst2html5 fixes
0.1.1 (2014-10-02)
++++++++++++++++++
* Many Python 2/3 compatibility fixes
0.1.0 (2014-10-01)
++++++++++++++++++
* First release on PyPI.
Project details
Release history Release notifications
Download files
Download the file for your platform. If you're not sure which to choose, learn more about installing packages.
|
https://pypi.org/project/waliki/
|
CC-MAIN-2020-05
|
refinedweb
| 690
| 66.44
|
Background
Null itself comes in a variety of forms. In SQL, NULL is a special token used to indicate that a particular data value is missing for some reason (usually because it is unknown). This results in sentences in a database which cannot be interpreted as either true or false, leading to so-called three-valued logic (3VL): instead of the usual true and false of classical logic, there is a third option, unknown, which NULL represents. SQL is slightly unusual in defining NULL as a token that can appear in the place of any other value. Other languages typically confine null to a particular type: that of references. For instance, in C, NULL represents a pointer that points at an inaccessible part of memory. In Java, null is a non-valid reference. In Python, Smalltalk and some other OO languages, null (or None or Nil) is an actual ([singleton]) object that defines no methods (or defines methods that quietly ignore all messages).
Sources of null
The main source of nulls that arise in discussions within the Tcl community seems to be related to database transactions, since SQL (but not necessarily non-SQL RDBMSes) uses NULL to express missing data. DKF writes [1] on this subject:
-
- When dealing with the result of an SQL database query, you've got a result set (in effect a row of a view). The values in the row are only part of the information available (there's also the name of each column, the type of the columns, etc.) and so adding the ability to ask whether a column was NULL is no big deal. The information might not be exposed in some of the short-cut interfaces, but they're just syntactic sugar; people who really need to know about nullity should use the more detailed API. (I suppose you could even have a method on the result set to set the string representation of NULL...)
Why the concept of a NULL contradicts EIAS
- If everything is a string, everything has the same data type.
- If everything has the same data type, i.e. no distiction, then we speak of a typelessness.
- But the concept of null requires its own data type.
- So, null cannot be in Tcl. QED.
McVoy's null
Larry McVoy has suggested [2] that a null would be useful in situations such as:
- Writing data to file (e.g. preferences) and then reading them back.
- Sending data to another process or computer via a channel.
- Passing data to another thread.
Special null behaviours
Languages that do have nulls sometimes give them special treatment. Some examples of such behaviour, which are sometimes expected also of a "null in Tcl", are:
- null as a non-argument
- Nulls are skipped when forming a list of arguments. Consequences if done in Tcl: Doing anything with a null value becomes difficult, e.g. you can't use set to store it in a variable or list to put it in a list.
- null as missing argument
- Passing null for a command argument is the same as not passing an argument, so if the argument has a default then that should be used. Similar to the above, but subsequent arguments aren't shifted. Sometimes requested as a way of specifying a value for the second argument with default without specifying a value for the first argument with default.
- null as boolean
- Null should be a valid value for all boolean operations (&&, ||, etc.), and they should treat it according to (some) three-value logic.
Solutions
Even though Tcl doesn't have a null as such, there are plenty of mechanisms in Tcl for handling what in other languages might be conveniently implemented by use of null.
Out-of-domain data
Unicode null
Unicode defines some non-characters [3] for internal use in applications, precisely to express situations such as "data missing". The most easily remembered non-character is probably \uFFFF.
Return codes
Missing dictionary/array keys
Unset variables
Wrap raw value in a list
Tag with type
This is a variant on "wrap in list", which however extends to a more general device for handling data where "type" matters. NEM has written an extension (Maybe package) which does this with minimal storage overhead. He describes it thus:
-
- In the belief that actions speak louder than words, and code even more so, I hereby present the "maybe" package for Tcl that provides complete support for handling missing/unknown data in much the same way as a NULL pointer does in C, only nicer. The package comes with both Tcl and C implementations -- the C implementation efficiently represents such values as a single pointer, which is either NULL or points to a valid Tcl_Obj. The interface provided is simple:
[Nothing] -- creates a NULL pointer, string rep: "Nothing"
[Just $foo] -- creates a non-NULL pointer, string rep: {Just $foo}
if {[maybe exists $val var]} {
    puts "exists: $var"
} else {
    puts "doesn't exist"
}
-
- The [maybe exists] command both tests for whether a value is not Nothing and extracts the value into a variable in one operation. If this command returns true then the variable var is guaranteed to contain a valid value (without any Just wrapper around it).
[Nothing] -> [NewNull]
[Just $foo] -> [NewNullable $foo]
[maybe exists $val var] -> [NotNull $val var]
(or even just Null, Nullable & NotNull)
NEM: The names come from Haskell [4]. The exists subcommand is equivalent to dict exists, so I don't see it causing confusion (indeed, you can think of maybe as being like a 0/1-element dictionary).
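The Maybe interface described above is Tcl-specific, but the tagged-value idea ports to any language. A rough Python equivalent for illustration (all names here are invented, and wrapping in a tuple plays the role of the Just wrapper):

```python
Nothing = ()                       # empty tuple plays the role of Nothing

def Just(value):
    return (value,)                # one-element tuple wraps a real value

def maybe_exists(maybe):
    """Return (True, value) for Just(v), (False, None) for Nothing.

    Even "falsy" payloads like 0 or "" survive, because the wrapper
    tuple itself is non-empty whenever a value is present.
    """
    if maybe:
        return True, maybe[0]
    return False, None
```

Like the Tcl `[maybe exists]`, this both tests for presence and extracts the value in one operation.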
Discussion
AM [5]. Basically, {null}! is recognized by the parser as a null, which is not a string; it is distinct from all possible strings. "{null}!" is, of course, a seven-character-long string, and it's also a one-element list whose sole element is a null.
Note: TIP 185 was rejected in 2008.
I (AMG) have several strong comments regarding the TIP:
- I prefer to say "null" instead of "null string" because I feel that a null is not a string at all. It's the one thing that isn't a string! I guess we'll need to change our motto. :^)
- Likewise, I'd rather not tack the null management functionality onto the [string] command.
- I think I'd prefer a [null] command for generating nulls and testing for nullity. It's best not to use the == and != expr operators for this purpose; null isn't equal to anything, not even null.
- We can ditch the {null}! syntax in favor of using the [null] command to generate nulls, but then [null] cannot be implemented in pure script. This might be an important concern for safe interps.
- Automatic compatibility with "null-dumb" commands is a mistake; it's the responsibility of the script to perform this interfacing.
- When passed a null, the Tcl_GetType() and Tcl_GetTypeFromObj() functions should return TCL_ERROR or NULL (in the case of Tcl_GetString() and Tcl_GetStringFromObj()).
- Most commands should be "null-dumb". Only make a command handle nulls when it is clear how they should be interpreted.
- The non-object Tcl commands can probably represent nulls as null pointers ((void*)0 or NULL). If for some reason that can't work, reserve a special address for nulls by creating a global variable. Some magic words in Tcl already impact interpretation:
- In the switch statement, the word default impacts the value "default".
- In proc's arg list, the word args impacts the choice of argument names.
- In Snit and Itcl, the argument #auto or %AUTO% impacts the choice of instance name.
- And so on.
- If the external function returns a value or NULL, then have the corresponding Tcl command return a list of one element for non-NULL values or an empty list for a NULL value. In Tcl 8.5, {*} greatly simplifies using such commands.
- If the external function returns a "record" where some of the entries may be NULLs, then have the corresponding Tcl command return a dictionary which only has entries for the fields with non-NULL values. [7]
- Extend the meaning of a string to include a null string. I will call this TIP 185a.
- Extend the meaning of list and dicts to allow the representation of unknown elements. I will call this TIP 185b.
GAM There has been a lot of churn on this page lately, and it has prompted me, since my name has been thrown about, to clarify my perspective on the above comment. Although I do vaguely remember the conversation referred to above (it having been more than two years now), the primary reason that I did not wish to get into a discussion of nulls is that it has no real effect on TclRAL. TclRAL is not a database management system and was not designed to be a front end for one (although I am aware of at least one person using it in that manner -- good luck with that) nor is it a Tcl interface to some other data storage mechanism. TclRAL is a Tcl extension that brings formal relational values to the Tcl language. Those values are not different, conceptually, than dict or list values. The motivations behind TclRAL are all about relation-oriented (or table-oriented, if you wish) programming and the belief, guided by experience, that a single unified data theory is better than the collection of data structuring techniques that we currently typically employ. TclRAL uses Tcl_Obj structures to hold all of its attribute values and uses expr to evaluate all its expressions. If by some means there were a Tcl_Obj implementation of NULL and all the byzantine, nonsense of three-valued logic could be implemented in expr then TclRAL would just work because it builds strictly on the mechanisms already in Tcl.That being said, I still believe nulls are just wrong in so many ways that have been discussed and written about by those much more capable than I. I can't believe anyone would design a data schema using them. However, I am sympathetic to the legacy problem; there is just nothing I can do about it. And that has nothing to do with how practical a programmer I may or may not be.
Later I talked to Jean-Claude Wippler jcw, author of Vlerq + Ratcl = Easy Data Management [9]:
- A common interface for relational databases, so database APIs could be more uniform.
- Providing null handling -- a sort of "standard workaround."
- A more complete implementation of relational algebra than can be offered by most database engines, optimized for large databases and high query volumes.:
- Using a list encoding: llength $data == 0 for NULL, otherwise lindex $data 0 is the real data.
- Using a tagged list encoding: Nothing vs {Just $data} (a variant of the above).
- Using a dict or array encoding where missing data is represented by a missing key, so that dict exists can be used to check for null/missing data.
Indeed:
- Have a separate, parallel data stream. [llength $args], [info exists], and the "contrived" SQL queries are examples.
- Encode with some kind of quoting. The backslash character is an example. Note how the backslash must itself be quoted; null would not have the same problem since it's outside the domain.
- Signal special data with a value that cannot appear in the normal data stream. {null}! can be used when the normal data stream can be any string. When the data stream is numbers, any non-numbers will do.
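The third strategy — a signal value that cannot appear in the normal data stream — is the out-of-domain sentinel. A small Python illustration of the idea (in Tcl the sentinel would be a chosen non-character such as \uFFFF; the function name here is invented):

```python
MISSING = object()   # unique sentinel: identical only to itself

def describe(reading):
    # Identity check, so 0, "" and None remain ordinary, valid readings.
    if reading is MISSING:
        return "no reading"
    return "reading = " + repr(reading)
```

Because `MISSING` is outside the domain of all possible readings, no quoting or parallel data stream is needed to tell "absent" from "present but falsy".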
- null: A non-string, non-value object. As jhh says, it is used to indicate an unknown.
- NULL: (void*)0, the value assumed by a C pointer when it doesn't point to anything in particular.
- NUL: (char)0, an ASCII value used by C to terminate a string. [10]:
- a string value
- an empty string
- a null
set val1 [Just 12]
set val2 [Nothing]
proc maybeDouble x {
    switch [lindex $x 0] {
        Just { expr {[lindex $x 1] * 2} }
        Nothing { error "No value!" }
    }
}
maybeDouble $val1
maybeDouble $val2
But [11]. [12] … its own type. Instead it is because Tcl requires everything to have a string representation: all values are merely different types of strings (the unfortunate implementation of arrays notwithstanding). And null does not want to be a string.
DKF: Tcl has named and unnamed entities. Named entities are commands, variables, namespaces, interpreters, channels, etc. Unnamed entities are values (including the names of named entities). The fundamental datatype of values is that of a string (implemented as a Tcl_Obj because Tcl_Value was taken for something else that's now obsolete); all other value datatypes (numbers, lists, dicts, etc.) are effectively subtypes of string; the implementations might be a bit complex, but that's the principle. That's why, for example, the C API has Tcl_GetString which operates on all values.
Nulls represent values that are not "values". In a reference-based language, they're relatively sane. In a language like Tcl where values are literal absolutes, they're completely crazy. (They are easy to do in the space of named entities, either by making a variable unset or through using metadata.)
AM (23 June 2008) During a refreshing, but windy, bicycle ride to work I thought of two ways of dealing with nulls within the constraints of current Tcl. I can not pretend to have followed the discussion (I have not even read this page in full yet), but I do know that "nulls" can be used for many things in the world of data bases - such as: the value is simply not known, the value is not known yet, the value is of no relevance in this case, the value we got is completely unreliable (reconstructing from an article I read many years ago :)). All represented by a single value that is not even a value.
The ways I thought of are these:
- Represent a null by an undefined variable:
foreach var $varlist column $columnlist {
    if { [hasValue $column] } {
        set $var [columnValue $column]
    } else {
        unset $var
    }
}
- Represent a null by a read/write trace: the read trace throws an error whenever you try to use the value of the variable, whereas the write trace will delete the read trace, when the variable gets a perfectly ordinary value. Auxiliary procs: hasValue, setNull or the like.
|
http://wiki.tcl.tk/17441
|
CC-MAIN-2017-04
|
refinedweb
| 2,346
| 61.87
|
Python: distribution systems world
At first, the Python packaging world may seem confusing. But only at first sight.
There are a lot of different package formats, libraries and tools for distributing and managing packages in the Python world. There is also PyPI, the Python Package Index, a repository of software for Python.
The most often used formats are tarballs (tar.gz or zip files), eggs and system packages (like rpm, deb or others). Tarballs are just archives of source files. rpm or deb packages are binary files built with system tools like rpmbuild. Eggs can be thought of as zipped source code with some meta information (for instance, dependencies).
There are also various libraries: distutils, setuptools, distribute and zc.buildout.
It’s a built-in library. To distribute package developer need to write simple setup.py file which will be used to built your package in various formats. It’s definitely lacks some features but clean and easy to use.
Simple example of setup.py:
from distutils.core import setup setup(name='foo', version='1.0', py_modules=['foo'], )
For instance, building RPM:
python setup.py bdist_rpm
That is all.
setuptools
It is third-party software based on distutils. setuptools was created to make Python packaging more powerful than was possible with distutils (for instance, dealing with dependencies).
Basic example of setup.py:

from setuptools import setup, find_packages

setup(
    name = "HelloWorld",
    version = "0.1",
    packages = find_packages(),
)
Like distutils, setuptools supports different formats (bdist_rpm, bdist_wininst, etc.). For a long time, setuptools was widely used as one of the most popular packaging system. For instance, redis-py by Andy McCurdy uses setuptools and many others do.
It’s also a third-patty library and a fork of setuptools. It was born after setuptools was not maintained for a long time. The main goal of distribute is ‘to replace setuptools as the standard method for working with Python module distributions.’ It supports both Python 2 and Python 3. distribute is developing in two branches: 0.6.x (provides a setuptools-0.6cX compatible version and also fixes a lot of bugs which were not fixed in setuptools) and 0.7.x (will be refactored version with a lot of changes).
zc.buildout
The project was started by Jim Fulton in 2006 and is based on setuptools and easy_install. It is possibly more complicated than the other build systems, but it's also more ambitious. zc.buildout uses additional configuration files (e.g., bootstrap.py, buildout.cfg). It's Python-based and works with eggs. zc.buildout also introduces the concept of recipes: plugins which aim to add new functionality to software building, as stated in the official documentation. Recipes can be installed from PyPI. This allows zc.buildout to extend its possibilities, since recipes can be used for different things, like setting up Apache or managing cron tasks and so on. It's also possible to use pip with zc.buildout.
And finally, the package management tools: easy_install and pip.
easy_install
It is part of setuptools and is a package manager. It's really easy to install packages with it. It allows you to download, build, install, and manage Python packages. It can also be used for installing packages in the form of eggs and packages from PyPI. easy_install was the main tool for installing third-party packages before pip was developed. On the dark side, it can't be used for uninstalling packages. easy_install will be deprecated in 0.7.x of distribute.
Here is an example of package installation with easy_install:

easy_install foobar
It’s a tool for Python packages management created by Ian Bicking. Primary it’s used for installing and management packages from PyPI. Pip is intended to replace easy_install and contains extra features (e.g., unlikely easy_install it can be used for uninstalling ). It can not be used for installing eggs but it seems it is not really a problem. pip uses setuptools or distribute. pip will be automatically installed into each virtual environment created with virtualenv.
Example of package installation with pip:

pip install foobar
One more important tool you should know about is virtualenv. It's widely used for creating isolated Python environments, which can be useful in both production and development. We will cover it later.
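The workflow that virtualenv popularized (and that the standard-library venv module provides in later Python versions) looks like this; the environment name is arbitrary:

```shell
python3 -m venv demo-env                     # create an isolated environment
. demo-env/bin/activate                      # activate it (POSIX shells)
python -c 'import sys; print(sys.prefix)'    # the prefix now points into demo-env
deactivate                                   # drop back to the system environment
```

Any `pip install foobar` run while the environment is active installs only into demo-env, leaving the system Python untouched.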
At the time of writing (Aug 2011), the preferred way to install packages is pip, and distribute is for packaging. easy_install and setuptools can be considered deprecated.
Now you can easily understand what co-creator of Django Jacob Kaplan-Moss meant:
Python has one package distribution system: source files and setup.py install.
And easy_install.
Python has two package distribution systems: setup.py install and easy_install. And zc.buildout.
Python has three package distribution systems: setup.py install, easy_install, and zc.buildout. And pip.
Amongst Python’s package distribution are such diverse elements as…
Future reading
- On packaging, Why I like pip, both by James Bennett, Django release manager
- Chapter 16. Packaging Python Libraries from Dive Into Python 3 by Mark Pilgrim
- A small introduction to Python Eggs by Christian Scholz
- Python Packaging, Distribution, and Deployment: Volume 1
- A Few Corrections To “On Packaging” by Ian Bicking
- A history of Python packaging by Martijn Faassen
- Developing Django apps with zc.buildout by Jacob Kaplan-Moss
- Chapter 14. Python Packaging by Tarek Ziadé
http://supportex.net/2011/08/python-distribution-systems-world/
This week has been a busy one. I’ve had interviews, worked on some labs, and have had social events in between. So far, I’m feeling pretty good about all of it.
More specifically, I’ve been keeping up with my Object Oriented Programming. As much as I would like to get into the specifics of what I’ve been keeping up with, I have to save it for next week.
I started brushing up on my Ruby and using OOP to create simple classes and methods, however, I recently had an interviewer ask me to do the same thing, but using JavaScript.
I hope to write a blog discerning the differences between the two and hopefully help identify the key similarities and differences.
Until then…
A Lesson from an Interview Question
Recently, I had an interview where this question came into play. Normally, I would be able to solve this pretty easily, however, the circumstances of the interview denied me the use of any external assistance and didn’t allow me to test my code.
That last part was the more difficult part for me. Anyway, I started pseudo-coding and was able to work out a pretty safe framework for how I thought the solution should look. …
Current Work-in-progress, JokeBook App
I’ve been working on my JokeBook app for a while, adding features and styling here and there, but I wanted to share how I was able to create an update function that allows a User to edit their Bio.
Since I am using a modal to open and close on click of ‘edit’, I created functions that opened and closed the modal in App.js.
Next, I created a handleUpdateUserBio function (still in App.js) that took in an event (in this case, I wanted to pass in the bio ONLY) and send it to the backend as…
What is a Block?
A Block in Ruby is a chunk of code between braces or between do..end that you can associate with method invocations, almost as if they were parameters.
Let’s look at a simple example:
def invoke_block
  puts 'Beginning of Method'
  yield
  puts 'End of Method'
end

invoke_block { puts "In the 'yield' block" }

Calling the method with a block like this, we would see the following output:

>Beginning of Method
>In the 'yield' block
>End of Method
A block is simply a chunk of code, and yield allows you to "inject" that code at some place into a function. So if you want your function to work in…
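yield can also pass arguments into the block — here is a small illustrative sketch (the method and names are mine, not from the post):

```ruby
# A block can receive arguments: whatever you pass to yield
# shows up in the block's |parameters|.
def greet_each(names)
  names.each { |name| yield(name) }
end

greet_each(%w[Ada Grace]) { |n| puts "Hello, #{n}!" }
# Hello, Ada!
# Hello, Grace!
```

The same method can be reused with a completely different block, which is what makes blocks feel like lightweight, anonymous callbacks.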
Recently, I made a blog post that acted as an re-introduction to keeping my Ruby skills sharp. As promised, here is my follow-up to what I’ve been doing.
I’ve learned a few new things while going through my old online assignments, primarily how different Ruby is compared to other programs (duh). However, I wanted to write down the pros as I get back into all this.
Ruby on Rails is pretty time efficient (considering). Although it is relatively easy to learn the basics of this framework, it will take some time for you to unlock its true potential. …
Getting back into coding after a brief time getting your post-graduated ducks in order can be a challenge in itself. Like any skill, coding is one that (if not practiced) can be dulled and rusted.
After brushing up on my Ruby, I definitely felt my mental muscles slowly remember themselves. With all the varied coding languages out there, it's strange how they can all be very similar, yet implemented in such different ways. Almost annoyingly so.
Today, I spent the better part of the day flexing those old muscles and trying my hand at some Ruby practice problems. …
A Simple Set-Up Guide
touch src/components/Map.js
Next, we need to set-up our Map Component…
A Beginners Guide To Setting-Up a Basic React App
Get excited, because “there must be a better way” isn’t referring to some ethereal future notion of understanding…it’s referring to React.
If “excited” isn’t the right word, then I’ll let you stick to a react-ion you’re most comfortable with
Let’s start with going into our console.
create-react-app blog-example
This will install all the appropriate packages and scripts that a basic React app will need.
Once your console is done downloading everything it needs, go into your new project directory and start your app to make sure it is set-up properly.
…
Adding Audio to EventListeners
For one of my Flatiron projects, I made an application in Javascript called SoundScribe. It was an app that allowed you to play a simple melody of notes and save that melody for future reference. It was meant for remembering a tune that you didn’t want to commit to memory.
For that project, I created both the front-end and back-end code. In this blog, I’m going to go through the steps I took to add audio functionality for when the notes on my NoteBar were clicked.
Just for reference, I created a migration file in…
HTML and CSS usually go hand-in-hand. As we all know, the HTML is the 'skeleton' of the website and the CSS is how you style your website and make it look nice. We're going to look at the basics behind styling HTML content with CSS Selectors.
CSS Selectors are used to find HTML elements and apply stylistic changes to those elements based on how specific you want that styling to be. There are 3 basic Selectors that can do this by calling on an element’s name, id, and class.
The element selector is used to…
https://medium.com/@prmeister89?source=post_internal_links---------0----------------------------
Is there a function or easy way to transpose a stream to a given key?
I want to use it in a loop, e.g, take a set of major streams and transpose all of then to C major (so then I can do some statistical work with them).
All the transpose tools I saw work with intervals or number of tones, not fixed keys. It shouldn't be so hard to write my function, but I suppose that it has to be already done... Thanks
If s is a Stream (such as a Score or Part), then s.transpose('P4') will move it up a Perfect Fourth, etc. If you know the key of s as k major, then i = interval.Interval(k, 'C') will let you do s.transpose(i) to move from k to C. If you don't know the key of s, then k = s.analyze('key') will do a pretty decent job of figuring it out (using the Krumhansl probe-tone method). Putting it all together:

from music21 import *

for fn in filenameList:
    s = converter.parse(fn)
    k = s.analyze('key')
    i = interval.Interval(k.tonic, pitch.Pitch('C'))
    sNew = s.transpose(i)
    # do something with sNew
This assumes that your piece is likely to be in major. If not, you can either treat it as the parallel major (f-minor -> F-major) or find in k.alternativeInterpretations the best major key analysis. Or transpose it to a minor if it's minor, etc.
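The interval arithmetic behind this can be sketched without music21 — transposing into C shifts every pitch class by (12 − tonic) mod 12. A toy sketch (the note table and helper are mine, not music21's API):

```python
# Toy pitch-class arithmetic illustrating what "transpose to C" means.
NOTE_TO_PC = {"C": 0, "C#": 1, "D": 2, "D#": 3, "E": 4, "F": 5,
              "F#": 6, "G": 7, "G#": 8, "A": 9, "A#": 10, "B": 11}
PC_TO_NOTE = {pc: name for name, pc in NOTE_TO_PC.items()}

def transpose_to_c(notes, tonic):
    """Shift every note by the interval that carries the tonic onto C."""
    shift = (12 - NOTE_TO_PC[tonic]) % 12
    return [PC_TO_NOTE[(NOTE_TO_PC[n] + shift) % 12] for n in notes]

print(transpose_to_c(["D", "F#", "A"], "D"))  # → ['C', 'E', 'G']
```

music21 does far more (it preserves spelling, octaves, and enharmonics), but the core of "transpose everything to C" is exactly this shift.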
https://codedump.io/share/9ivu81X9SqSv/1/music21---transpose-streams-to-a-given-key
Hello! Can it return JSON with source type PL/SQL? I do not want to use sys.htp.print.
Example:
begin
select .... or procedure... or function...
return X;
end;
You might want to checkout my JSON library from PL/SQL, if you need to generate JSON data:
Your question is not very clear.
If "source type" is "Query", "Query one row", "Feed", or "Media Resource": You must supply a SELECT statement.
yes, you can use a function in a SELECT statement.
select f( :input ) as pseudo_col_name from dual;
Also, PIPELINED functions come to mind.
PL/SQL
If "source type" is "PL/SQL"
(as far as I know) The only way to return data from "an anonymous PL/SQL block" to the web browser is by using the sys.htp package.
If you have a large amount of data to return (eg > 32,767 bytes), you'll need to LOOP over the CLOB.
Since you want JSON format, you will need to use a package like dtr's package or build-your-own.
(I like this site: )
MK
https://community.oracle.com/thread/3568125
public class KeyEvent extends TypedEvent

See Also: KeyListener, TraverseListener, Sample code and further information, Serialized Form

Fields inherited from class TypedEvent: data, display, time, widget
Fields inherited from class java.util.EventObject: source
Methods inherited from class java.util.EventObject: getSource
Methods inherited from class java.lang.Object: clone, equals, finalize, getClass, hashCode, notify, notifyAll, wait, wait, wait

public char character
The character represented by the key that was typed.

public int keyCode
The key code of the key that was typed, as defined by the key code constants in class SWT. When the character field of the event is ambiguous, this field contains the unicode value of the original character. For example, typing Ctrl+M or Return both result in the character '\r', but the keyCode field will also contain '\r' when Return was typed. Some key codes can be generated by more than one key; for example, a key down event with the key code equal to SWT.SHIFT can be generated by the left and the right shift keys on the keyboard.

public int keyLocation
The location of the key: SWT.LEFT, SWT.RIGHT, or SWT.KEYPAD. The location field can only be used to determine the location of the key code or character in the current event. It does not include information about the location of modifiers in the state mask.

public int stateMask
The state of the keyboard modifier keys and mouse masks at the time the event was generated. See Also: SWT.MODIFIER_MASK, SWT.BUTTON_MASK

public boolean doit
A flag indicating whether the operation should be allowed. Setting this field to false will cancel the operation.

public KeyEvent(Event e)
Constructs a new instance of this class based on the information in the given untyped event.
e - the untyped event containing the information

public String toString()
Overrides: toString in class TypedEvent

Copyright (c) 2000, 2014 Eclipse Contributors and others. All rights reserved. Guidelines for using Eclipse APIs.
http://help.eclipse.org/luna/topic/org.eclipse.platform.doc.isv/reference/api/org/eclipse/swt/events/KeyEvent.html
Risk, Return and Diversification
Laura Higgins from ASIC’s MoneySmart Australia recently spoke to ABC Breakfast, and highlighted that ‘85% of women under 35 don’t understand the fundamental investment concepts like risk and return or diversification’.
Risk, return and diversification are definitely topics that we are interested in here at How To Money, and we’d love to dive deeper and explore each of these concepts. From my experience this is not something covered in high school and only covered in specific tertiary education courses, so it is understandably not something that comes naturally to people.
Let’s get started with risk and return (from an investment perspective)…
Risk is the possibility that your investment won’t give you the outcome you were looking for. Return on the other hand is the money you gain or lose on an investment. The potential of a higher investment return usually correlates with a higher risk. ASIC offers a great guide on understanding your investment risk tolerance level, and I’d highly recommend having a read!
We tie a lot of emotions to our money and investment decisions, so it’s really important to consider our risk tolerance before making investment decisions.
Investing is a great way to grow your money, but it is definitely not a risk-free strategy. Some investments are lower risk, such as a term deposit at an Authorised Deposit-taking Institution (ADI), but these offer a lower return than many other investment choices. In a standard term deposit you know exactly what your return will be (e.g. 2.50% p.a. for 6 months) and when your capital will mature (e.g. 6 months).
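To make the term-deposit arithmetic concrete, here's an illustrative calculation (simple interest on hypothetical figures — not financial advice):

```python
def term_deposit_interest(principal, annual_rate, months):
    """Simple interest earned on a fixed term deposit."""
    return principal * annual_rate * (months / 12)

# $10,000 at 2.50% p.a. for 6 months:
print(term_deposit_interest(10_000, 0.025, 6))  # → 125.0
```

The return is small but known in advance — which is exactly the risk/return trade-off described above.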
Higher risk investments can come with the potential for higher returns, but also come with tradeoffs such a the potential for loss of capital and much more volatile returns. Some example of higher risk investments are shares, ETFs, managed funds, cryptocurrencies, currencies, futures, options…and the list goes on. Essentially any investment that does not guarantee a return of your capital can be considered a higher risk investment.
There are definitely things that investors can do to manage and mitigate their investment risks and they’re worth looking into such as; increasing their risk tolerance, diversifying their investment portfolio, increasing their knowledge of the markets and products and planning to invest with a longer time frame in order to ride out market downturns.
Now let’s check out diversification…
At its core, diversification is the practice of ‘not putting all of your eggs in one basket’ — a very old saying, but one that has held true throughout time. When you diversify your investment portfolio, you ensure that your money is spread across a range of investments. This means that your entire portfolio is not solely invested in a single company or through a single managed fund or through a single provider — rather a broad mix of shares, funds and providers.
Occasionally you hear on the news that someone has poured their entire life savings into a particular investment because of the promise of low risks and high returns, and has lost everything. Please don’t be that person that loses everything because you put all of your eggs in one basket.
A healthy degree of skepticism is always worth employing when it comes to investments, because at the end of the day it’s your money on the line.
Having a diversified portfolio reduces your overall investment risk, and it also means that if one of your investments goes wrong, you will not lose all of your capital. Sometimes your investments will fail, but if you’ve diversified your portfolio well — you’ll limit your exposure to the loss.
If you’re interested in learning more about money and personal finance, catch us weekly on the How To Money podcast which you can find on iTunes, or via the online web-player.
Follow us on Twitter @HowToMoneyAUS, on Medium at How To Money Australia or on our podcast over on iTunes.
If you have any questions or comments please leave them in the comments below!
Kate — HTM Founder and Editor
Kate is the founder and editor of the How To Money (HTM) platform. Kate created HTM to help young Australians start talking about personal finance, and share the resources she finds along her financial education journey. Kate started her own journey a few years back, and now works in the finance industry.
https://medium.com/how-to-money/https-medium-com-how-to-money-risk-return-and-diversification-d4e3599107f2
import "golang.org/x/tools/internal/memoize"
Package memoize supports memoizing the return values of functions with idempotent results that are expensive to compute.
The memoized result is returned again the next time the function is invoked. To prevent excessive memory use, the return values are only remembered for as long as they still have a user.
To use this package, build a store and use it to acquire handles with the Bind method.
Function is the type for functions that can be memoized. The result must be a pointer.
Handle is returned from a store when a key is bound to a function. It is then used to access the results of that function.
Cached returns the value associated with a handle.
It will never cause the value to be generated. It will return the cached value, if present.
Get returns the value associated with a handle.
If the value is not yet ready, the underlying function will be invoked. This activates the handle, and it will remember the value for as long as it exists. This will cause any other handles for the same key to also return the same value.
NoCopy is a type with no public methods that will trigger a vet check if it is ever copied. You can embed this in any type intended to be used as a value. This helps avoid accidentally holding a copy of a value instead of the value itself.
Store binds keys to functions, returning handles that can be used to access the functions results.
Bind returns a handle for the given key and function.
Each call to bind will generate a new handle. All of the handles for a single key will refer to the same value. Only the first handle to get the value will cause the function to be invoked. The value will be held for as long as there are handles through which it has been accessed. Bind does not cause the value to be generated.
Cached returns the value associated with a key.
It cannot cause the value to be generated. It will return the cached value, if present.
Delete removes a key from the store, if present.
Has returns true if the key is currently valid for this store.
Package memoize imports 5 packages (graph) and is imported by 3 packages. Updated 2019-09-15.
https://godoc.org/golang.org/x/tools/internal/memoize
React apps with Redux are built around an immutable state model. Your entire application state is one immutable data structure stored in a single variable. Changes in your application are not made by mutating fields on model objects or controllers, but by deriving new versions of your application state when you make a single change to the previous version.
This approach comes with some huge benefits in tooling and simplicity, but it requires some care.
It’s certainly possible to model your Redux application state using plain old JavaScript/TypeScript objects and arrays, but this approach has a few downsides:
- The built-in data types aren’t immutable. There’s no real protection against an errant mutation of your application state, violating the basic assumptions of Redux.
- Creating copies of arrays and maps is inefficient, requiring an O(n) copy of the data.
Fortunately, Facebook has also released Immutable.js, a library of persistent immutable data structures. This library provides maps, sequences, and record types that are structured to enforce immutability efficiently. Immutable.js data structures can’t be changed–instead, each mutation creates a brand new data structure that shares most of its structure with the previous version.
Since it's immutable, this is safe. It's also efficient. By using fancy data structures, each change is O(log32 n), effectively constant time.
Immutable.js comes with TypeScript type definitions for its built-in data structures. Less fortunately, creating immutable record types (e.g., classes) with heterogeneous values isn't as straightforward as Map<K,V>.
I looked for a solution and found a few libraries, but they all were too complicated. I wanted something that:
- Is simple and easy to understand
- Provides typed access to properties
- Offers a type-safe, statically enforced way to perform updates
- Resists incorrect use, without requiring static enforcement
Example
Here’s an example of the simplest approach I could come up with:
type CounterParams = { value?: number, status?: Status, }; export class Counter extends Record({ value: 0, status: OK }) { value: number; status: Status; constructor(params?: CounterParams) { params ? super(params) : super(); } with(values: CounterParams) { return this.merge(values) as this; } }
(Assume Status is an enumeration type with a value called OK.)
This approach supports:
- Creation via a strongly typed constructor with defaults for omitted values
- Statically-typed updates using the with method
- Statically-typed property access
Immutable.js includes a bunch of other methods, such as merge, that are inherited from Record or Map and present but untyped. I don't see these as part of the usual API for my records, and that's OK with me. I may not get a static error if I try to set a property, but I'll get a runtime error. That's safe enough for me, given that the convention will be to use the with method.
Use
Here’s how you’d use one of these immutable records:
let c1 = new Counter({ value: 2 }); c1.value // => 2 c1.status // => OK c1.foo // Type error let c2 = c1.with({ value: 3}); c2.value // => 3 c1.with({ status: 'bogus' }) // Type error
How It Works
All of the actual functionality is inherited from the Immutable.js Record class. We're using the type system to create a simple typed facade. There are a few key steps.
1. Define a parameter type
First, define a type that contains all of the fields you’ll have in your record. Each one is optional:
type CounterParams = { value?: number, status?: Status, };
Since Counter will have a value and a status, our CounterParams type has optional entries for both of those. This is an object type that may have a value of type number and a status of type Status, but it may be missing either or both of those.

This would be the logical type of the Immutable.js constructor argument and update method, if we'd written it from scratch. Since we're inheriting the implementation from JavaScript, it doesn't have more detailed type information.
2. Inherit from Record
Next, we define our class and inherit from Record:
export class Counter extends Record({ value: 0, status: OK }) {
There's nothing special here. This is just straight-up, normal, untyped Immutable.js record type definition with zero type help. We're just providing default values for value and status to the Record class creation function.
3. Type the constructor
Now, we define our constructor to take an optional CounterParams argument and delegate to super. If we construct our object with no argument, it gets all default values. If we construct it with params, we end up overriding just the supplied arguments:
constructor(params?: CounterParams) { params ? super(params) : super(); }
Note that we don't call super with undefined — only the no-argument super() call produces the proper default behavior.
4. Define with

Our with method is just a typed alias for the merge method inherited from Immutable's Map type. Unfortunately, merge won't statically enforce our type signature for parameters, and is typed to return a Map. We use CounterParams as our type argument to solve the former issue. We cast as this to solve the latter, which works because merge does, in fact, return an instance of our type – it just doesn't specify that in its signature at the time of writing this. Using as this convinces TypeScript that with will, in fact, return a Counter.

with(values: CounterParams) {
  return this.merge(values) as this;
}
And that’s it! A little repetitive, but simple and easy to understand. And we now have nice, statically typed persistent immutable model types.
11 Comments
Awesome post, the 4th piece is exactly what I was missing.
One addition: you can now get the static error on assignment by marking your properties as readonly on your class.
I just created a little library meant to do structural-sharing tree updates, with full static analysis and completion.
It has the advantage to use Plain-Old Javascript Objects, compared to ImmutableJS Records that wrap your object and make it more difficult to handle, and do not provide great static analysis.
You can use it very easily with Redux, and we use it everyday on our projects.
Nice post! You might even get more typing assistance by doing this:

export interface CParams {
  value: number;
  value2: number;
}

export class C extends Record({ value: 1, value2: 2 } as CParams) implements CParams {
  readonly value: number;
  readonly value2: number;

  constructor(params?: Partial<CParams>) {
    params ? super(params) : super();
  }

  with(vals: Partial<CParams>): C {
    return this.merge(vals) as this;
  }
}
I think you can accomplish the same objectives without any libraries.
interface State {
readonly a: string,
readonly b: string
}
const astate: State = { a: "high", b: "low" }

astate.b = "lower" // compile error

const another = { ...astate, b: "lower" }
Am I missing anything?
Good point Jeremy! This post was released before spread syntax was added in TS 2.1. Using that combined with readonly properties is probably a better default at this point.
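For illustration, here is a minimal sketch of that spread-plus-readonly approach in plain TypeScript (no Immutable.js; the names are illustrative, not from the article):

```typescript
type Status = "OK" | "ERROR";

interface CounterState {
  readonly value: number;
  readonly status: Status;
}

const initial: CounterState = { value: 0, status: "OK" };

// Updates produce a brand-new object; assignment to a readonly
// field is a compile-time error, so the original stays untouched.
function withValues(
  state: CounterState,
  values: Partial<CounterState>,
): CounterState {
  return { ...state, ...values };
}

const c1 = withValues(initial, { value: 2 });
const c2 = withValues(c1, { value: 3 });
// c1.value === 2, c2.value === 3, c2.status === "OK"
// c1.status = "ERROR";  // compile error: status is readonly
```

Unlike Immutable.js Records, each update here copies the whole (small) object rather than structurally sharing, which is usually fine for flat state shapes.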
Hi,
You might want to read immutablejs documentations. It maps unchanged properties of former object and only assigns changed properties. So it is fast and space saving. But of course it could be cumbersome for simple solutions and your proposal works just fine.
It seems to me that this approach provides compile-time safety, but it doesn’t give any run-time safety. For runtime safety, I think you need something like Immutable.js.
I appreciate the article Drew. I arrived here looking for an answer to this question – perhaps you could help:
I have a class extending a Record like yours above. But in the constructor, I also do some validation of an array (say…. do all the numbers in it add up to 100?). If not I also add a property `isValid` (a boolean).
Now, when I want to update the object, I also want to rerun that validation logic. My first thought was to create a class from the update, but then I get all those default values, which will overwrite existing values (which I do not intend, I just have a couple fields to update).
A second thought is to add it to your `with` function??
What do you think?
Hi Morgan,
I think running the validation in with would work. I'd consider a few other options as well.

One potential downside of the isValid approach is that it complicates your domain model with validation logic. It may be desirable depending on your circumstance, but my default would be to avoid it.

One useful technique for this is to create a separate type Valid<T> which is a T that has been proven to be valid. You can then define a type guard which, given some model, proves that that model is valid.

You can then define containers and functions to take a Valid<Model> and TypeScript will enforce that anything stored there/passed into that function has been checked for validity.

See an example here.

You can elaborate on this technique to return a set of validation errors in the case of invalid objects by changing validate model to build/return some sort of failed validation summary instead of undefined.
Drew
Interesting… thanks for the gist and explanation. I was unfamiliar with type guards. I’m trying to get through the rest of that gitbook.
Here’s another consideration for this scenario. I am hoping to save the `isValid` to the object in the DB. Why? I’d like to query against it perhaps. At least use it for user analytics.
With that in mind, I like your idea of returning (one of) a set of validation errors for invalid objects. Think you would do something like:
if (isInvalidForReason1(someModel)){
someModel.invalid = 'REASON_1';
}
Before sending the PUT/PATCH to the db that is.
Thanks for you thoughts on this.
-Morgan
The current rc release of immutable has much better TypeScript generic support:
npm install immutable@rc
import * as Immutable from "immutable";

interface Foo {
  id: number;
  bar: string;
}

const FooRecord = Immutable.Record({
  // Default values
  id: null,
  bar: "Empty (default)"
});

const foo = FooRecord({ id: 3 });
foo.id; // 3
foo.bar; // "Empty (default)"

const bar = foo.merge({ bar: "blah" });
bar.id // 3
bar.bar // "blah"
https://spin.atomicobject.com/2016/11/30/immutable-js-records-in-typescript/
#include <db.h> int DB_ENV->rep_elect(DB_ENV *env, u_int32_t nsites, u_int32_t nvotes, u_int32_t flags);
The
DB_ENV->rep_elect() method holds an election for the master of a
replication group.
The
DB_ENV->rep_elect() method is not called by most replication
applications. It should only be called by Base API applications implementing
their own network transport layer, explicitly holding replication
group elections and handling replication messages outside of the
Replication Manager framework.
If the election is successful, Berkeley DB will notify the application of the results of the election by means of either the DB_EVENT_REP_ELECTED or DB_EVENT_REP_NEWMASTER events (see DB_ENV->set_event_notify() method for more information). The application is responsible for adjusting its relationship to the other database environments in the replication group, including directing all database updates to the newly selected master, in accordance with the results of the election.
The thread of control that calls the
DB_ENV->rep_elect() method must
not be the thread of control that processes incoming messages;
processing the incoming messages is necessary to successfully complete
an election.
Before calling this method, the enclosing database environment must already have been opened by calling the DB_ENV->open() method and must already have been configured to send replication messages by calling the DB_ENV->rep_set_transport() method.
Elections are done in two parts: first, replication sites collect information from the other replication sites they know about, and second, replication sites cast their votes for a new master. The second phase is triggered by one of two things: either the replication site gets election information from nsites sites, or the election timeout expires. Once the second phase is triggered, the replication site will cast a vote for the new master of its choice if, and only if, the site has election information from at least nvotes sites. If a site receives nvotes votes for it to become the new master, then it will become the new master.
We recommend nvotes be set to at least:
(sites participating in the election / 2) + 1
to ensure there are never more than two masters active at the same time even in the case of a network partition. When a network partitions, the side of the partition with more than half the environments will elect a new master and continue, while the environments communicating with fewer than half of the environments will fail to find a new master, as no site can get nvotes votes.
We recommend nsites be set to:
number of sites in the replication group - 1
when choosing a new master after a current master fails. This allows the group to reach a consensus without having to wait for the timeout to expire.
When choosing a master from among a group of client sites all restarting at the same time, it makes more sense to set nsites to the total number of sites in the group, since there is no known missing site. Furthermore, in order to ensure the best choice from among sites that may take longer to boot than the local site, setting nvotes also to this same total number of sites will guarantee that every site in the group is considered. Alternatively, using the special timeout for full elections allows full participation on restart but allows election of a master if one site does not reboot and rejoin the group in a reasonable amount of time. (See the Elections section in the Berkeley DB Programmer's Reference Guide for more information.)
Setting nsites to lower values can increase the speed of an election, but can also result in election failure, and is usually not recommended.
The nsites parameter specifies the number of replication sites expected to participate in the election. Once the current site has election information from that many sites, it will short-circuit the election and immediately cast its vote for a new master. The nsites parameter must be no less than nvotes, or 0 if the election should use the value previously set using the DB_ENV->rep_set_nsites() method. If an application is using master leases, then the value must be 0 and the value from DB_ENV->rep_set_nsites() method must be used.
The nvotes parameter specifies the minimum number of replication sites from which the current site must have election information, before the current site will cast a vote for a new master. The nvotes parameter must be no greater than nsites, or 0 if the election should use the value ((nsites / 2) + 1) as the nvotes argument.
The
DB_ENV->rep_elect()
method may fail and return one of the following non-zero errors:
DB_REP_UNAVAIL: The replication group was unable to elect a master, or was unable to complete the election in the election timeout period (see DB_ENV->rep_set_timeout() method for more information).
http://docs.oracle.com/cd/E17276_01/html/api_reference/C/repelect.html
This chapter describes the two-dimensional Apollonius graph of CGAL. We start with a few definitions. The software design of the 2D Apollonius graph package is described next, followed by a discussion of the geometric traits of the package. (The section numbers and figure references in this extract were lost.)

A site whose Apollonius circle is contained in the Apollonius circle of another site has an empty cell; if this is the case we call the site hidden (these are the black circles in the figure). The algorithms used are described in [KE02, KE03].
). The first three (red circles in
Fig.
) define a tritangent circle (yellow
circle in Fig.
). What we want to determine is
the sign of the distance of the green circle from the yellow
circle. The distance between two circles and
is defined as the distance of their centers minus
their radii: Fig.
). essense, afore mentioned CGAL::Ring_tag and CGAL::Sqrt_field_tag. When CGAL::Ring_tag is used, only ring operations are used during the evaluation of the predicates, whereas if CGAL: [Dev98].: examples/Apollonius_graph_2/example1.C #include <CGAL/basic.h> // standard includes #include <iostream> #include <fstream> #include <cassert> // the number type #include <CGAL/MP_Float.h> #include <CGAL/Filtered_exact.h> // example that uses the Filtered_exact number type typedef CGAL::Filtered_exact<double>2 || defined CGAL_USE_CORE # include <CGAL/Filtered_exact.h> #endif #if defined CGAL_USE_LEDA // If LEDA is present use leda_real as the exact number type for // Filtered_exact typedef CGAL::Filtered_exact<double,leda_real> NT; #elif defined CGAL_USE_CORE // Othwrwise if CORE is present use CORE's Expr as the exact number // type for Filtered_exact typedef CGAL::Filtered_exact<double>3/example4; }
http://www.cgal.org/Manual/3.1/doc_html/cgal_manual/Apollonius_graph_2/Chapter_main.html
Hello, and welcome again!
Scapy is one of the most powerful packet manipulation and decoding libraries for Python. Scapy is used for forging and manipulating packets from Python, and it can also serve as an alternative for some of the functionality provided by the popular tools Wireshark and Nmap.
In this article let's see how to use a few basic features of scapy, and how to sniff traffic on a network interface by writing a simple Python script.
Install scapy module for Python:
easy_install scapy
So let's see a few basic functions provided by the scapy library.
Let's start scapy first:
Once you install scapy, go to your terminal and type "sudo scapy".
Note: SCAPY should be run as root.
menoe@menoetius:~$ sudo scapy
WARNING: No route found for IPv6 destination :: (no default route?)
Welcome to Scapy (2.3.2)
>>>
Now, let's create a packet using scapy:
>>> ip = IP(dst="google.com")
>>> ip.dst
Net('')
This created a simple IP packet whose destination parameter points to "google.com"; alternatively, you can specify the IP address of the destination.
Now, let's add a src parameter to the IP packet we just created:
>>> ip.src = "192.168.1.100"
Now let's see all available parameters of the IP layer:
>>> ip.show()
###[ IP ]###
version= 4
ihl= None
tos= 0x0
len= None
id= 1
flags=
frag= 0
ttl= 64
proto= hopopt
chksum= None
src= 192.168.1.100
dst= Net(‘google.com’)
options
Note: we can set all the parameters if we require to set the parameters.
Next, let's add a TCP layer to the existing packet. To do that, we use the "/" operator to append layers to the packet:
>>> packet = ip/TCP(sport=1020, dport=80)
Look at the packet attributes and layers it contains.
>>> packet.show()
###[ IP ]###
version= 4
ihl= None
tos= 0x0
len= None
id= 1
flags=
frag= 0
ttl= 64
proto= tcp
chksum= None
src= 192.168.1.100
dst= Net(‘google.com’)
options
###[ TCP ]###
sport= 1020
dport= http
seq= 0
ack= 0
dataofs= None
reserved= 0
flags= S
window= 8192
chksum= None
urgptr= 0
options= {}
Note: we can add an Ethernet protocol layer to the packet by using the Ether function. Usage: Ether()/IP()/TCP()
If the Ether() function is used without parameters, it takes your machine's default MAC address as the source MAC address.
Now, let's send the IP packet we just created. We use the send function for this; the count parameter specifies the number of times to send the packet.
>>> send(packet, count=20)
………………..
Sent 20 packets.
Note: we need to use “sendp” function for sending ethernet packets.
Now, let's craft a layer 3 ICMP request packet using scapy. The sr() function sends a layer 3 packet and receives the responses from the destination, returning both answered and unanswered packets. The sr1() function sends a packet and returns only the first answer packet sent back by the destination.
>>> result, unans = sr(IP(dst="abc.com")/ICMP())
.Finished to send 1 packets.
*
Received 2 packets, got 1 answers, remaining 0 packets
>>> result.summary()
IP / ICMP 192.168.1.100 > 199.181.132.250 echo-request 0 ==> IP / ICMP 199.181.132.250 > 192.168.1.100 echo-reply 0
Here, as we can see, we have received an echo reply for our request to the address abc.com.
Now we know a few basic operations that can be performed using scapy. If you observe closely, scapy lets us spoof the packets we send by editing the src parameter, which can be leveraged for denial-of-service style attacks.
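Under the hood, what scapy serializes for the IP layer is just a packed 20-byte header, and "spoofing" simply means writing an arbitrary address into its source field. As a rough, scapy-free sketch of that (the function name and defaults below are my own, not part of scapy), here is how such a header could be packed with only Python's standard library:

```python
import socket
import struct

def build_ipv4_header(src, dst, payload_len, proto=socket.IPPROTO_ICMP, ttl=64):
    """Pack a minimal 20-byte IPv4 header (no options).

    The checksum field is left at 0 here; when sending through a raw
    socket the kernel normally fills it in.
    """
    version_ihl = (4 << 4) | 5            # version 4, header length 5 * 4 = 20 bytes
    total_len = 20 + payload_len
    return struct.pack(
        "!BBHHHBBH4s4s",
        version_ihl, 0, total_len,        # version/IHL, TOS, total length
        0, 0,                             # identification, flags + fragment offset
        ttl, proto, 0,                    # TTL, protocol, checksum placeholder
        socket.inet_aton(src),            # spoofable source address
        socket.inet_aton(dst),
    )

hdr = build_ipv4_header("192.168.1.100", "8.8.8.8", payload_len=8)
print(len(hdr))  # 20
```

Nothing in the header itself authenticates the source address, which is exactly why editing src in scapy is enough to spoof a packet.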
Now let's create a simple Python script to sniff traffic on your local machine's network interface.
from scapy.all import *  # import scapy module to python

def sniffPackets(packet):  # custom packet sniffer action method
    if packet.haslayer(IP):
        pckt_src = packet[IP].src
        pckt_dst = packet[IP].dst
        pckt_ttl = packet[IP].ttl
        print "IP Packet: %s is going to %s and has ttl value %s" % (pckt_src, pckt_dst, pckt_ttl)

def main():
    print "custom packet sniffer"
    sniff(filter="ip", iface="wlan0", prn=sniffPackets)  # call scapy's inbuilt sniff method

if __name__ == '__main__':
    main()
Here in this simple script, we are leveraging the scapy module's method called "sniff". It takes as a parameter the interface you wish to sniff packets on; in this case, I wanted to sniff packets on interface "wlan0". The filter parameter specifies which packets should be captured, and the prn parameter specifies which function to call for each sniffed packet, passing the packet as a parameter to that function. Here our custom function is "sniffPackets".
Inside the sniffPackets function we check whether the sniffed packet has an IP layer; if it does, we store the source, destination and ttl values of the sniffed packet and print them out.
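For intuition about what packet[IP].src, packet[IP].dst and packet[IP].ttl resolve to, here is a hypothetical, stdlib-only equivalent that unpacks the same three fields from a raw IPv4 header (scapy does this parsing for you; this sketch is not part of the script above):

```python
import socket
import struct

def parse_ipv4_header(raw):
    """Extract (src, dst, ttl) from the first 20 bytes of an IPv4 header."""
    fields = struct.unpack("!BBHHHBBH4s4s", raw[:20])
    ttl = fields[5]                      # 6th field in the format string
    src = socket.inet_ntoa(fields[8])
    dst = socket.inet_ntoa(fields[9])
    return src, dst, ttl

# Build a sample header to parse (TTL 64, ICMP, 192.168.100.114 -> 192.168.100.1).
sample = struct.pack(
    "!BBHHHBBH4s4s", 0x45, 0, 28, 0, 0, 64, 1, 0,
    socket.inet_aton("192.168.100.114"), socket.inet_aton("192.168.100.1"))
print(parse_ipv4_header(sample))  # ('192.168.100.114', '192.168.100.1', 64)
```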
To run the script:
Save the script and run it as root through the Python interpreter. This makes the script listen to traffic on the specified interface.
Open any web browser and start browsing, then switch back to the terminal to see the sniffed packets.
Sample Output:
>>sudo python scapy_sniff.py
WARNING: No route found for IPv6 destination :: (no default route?)
custom packet sniffer
Packet: 192.168.100.114 is going to 192.168.100.1 and has ttl value 64
Packet: 192.168.100.114 is going to 192.168.100.1 and has ttl value 64
Packet: 192.168.100.114 is going to 192.168.100.1 and has ttl value 64
Packet: 192.168.100.1 is going to 192.168.43.14 and has ttl value 64
Packet: 192.168.100.1 is going to 192.168.43.14 and has ttl value 64
Packet: 192.168.100.1 is going to 192.168.43.14 and has ttl value 64
……..
These are just a few of the basic things we can achieve with scapy.
Hope you enjoyed this article. Thank you.
https://www.cybrary.it/0p3n/forge-sniff-packets-using-scapy-python/
The problems:
- Constantly having to dereference yourself in your own objects: self->foo, self->bar. This alone can make a 10-15 line member function look extremely dense.
- Long method names. There is just one big namespace, so the name of the class is prefixed to every single method name, every time it is called. Even worse, you have to explicitly call gtk_superclass_operation(foo) even if foo is more specialized. That is, you have to remember which classes the methods are originally declared in when calling, instead of just invoking them on the object.
- Constant need for casting macros like GTK_ADJUSTMENT(foo), GTK_CONTAINER(foo) and the like. This is made worse by the fact that typically you only use them inside argument lists of calls (i.e. nesting gets ugly.) Unless you declare extra pointers with the right type and cast them earlier on, you have to do this or get compiler problems and risk type errors. Even that is extra lines of code using cast macros; there's no way for clarity to win on this one.
GTK+ and its object system are still easy to use and relatively straightforward. I think it is an excellent user-interface library. But if there is one real criticism to be made of GTK use, it is this: at times the sheer verbosity of the object system.
(For several reasons, I cannot use Glade to speed this up for me.)
Languages are a touchy issue---I hope nobody will take this the wrong way. I make no criticisms of the design of anyone's library or object system. My concerns here are limited strictly to usability at the syntactic level, and are based entirely on my own experiences.
Here is my suggestion for improving the situation. Provide a runtime dynamic object library just as is planned with GObject, but also provide a minimal set of syntax extensions on top of C, so that all three problems mentioned above just disappear.
No time to design and build something new, you say? But someone has already built a small syntax extension to C that fits the above requirements.
The main changes.
I would like to propose that app developers for GNOME and GTK start investigating this. Everything that I have learned so far tells me that Objective-C is exactly the "small step" that I, and possibly others in the world of UNIX applications development, would like to take. I don't want to give up even one bit of C's power, but I'm not afraid of convenience and new tools. I like to use abstract data types and some interfaces, but I don't want a completely new language that throws out what I know. Only the parts you want to write in objc need be written that way---the language is completely compatible with C. There are already objc bindings for GTK+ and GNOME, but they do not appear to be actively maintained. Generating interest in the language and volunteering help (as I am doing) could change the situation and find a real answer to the agonized "C or C++" question for people interested in trying something new.
With reference to GObject, in the immediate future, I believe that we could explore two possibilities:
- Improve the existing Objective-C bindings for GTK+ and GNOME and keep them up-to-date, so that people can begin developing more applications, and so that the barrier to experimentation for other developers is lower. I am willing to help with the bindings, so I have contacted the people involved.
- Either that, or.
Someone with better knowledge of GObject's design would have to comment on these ideas. Thank you, if you have read this far into my rant :-).
-dave
[link to more info on objective c]
[Apple's recent book on Objective-C]
http://www.advogato.org/person/dto/diary/68.html
Using DMD 0.118, Windows 98SE.
This was briefly talked about before, but not actually reported as a
bug. It's a separate issue from the other interface covariance bug I
reported a while back.
The compiler allows a method with an interface return type to be
overridden with a class return type. However, when this is done,
strange things happen, from AVs to doing things that seem to have no
relation to the method that was called.
Two similar testcases:
----- covariant_int2.d -----
import std.stdio;
interface Father {}
class Mother {
Father test() {
writefln("Called Mother.test!");
return new Child(42);
}
}
class Child : Mother, Father {
int data;
this(int d) { data = d; }
override Child test() {
writefln("Called Child.test!");
return new Child(69);
}
}
void main() {
Child aChild = new Child(105);
Mother childsMum = aChild;
Child childsChild = aChild.test();
Child mumsChild = cast(Child) childsMum.test();
}
----- covariant_int4.d -----
import std.stdio;
interface Father {
void showData();
}
class Mother {
Father test() {
writefln("Called Mother.test!");
return new Child(42);
}
}
class Child : Mother, Father {
int data;
this(int d) { data = d; }
override Child test() {
writefln("Called Child.test!");
return new Child(69);
}
void showData() {
writefln(data);
}
}
void main() {
Child aChild = new Child(105);
Mother childsMum = aChild;
aChild.test();
Father mumTest = childsMum.test();
aChild.showData();
mumTest.showData();
}
----------
D:\My Documents\Programming\D\Tests\bugs>covariant_int2
Called Child.test!
Called Child.test!
Error: Access Violation
D:\My Documents\Programming\D\Tests\bugs>covariant_int4
Called Child.test!
Called Child.test!
105
Child
----------
I'm guessing that the underlying cause of both is the same - as
speculated before
interface references aren't compatible with class references. This
means that when a method is covariantly overridden from interface to
class, and it is then called through the base class, a class reference
is returned, which is no good as the base class method, and hence the
caller through it, needs an interface reference.
The spec doesn't explicitly forbid this, but if it isn't supposed to
work then the compiler should be giving an error (and the spec updated
accordingly). Otherwise, it could be fixed to work like this:
- The compiler would detect that a method is being overridden from
Father (interface) to Child (class), and compile Child.test to return an
interface reference for compatibility
- When the method is called through a Child reference, the caller would
need to implicitly convert the returned Father reference to a Child
reference. Of course, this conversion can be optimised away if the
context dictates that a Father reference is required.
- It would be necessary to throw in a restriction or two. A class
cannot derive from both a class and an interface, or multiple
interfaces, if they define methods with the same name and parameter
types but one has a class return and the other has an interface return.
Assuming that it would be impossible to compile the method to be
compatible with both simultaneously.
I haven't experimented with interface-to-interface covariant overrides,
so don't know if these work. But I can imagine there being
complications when multiple interface inheritance is involved.
The question: Is it worth making this work? Or do these complications
mean that we ought to disallow interface-to-class overrides altogether?
Stewart.
--
My e-mail is valid but not my primary mailbox. Please keep replies on
the 'group where everyone may benefit.
http://forum.dlang.org/thread/d1mbpo$2peg$1@digitaldaemon.com
An In-Depth Look at java.util.LinkedList
Explore the Linked List data structure further, specifically focusing on the java.util.LinkedList class.
This article is part of Marcus Biel’s free Java 8 course focusing on clean code principles. In this article, let's walk through the Collections class LinkedList and compare it to ArrayList.
A PDF of the article is also available here.
As the name implies, the Java class LinkedList is called LinkedList because internally it is based on a Doubly Linked List.
Difference Between LinkedList and java.util.LinkedList
Difference Between ArrayList and LinkedList
Compared to a LinkedList, storing elements in an ArrayList consumes less memory and generally gives faster access times.
Besides the different data structures of ArrayList and LinkedList, LinkedList also implements the Queue and the Deque interfaces which give it some additional functionality over ArrayList.
In conclusion, there is no overall winner between ArrayList and LinkedList. Your specific requirements will determine which class to use.
LinkedList Implementation
Let’s put ArrayList aside for now and have an in-depth look at the LinkedList implementation. Here is a simplified code excerpt from the java.util.LinkedList class:
package java.util;

public class LinkedList<E> implements List<E>, Deque<E> {
    private Node<E> first;
    private Node<E> last;

    public E get(int index) { … }
    public boolean add(E e) { … }
    public E remove(int index) { … }
    […]
}
I don't expect you to fully grasp every detail of the code; I just want to show you that LinkedList is a normal Java class which anyone could have written, given enough time and knowledge. The real source code is available online.
So the LinkedList class has a reference to the first and last elements of the list, shown as red arrows in this image below:
Every single element in a Doubly Linked List has a reference to its previous and next elements as well as a reference to an item, simplified as a number within a yellow box on this image above.
public class Node<E> {
    private E item;
    private Node<E> previous;
    private Node<E> next;

    public Node(E element, Node<E> previous, Node<E> next) {
        this.item = element;
        this.next = next;
        this.previous = previous;
    }
    [...]
}

In my tutorial about ArrayList, I wrote about the List interface methods already. In this article, I want to look at the methods of the Queue and the Deque interface as implemented by LinkedList.
java.util.Queue interface
From a high-level perspective, the Queue interface consists of three simple operations:
- add an element to the end of the Queue
- retrieve an element from the front of the Queue, without removing it
- retrieve and remove an element from the front of the Queue

Finally, you can retrieve and remove an element from the front of the Queue. If the Queue is empty, remove will throw an Exception, while poll will return null.
java.util.Deque interface
Finally, you can retrieve and remove elements from both ends of the Deque. "removeFirst" and "removeLast" will throw an Exception when the Queue is empty, while "pollFirst" and "pollLast" will return null in this case.
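A minimal sketch of these two interfaces in action (the class and variable names here are my own, not from the article):

```java
import java.util.Deque;
import java.util.LinkedList;
import java.util.Queue;

public class LinkedListDemo {

    // Queue view: add at the end, retrieve/remove from the front (FIFO).
    static Integer queueDemo() {
        Queue<Integer> q = new LinkedList<>();
        q.offer(1);              // add to the end
        q.offer(2);
        q.peek();                // look at the front without removing it
        return q.poll();         // retrieve and remove from the front -> 1
    }

    // Deque view: the same LinkedList, but usable at both ends.
    static Integer dequeDemo() {
        Deque<Integer> d = new LinkedList<>();
        d.offerFirst(1);         // add to the front
        d.offerLast(2);          // add to the end
        return d.pollLast();     // retrieve and remove from the back -> 2
    }

    public static void main(String[] args) {
        System.out.println(queueDemo());   // 1
        System.out.println(dequeDemo());   // 2
    }
}
```

On an empty LinkedList, poll/pollFirst/pollLast return null while remove/removeFirst/removeLast throw, matching the contract described above.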
Stack data structure.
Published at DZone with permission of Marcus Biel, DZone MVB. See the original article here.
Opinions expressed by DZone contributors are their own.
https://dzone.com/articles/linked-list-journey-continues
Another user had contacted kaggle about it. Hopefully we will get an answer on this soon.
Hi Jimmy, thanks for the info. Hopefully Kaggle will resolve the issue soon.
The MNIST notebook uses test_batches as validation data for model fitting. Shouldn't the validation set be partitioned from the training set, thereby reserving the test data only for final model evaluation?
A second question: in the data augmentation section of the MNIST notebook, why are test_batches augmented?
Hi
I wanted to use VGG16 for statefarm. I could easily achieve 85% accuracy by finetuning the last layer. Then I tried to change the last dense layers' dimensions, but validation accuracy got stuck at 10%, meaning it doesn't work at all. Rather than getting the predictions of the convolution layers and feeding them to a new linear model, I tried to change the Vgg model directly by changing its Dense layers. Why doesn't it work?
This is what I did:
vgg = Vgg16()
O = Vgg16()
# removing layers after first dense layer
denses = [i for i, l in enumerate(vgg.model.layers) if isinstance(l, Dense)]
denses.pop()
for i in range(denses[0], len(vgg.model.layers)):
vgg.model.pop()
for l in vgg.model.layers: l.trainable = False
# adding dense layers same as VGG16
vgg.model.add(Dense(4096, activation='relu'))
vgg.model.add(Dropout(.5))
vgg.model.add(Dense(4096, activation='relu'))
vgg.model.add(Dropout(.5))
vgg.model.add(Dense(10, activation='softmax'))
# even load trained weights of VGG16
for d in denses:
vgg.model.layers[d].set_weights(O.model.layers[d].get_weights())
# resetting classes to statefarm's
classes = list(iter(batches.class_indices))
for c in batches.class_indices:
classes[batches.class_indices[c]] = c
vgg.classes = classes
vgg.compile()
vgg.fit(batches, val_batches)
I expect vgg to work exactly like the finetuned O with 10 outputs, but val_accuracy never changes on subsequent epochs, while O could achieve 85% accuracy.
vgg
O
Actually I can pop Vgg16's last four layers and it will work by adding another dense layer, but if I pop the last five layers (that means removing all layers after flatten) this problem happens again.
Does data augmentation help when fine tuning the final dense layers of a pre-trained neural network?
justin,
Did you run cell 79 again after running cell 80? Otherwise, I'm confused just like you were months prior. Why would we initialize the weights to bn_layers to be the prior bn_model weights after we've already attached bn_layers to the conv_layers and thus created the final_model object? It seems like we should run cell 80 before cell 79 since the scope of cell 80 shouldn't extend to the bn_layers that have been appended to the final_model object.
bn_layers
bn_model
conv_layers
final_model
Thanks,Patrick
Hi Patrick, I didn't run cell 80 after running cell 79, because maybe cell 80 is used to create another model and apply the weights to the new model (I can't remember exactly), but one thing is for sure: you can ignore cell 80 like we've discussed above; it won't affect the following code.
I am facing the following errors. Has someone seen this already and knows the answer
def get_lin_model(): model = Sequential([ Lambda(norm_input, input_shape=(1,28,28)), Flatten(), Dense(10, activation='softmax') ]) model.compile(Adam(), loss='categorical_crossentropy', metrics=['accuracy']) return model
lm = get_lin_model()
Error as following
178 outbound_layer.name + 179 '" should return a tensor. Found: ' +--> 180 str(output_tensors[0])) 181 if len(output_tensors) != len(output_shapes): 182 raise ValueError('The get_output_shape_for method of layer "' +
get_output_shape_for
TypeError: The call method of layer "lambda_3" should return a tensor. Found: None
call
this problem is for the mnist dataset in lesson-3
This is my first post, so I hope I am in the correct section.
I have a question on Lesson3 workbook. We split the VGG model into convolution and FC networks, then calculate the output of the convolution part. How does this work?? When we do:
trn_features = conv_model.predict_generator(batches, batches.nb_sample)
My understanding is that predict will create predictions of class membership (labels or probabilities). And so, I would expect any output from a predict method to be either labels or class probabilities. The output of predict_generator seems to be representation features from the convolution layer. I looked at the source for predict_generator w/o any insight.
Any suggestions would be welcomed. Thanks
The predict generator is going to just output the values of the last layer of the model on which it is called. If the last layer is something like a softmax than you can calculate labels. In your case the last layer is a conv layer / max pool and hence the output will be the features extracted by the convolution layer. It is a hacky way to speed up the process of training. It will be more clear when you learn about the functional api in the next classes.
Hi,
How's the preprocessing different with TensorFlow backend? Is it only the channels that differ, i.e. the input_shape from (3, 224, 224) to (224, 224, 3) ?
input_shape
(3, 224, 224)
(224, 224, 3)
Thanks!
I was wondering why you use training features to train w/o dropout instead of directly using the batched pre-computed in previous steps. It adds extra calculations that can be avoided according to my low-level criterion :D. I suppose that I'm wrong but I would like to know the explanation for this thing.
Thanks
Hi all,
I think I am missing a piece of the puzzle when it comes to SGD, I wonder if anyone can point me in the right direction? My naive assumption is that using a linear model comprising the last fully connected layer of a network should behave in exactly the same way as a linear model comprising of all the fully connected layers of a network where only the last layer is trainable.
In lesson 2 under the section Train linear model on predictions, subsection Training the model, Jeremy gets the features from the penultimate layer of the CNN and then uses these as input to a linear model which he defines and compiles as
lm = Sequential([ Dense(2, activation='softmax', input_shape=(1000,)) ])
lm.compile(optimizer=opt, loss='categorical_crossentropy', metrics=['accuracy'])
This is great and speeds up the fine tuning process enormously.
In this lesson we split the model into conv_model and fc_model, to experiment with removing dropout. Instead of removing dropout I wanted to perform the same experiment as in Lesson 2 with the final fully connected layers. That is, only set the last layer in fc_model to trainable=True. To achieve this I use the below approach where I copy the initial weights from lm to the last layer of fc_model, with the assumption that both models will now behave in the same way.
If I set opt=RMSprop(lr=0.01) I can train lm using lm.fit, however unless I reduce the learning rate to 0.001 I cannot train fc_model. By that I mean the accuracy stays around 0.5, from which I imply that I have overshot the minimum by choosing a learning rate which is too great.
If I set opt=SGD(lr=0.1) again I can train lm, however I have to reduce this to lr=0.001 to get fc_model to train.
What am I missing?
It appears my assumption above was correct. I was getting the described behaviour because the first two layers of fc_model were still trainable even though fc_model.summary() output
Total params: 119,554,050
Trainable params: 8,194
Non-trainable params: 119,545,856
According to the documentation entry "How can I 'freeze' Keras layers?" (which I should have read more closely), after setting the trainable property the model needs to be compiled.
Thank you.
Hi and thanks to @jeremy and @rachel for this wonderful class!
I'm starting the Lesson 3 lecture video, and the review of the key concepts, and I'm walking through the convolution-intro.ipynb notebook. I don't have Tensorflow installed currently, so I followed @jeremy's advice (which now I cannot find) and used the Keras MNIST dataset instead. Now, I'm getting strange results:
corrtop
Details:
Getting the dataset from Keras:
from keras.datasets import mnist
(x_train, y_train), (x_test, y_test) = mnist.load_data() # saves to /root/.keras/datasets/mnist.pkl.gz
Then I assigned the images and labels variables as follows:
images
labels
images=x_train
labels=y_train
n=len(images)
images.shape
Output: (60000, 28, 28) (NOTE: TF was (55000, 28, 28)).
(60000, 28, 28)
I computed the corrtop value using the existing code:
corrtop = correlate(images[inspect_idx], top)
Here are the plots of the resulting corrtop data. The original TF versions on the left, mine on the right. Note that the numeral '7' is a different index and a different sample, so the shape is different. That's not my concern. My concern is the overall appearance. It looks like something is wrong with the filters or something.
plot(corrtop[dims])
TF: Keras:
plot(corrtop)
TF: Keras:
Can anyone shed light on why such a difference here? The only change I've made really is to use the Keras dataset instead of the Tensorflow dataset.
Thanks much!
Dimension ordering is different inTensorFlow compared to Theano. In TensorFlow channels come last.
I think I found a clue to this in the Lesson 3 video at:. Here it says that Keras expects color images, and has a channels dimension that carries that information. But the MNIST data is B/W and omits that dimension. Not accounting for this can lead to weird errors with MNIST in Keras. I think this is what my problem is.
I'll be trying this out soon to confirm.
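If the missing channel dimension is indeed the issue, the usual fix is to give the B/W MNIST arrays the channel axis Keras expects — with numpy, something like images.reshape(n, 28, 28, 1) (TensorFlow ordering) or images.reshape(n, 1, 28, 28) (Theano ordering). As a toy, numpy-free illustration of what adding a trailing channel axis means:

```python
def add_channel_axis(images):
    """Turn an (n, h, w) nested-list batch into (n, h, w, 1) by wrapping
    each pixel in a length-1 channel list (what images[..., None] does
    in numpy)."""
    return [[[[px] for px in row] for row in img] for img in images]

batch = [[[7, 8], [9, 10]]]            # one 2x2 "image": shape (1, 2, 2)
print(add_channel_axis(batch))         # [[[[7], [8]], [[9], [10]]]]
```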
http://forums.fast.ai/t/lesson-3-discussion/186?page=8
Contains the classes representing events occurring in the replication stream. More...
#include <stdlib.h>
#include <sys/types.h>
#include <zlib.h>
#include <climits>
#include <cstdio>
#include <iostream>
#include "debug_vars.h"
#include "event_reader.h"
#include "my_io.h"
#include <sys/times.h>
Go to the source code of this file.
Contains the classes representing events occurring in the replication stream.
Each event is represented as a byte sequence with logical divisions as event header, event specific data and event footer. The header and footer are common to all the events and are represented as two different subclasses.
defined statically while there is just one alg implemented
Fixed header length, where 4.x and 5.0 agree.
That is, 5.0 may have a longer header (it will for sure when we have the unique event's ID), but at least the first 19 bytes are the same in 4.x and 5.0. So when we have the unique event's ID, LOG_EVENT_HEADER_LEN will be something like 26, but LOG_EVENT_MINIMAL_HEADER_LEN will remain 19.
start event post-header (for v3 and v4)
The length of the array server_version, which is used to store the version of MySQL server.
We could have used SERVER_VERSION_LENGTH, but this introduces an obscure dependency - if somebody decided to change SERVER_VERSION_LENGTH this would break the replication protocol. Both of these are used to initialize the array server_version: SERVER_VERSION_LENGTH is used for the global array server_version, and ST_SERVER_VER_LEN for the Start_event_v3 member server_version.
https://dev.mysql.com/doc/dev/mysql-server/latest/binlog__event_8h.html
Hi John,
I've implemented xerrorbars and xyerrorbars commands.
I did this by adding two extra functions to the matplot.py rev 1.25
Your code makes it trivial.
As matlab seems not to have x or xy-errorbars, I took my inspiration from
Gnuplot when deciding whether to implement the different versions as
separate commands or as different formats of parameters to errorbars
command. However, since I haven't discussed this with you and just went
ahead and did it, I'll just provide the diffs here (and email you the
matplot.py file offlist to save you patching it).
Also, the xyerrorbars function just naively wraps the errorbars and
xerrorbars functions. This means that the plot symbol is rendered twice for
each point which is probably not ideal. If you see this as a problem, I can
redo it - let me know.
On the same topic, I see you just changed the bar end size ratio from 0.005
to 0.001.
The bar ends are now quite small and I don't think 0.005 was excessive, so
I'd be inclined to revert it back although I guess you had a reason.
Gnuplot makes the bar end size a settable parameter which would be a better
solution IMHO.
regards,
Gary
---8<---Cut here---8<---
185a186,187
> xerrorbar - make an errorbar graph (errors along x-axis)
> xyerrorbar - make an errorbar graph (errors along both axes )
195c197
< 'figure', 'gca', 'gcf', 'close' ]
---
> 'figure', 'gca', 'gcf', 'close', 'xerrorbar', 'xyerrorbar' ]
372c374,407
< def errorbar(x, y, e, u=None, fmt='b-'):
---
>
>
> def xyerrorbar(x, y, e, f, u=None, v=None, fmt='b-'):
> """
> Plot x versus y with x-error bars in e and y-error bars in f.
> If u is not None, then u gives the upper x-error bars and e gives the
lower
> x-error bars.
> Otherwise the error bars are symmetric about y and given in the array
e.
> If v is not None, then v gives the upper y-error bars and f gives the
lower
> y-error bars.
> Otherwise the error bars are symmetric about x and given in the array
f.
>
> fmt is the plot format symbol for the x,y point
>
> Return value is a length 2 tuple. The first element is a list of
> y symbol lines. The second element is a list of error bar lines.
>
> """
>     errorbar(x, y, e, u, fmt)
>     xerrorbar(x, y, f, v, fmt)
>
>
> def xerrorbar(x, y, e, u=None, fmt='b-'):
> """
> Plot x versus y with error bars in e. if u is not None, then u
> gives the left error bars and e gives the right error bars.
> Otherwise the error bars are symmetric about x and given in the
> array e.
>
> fmt is the plot format symbol for y
>
> Return value is a length 2 tuple. The first element is a list of
> y symbol lines. The second element is a list of error bar lines.
>
373a409,432
>     l0 = plot(x,y,fmt)
>
>     e = asarray(e)
>     if u is None: u = e
>     right = x+u
>     left = x-e
>     height = (max(y)-min(y))*0.001
>     a = gca()
>     try:
>         l1 = a.hlines(y, x, left)
>         l2 = a.hlines(y, x, right)
>         l3 = a.vlines(right, y-height, y+height)
>         l4 = a.vlines(left, y-height, y+height)
>     except ValueError, msg:
>         msg = raise_msg_to_str(msg)
>         error_msg(msg)
>         raise RuntimeError, msg
>
>     l1.extend(l2)
>     l3.extend(l4)
>     l1.extend(l3)
>     draw_if_interactive()
>     return (l0, l1)
>
374a434,435
> def errorbar(x, y, e, u=None, fmt='b-'):
> """
http://sourceforge.net/p/matplotlib/mailman/matplotlib-devel/?viewmonth=200310&viewday=31
Lookups for real IP starting from the favicon icon and using Shodan.
pip3 install -r requirements.txt
First define how you pass the API key:
- `-k` or `--key` to pass the key to the stdin
- `-kf` or `--key-file` to pass the file to read the key from
- `-sc` or `--shodan-cli` to get the key from Shodan CLI (if you initialized it)
As of now, this tool can be used in the following ways:

- `-f` or `--favicon-file`: you store locally a favicon icon which you want to look up
- `-fu` or `--favicon-url`: you don't store the favicon icon locally, but you know the exact URL where it resides
- `-w` or `--web`: you don't know the URL of the favicon icon, but you still know that it's there
- `-fh` or `--favicon-hash`: you know the hash and want to search the entire internet
You can specify input files which may contain urls to domain, to favicon icons, or simply locations of locally stored icons:
- `-fl`, `--favicon-list`: the file contains the full paths of all the icons which you want to look up
- `-ul`, `--url-list`: the file contains the full URLs of all the icons which you want to look up
- `-wl`, `--web-list`: the file contains all the domains which you want to look up
You can also save the results to a CSV/JSON file:
- `-o`, `--output`: specify the output file and the format, e.g. `results.csv` will save to a CSV file (the type is automatically recognized from the extension of the output file)
python3 favUp.py --favicon-file favicon.ico -sc
python3 favUp.py --favicon-url -sc
python3 favUp.py --web domain.behind.cloudflare -sc
from favUp import FavUp

f = FavUp()
f.shodanCLI = True
f.web = "domain.behind.cloudflare"
f.show = True
f.run()

for result in f.faviconsList:
    print(f"Real-IP: {result['found_ips']}")
    print(f"Hash: {result['favhash']}")
| Variable | Type |
|-:|:-|
| FavUp.show | bool |
| FavUp.key | str |
| FavUp.keyFile | str |
| FavUp.shodanCLI | bool |
| FavUp.faviconFile | str |
| FavUp.faviconURL | str |
| FavUp.web | str |
| FavUp.shodan | Shodan class |
| FavUp.faviconsList | list[dict] |
`FavUp.faviconsList` stores all the results; the key fields depend on the type of the lookup you want to do.

In case of `--favicon-file` or `--favicon-list`:

- `favhash` stores the hash of the favicon icon
- `file` stores the path

In case of `--favicon-url` or `--url-list`:

- `favhash` stores the hash of the favicon icon
- `url` stores the URL of the favicon icon
- `domain` stores the domain name
- `maskIP` stores the "fake" IP (e.g. the Cloudflare one)
- `maskISP` stores the ISP name associated with the `maskIP`

In case of `--web` or `--web-list`:

- `favhash` stores the hash of the favicon icon
- `domain` stores the domain name
- `maskIP` stores the "fake" IP (e.g. the Cloudflare one)
- `maskISP` stores the ISP name associated with the `maskIP`

(in this case the URL of the favicon icon is returned by the `href` attribute of the HTML element)

If, while searching for the favicon icon, nothing useful is found, `not-found` will be returned.

In all three cases, a `found_ips` field is added for every checked entry. If no IP(s) have been found, `not-found` will be returned.
At least `python3.6` is required due to spicy syntax.
Feel free to open any issue, your feedback and suggestions are always welcome <3
Unveiling IPs behind Cloudflare by @noneprivacy
This tool is for educational purposes only. The authors and contributors don't take any responsibility for the misuse of this tool. Use It At Your Own Risk!
Conceived by Francesco Poldi noneprivacy, build with Aan Wahyu Petruknisme
stanley_HAL told me how Shodan calculates the favicon hash.
More about Murmur3 and Shodan
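For background, here is how the favicon hash Shodan indexes is commonly described in public write-ups (this is my own sketch, not code from this repository): the icon bytes are base64-encoded with trailing newlines, and MurmurHash3 is applied to that encoding. The standard-library part looks like this; the final hash needs the third-party mmh3 package:

```python
import base64

def shodan_favicon_payload(favicon_bytes):
    # base64.encodebytes inserts a newline every 76 characters and at the
    # end, matching the encoding Shodan reportedly hashes with MurmurHash3.
    return base64.encodebytes(favicon_bytes)

payload = shodan_favicon_payload(b"\x00\x01\x02")
print(payload)  # b'AAEC\n'
# favhash = mmh3.hash(payload)  # requires mmh3; search Shodan for http.favicon.hash:<favhash>
```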
https://xscode.com/pielco11/fav-up
[Douglas Bates]
> Anyway, when I copy the description element into another document and
> PrettyPrint the target document, I end up with all the markup that
> appears in the source document as <i>S</i> showing up in the target
> document as <i xmlns=''>S</i>.
>
> I want the abstract in the target document to appear like the abstract
> in the source document.

I think you have already gotten all the pieces of this. The elements you want to copy into the target document are in the dc namespace. When you copy them, the processor has to keep the namespace (since the elements are already in it), hence the declaration that you started out asking about. The solution that has been suggested is: instead of just copying the nodes, create new ones in no namespace, then copy the attributes and PCDATA that you want. Alternatively, you could do it with an XSLT stylesheet: pull out what you want and output it without a namespace prefix. You could do that with Python, for example by using 4xslt.

Cheers,
Tom P
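To make the suggested approach concrete, here is a hedged sketch in modern Python (using the stdlib xml.etree rather than the 4Suite tools discussed above): create new elements in no namespace and copy attributes and character data across.

```python
import xml.etree.ElementTree as ET

def copy_without_namespace(elem):
    # Build a new element whose tag is just the local name (namespace
    # stripped), then copy attributes, text, tail, and children recursively.
    local_name = elem.tag.split('}', 1)[-1]
    new = ET.Element(local_name, dict(elem.attrib))
    new.text, new.tail = elem.text, elem.tail
    for child in elem:
        new.append(copy_without_namespace(child))
    return new

src = ET.fromstring(
    '<d:description xmlns:d="http://purl.org/dc/elements/1.1/">'
    'An <d:i>S</d:i> example</d:description>')
clean = copy_without_namespace(src)
print(ET.tostring(clean).decode())  # <description>An <i>S</i> example</description>
```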
https://mail.python.org/pipermail/xml-sig/2002-June/007910.html
Number Validation with Regex in Ruby
Motivation
Proper implementation of input validation is one of the most fundamental aspects of any web application. While we could rely on some of the existing methods to achieve many types of input validation, some methods might not be as flexible as we would want them to be.
For instance,
Object#is_a? method allows us to check if its calling object is of certain type:
>> 12.is_a? Integer
=> true
>> 'foo'.is_a? String
=> true
>> [1, 2, 3].is_a? Array
=> false
This method works well until we introduce objects that look like a number, an array, etc:
>> '12'.is_a? Integer
=> false
>> '[1, 2, 3]'.is_a? Array
=> false
The reason they return
false , of course, is that these calling objects are actually strings rather than an integer or an array.
I’m not sure if these situations often arise in the wild but, if they did, we would want to avoid these inputs being treated as strings since we would be interested in validating what’s “inside” the strings.
Wouldn’t it be nice if we had a method that could have (almost) any types of arguments as an input, look at the “inside value” of these inputs, and verify if they are of certain type we specify? — that’s what I had in my mind when I decided to write this post.
Implementation
The implementation described below focuses on number validation. More precisely, it will check whether the “content” of a given input of an arbitrary type represents a number (an integer or a float).
def number?(obj)
  obj = obj.to_s unless obj.is_a? String
  /\A[+-]?\d+(\.[\d]+)?\z/.match(obj)
end
Let’s start by analysing the regex:
/\A[+-]?\d+(\.\d+)?\z/
Here is the list of ‘vocabulary’ included in the above regex.
/ start of the regex
\A start of the string to be matched
[+-]? zero or one of '+' or'-'
\d+ one or more of digit
(\.\d+)? zero or one of 'one dot and 'one or more of digit''
\z end of the string to be matched
/ end of the regex
(For more details on regex syntax, you might want to check this regex quick reference.)
Let’s hop into irb to verify the regex actually works:
(irb)
REGEX = /\A[+-]?\d+(\.[\d]+)?\z/
REGEX.match '13'
=> #<MatchData "13" 1:nil>
!!REGEX.match '13'
=> true
REGEX.match '3.14'
=> #<MatchData "3.14" 1:".14">
!!REGEX.match '3.14'
=> true
REGEX.match 'not a number'
=> nil
!!REGEX.match 'not a number'
=> false
If the inspected string represents a number, Regexp#match method returns a MatchData object, which evaluates to true. Otherwise it returns nil, which evaluates to false.
Please note here that we are trying to match String objects against the regex. If you try to match other types of object, the code throws an error:
(irb)
REGEX = /\A[+-]?\d+(\.[\d]+)?\z/
REGEX.match 13
=> TypeError: no implicit conversion of Fixnum into String ...
REGEX.match 3.14
=> TypeError: no implicit conversion of Float into String ...
In order to avoid this we will add the following code (*):
obj.to_s unless obj.is_a? String (*)
(“obj” is the object we want to test against the regex) This line of code transforms an object into a String object if it’s not of type String and ensures that it will indeed be a String object when tested against the regex.
This completes the explanation as to how the above method:
def number?(obj)
  obj = obj.to_s unless obj.is_a? String
  /\A[+-]?\d+(\.[\d]+)?\z/.match obj
end
works.
Here is an example of how we can use this method (in this example, the code(*) will not be necessary as ‘number’ will always be of type String after being assigned a value via gets.chomp):
puts "Enter a number:"
number = nil
loop do
  number = gets.chomp
  break if number?(number)
  puts "That is not a number."
end
puts "#{number} is indeed a number."
Running this code prints out the following:
Enter a number:
foo
That is not a number.
bar
That is not a number.
‘12.34’
That is not a number.
12.3.4
That is not a number.
12.34
12.34 is indeed a number.
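As a cross-check (my own addition, not from the post): Ruby can also validate numbers without a regex by leaning on Kernel#Float, which raises ArgumentError for non-numeric strings. Note the semantics differ slightly from the regex version; for example, Float accepts surrounding whitespace and scientific notation:

```ruby
# Validate a number by attempting a Float conversion instead of a regex.
def number_via_float?(obj)
  Float(obj.to_s)
  true
rescue ArgumentError
  false
end

puts number_via_float?('12.34')  # true
puts number_via_float?('12.3.4') # false
```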
References
You might also find the following resources on this subject helpful: (Interactive Ruby regex editor)
If there are areas which you think can be improved, please let me know in the comment section below. Any feedback will be greatly appreciated!
https://medium.com/launch-school/number-validation-with-regex-ruby-393954e46797
|
Yes, I know I am far from the first to bemoan the atrocious behavior of Visual Studio 2005 when you press the F1 button, even accidently. The first time you do it - or other times, if it deems the contents sufficiently changed for some reason - you are treated to an obnoxious and uncancellable dialog saying Help is updating itself and showing an uninformative neverending progress bar.
Worse yet, this window, despite being non-modal (due to running in a different process, dexplore.exe), still blocks the calling thread on devenv.exe, preventing me from working for up to 5 minutes(!) at a time. No Abort button, no way to tell it to skip this pointless operation - launching help from VS is simply a giant waste of time.
What I now discovered is that I can't even use the task manager to kill the dexplore.exe process, because Visual Studio actively monitors it and RELAUNCHES it, apparently from scratch, if I try to do so.
So tell me, which program manager decided that this particular feature, launching the Help window in a completely different process, was so amazingly important that it was allowed to take over my Visual Studio completely for a whole minute while I just sat there and twiddled my thumbs?
Infuriating.
UPDATE:
Tools -> Customize -> Keyboard -> Show Commands Containing -> Help.F1Help -> Remove.
There. F1 is disabled. Peace is resumed.
Published
Monday, September 03, 2007 3:34 PM
by
Avner Kashtan
There should be a press f1 and dont launch the help button Option within VS - it sucks exceptionally.
Gregor Suttie
Best rant you'll hear this month :)
Well said, we are totally with you.
sameera
Right on!! This has to be the worst MS technology ever.
Bob
MSDN has a wealth of information, and a lot of it is great. The tool to search it has forever been pathetic. I am not sure who is responsible for the user experience in dexplore.exe but they have failed. For me, I have MSDN installed on my PC but use it as a last resort. Microsoft, please give us access to MSDN via a great search/catalogue tool...
ozczecho
Pingback from Soci blog » Blog Archive » VS F1 kikapcs
Soci blog » Blog Archive » VS F1 kikapcs
I've actually integrated my notes into the help collection so I have Microsoft Document Explorer performing updates often enough to eliminate this problem while I'm working.
Robert S. Robbins
: ozczecho:
: Microsoft please give us access to MSDN
: via a great search/catalogue tool please...
I find myself using the MSDN Online even when I have it installed locally. I use google to search it, so it's faster, and I have the Wiki enhancements to boot.
It's a sad state of affairs when using Google to search msdn2.microsoft.com is faster than using a locally installed MSDN Library.
Avner Kashtan
Sometimes you want to know a type's exact namespace and assembly (for instance, in order to find it in
Omer van Kloeten's .NET Zen
Sometimes you want to know a type's exact namespace and assembly (for instance, in order to find
עומר.נט
Pingback from How to avoid Visual Studio Help « Infovark Underground
How to avoid Visual Studio Help « Infovark Underground
http://weblogs.asp.net/avnerk/archive/2007/09/03/visual-studio-help-launcher-rant.aspx
Convert a multibyte character into a wide character (restartable)
#include <wchar.h> size_t mbrtowc( wchar_t * pwc, const char * s, size_t n, mbstate_t * ps );
You can call mbsinit() to determine the status of this variable.
libc
Use the -l c option to qcc to link against this library. This library is usually included automatically.
The mbrtowc() function converts a single multibyte character pointed to by s into the wide character pointed to by pwc, examining a maximum of n bytes (not characters).
This function is affected by LC_CTYPE.
This function is safe to call in a multithreaded program if the ps argument isn't NULL.
http://www.qnx.com/developers/docs/7.0.0/com.qnx.doc.neutrino.lib_ref/topic/m/mbrtowc.html
Task:MMS
The aim of this page is to list potential obstacles and solutions in regards to implementing MMS in Maemo 5 ”Fremantle” on the N900.
Use case

While this has been discussed back and forth at length both on Talk and on #maemo, here are some points why MMS should be implemented:

- Quickly sending private pictures to someone.
- While most users tend to be able to receive MMS, there are far fewer with push e-mail on their phones at the moment (need citation, but this is what's come up in discussion; see Mms_implemention_conversation)
- Another use case is to send your location from Nokia Maps over MMS, to quickly allow another person using Nokia Maps and GPS to find a location or service (e.g. bar, restaurant, etc.).

The advantage MMS has over e-mail on the phone is that it's generally less spammy, e.g. no Facebook notification mails.
Implementation
To successfully implement MMS sending and receiving on the N900 the following has to be done:
Hook into SMS receiving
According to this post by danielwilms the final SDK for Maemo5 should let us do this through Telepathy.
To handle WAP Push messages one will be required to register a handler via the D-BUS API (final docs on this pending, for alpha version please contact frals).
Fetch the MMS from the provider
Multiple issues with this.
- Fremantle UI currently only allows configuring one (GPRS) APN. Add one manually with gconftool and it will show up in the UI afterwards
Example:
gconftool-2 --set /system/osso/connectivity/IAP/Tele2@32@MMS/type --type string "GPRS"
gconftool-2 --set /system/osso/connectivity/IAP/Tele2@32@MMS/name --type string "Tele2 MMS"
gconftool-2 --set /system/osso/connectivity/IAP/Tele2@32@MMS/gprs_accesspointname --type string "internet.tele2.se"
gconftool-2 --set /system/osso/connectivity/IAP/Tele2@32@MMS/ipv4_type --type string AUTO
gconftool-2 --set /system/osso/connectivity/IAP/Tele2@32@MMS/sim_imsi --type string YOURSIMIMSI
To get the SIM IMSI to use:
gconftool -R /system/osso/connectivity/IAP |grep sim_imsi
- Most operators only allow fetching of MMS when accessing via a specific APN. To access this you would have to temporarily suspend your current 3G connection and switch over to this (is this correct? need someone with knowledge on this). On S60 you won't get dropped as long as you are on a 3G connection.
- If the user only got access to GPRS the current connection (if in-use) have to to be temporarily suspended while fetching the MMS. This should be doable by doing a "Deffered Retrieval". [1]
- Accessing the MMS "server" through a specific connection: iptables tweaking? Discussion at Mms_implemention_conversation#Technical. The following is based on that conversation:
- The problem here is routing specific data to the specific connection. While this is possible with some clever routes, this does raise the problem with operators having the same gateway for both GPRS and MMS connections, which might lead to a collision (if both they are indeed the same gateway, it should be no problem just getting the MMS over the current connection?).
- One possibility is to tag the appropiate packets with iptables and then sent to a different routing table.
- To overcome the problem with different gateways on the same subnet one could add a hard dumb route to the specific gateway via route, then set the Internet gateway to a higher priority. e.g. rename 123.123.123.0 -> MMS route via iproute2 NAT. Something like maybe?
- Another solution would be a mms-fetcher-daemon running as a specific user (e.g. mmsd) and use iptables to route all these packages to the correct interface. This works around when both the GPRS and the MMSC specified is on the same IP but requires different connections. i.e. "route add 1.2.3.4 ppp:mmc - fails if there is also a server on 1.2.3.4 on ppp:normal". However, using the ipt_route module (experimental for years, netfilter patch-o-matic-ng) would allow traffic to be directed out of a specific interface (ppp:mmc) and since this interface is point2point it would not need a specific route. infact the mmc ip address could be access via ppp:normal or ppp:mmc depending on the match criteria of iptables (pid, uid?).
- At the core of it - whats needed is a separate interface (e.g. ppp0, ppp1 etc) for each APN the user sets up.
- Applications should be able to request a certain IAP to be activated; thus should the MMS app be able to request the correct APN. Source
- The functionality to handle the above problem is in linux-2.6.30+ (Fremantle runs 2.6.28) so if there is a future kernel upgrade, it should be possible to implement full MMS support. See this talk-post. Alternative would be to backport the networking namespace to 2.6.28.
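As a sketch of the policy-routing idea in the bullets above (all names and addresses here are invented for illustration; the real MMSC address and PDP-context interface depend on the operator). The script is a dry run that only prints the commands it would execute:

```shell
#!/bin/sh
# Dry-run sketch: steer MMSC-bound traffic through a second PDP context
# using a dedicated routing table. Values below are placeholders.
MMSC_IP="10.1.1.1"   # assumed operator MMSC address
MMS_IF="ppp1"        # assumed interface of the MMS APN context
TABLE=100

run() { echo "$@"; } # print instead of executing; drop this for real use

run ip route add default dev "$MMS_IF" table "$TABLE"
run ip rule add to "$MMSC_IP/32" lookup "$TABLE"
```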
Format the MMS and display it correctly
- How open is the Messaging client?
Plenty of open source libs to do this in other languages:
- PHP1
- PHP2
- Java JavaSE/JavaME
- C
- Gammu+ with C++ implementation (compilable in Linux and Windows)
- Python
Rough roadmap
Before MMS implementation is written a roadmap must be chosen, and various factors must be taken into account before choosing a roadmap.
Potential roadmaps based on e.g. practical situations could use a quick community hack whereas official certification is much more work, and theoretical/technical.
Please note following information is much easier understood by using a flowchart.
Quick hack, r/o
- Make sure a technically inclined Nokia N900 user is able to:
- Receive MMS
- Read MMS
- Community support; not official Nokia support
- CLI, GUI, HIG not important
Implemented, see:
Quick hack, r/w
- Make sure a technically inclined Nokia N900 user is able to:
- Receive & send MMS
- Read MMS
- Write MMS
- Community support; not official Nokia support
- CLI, GUI, HIG not important
Implemented, see:
Quick hack, r/o, user-friendly
- Make sure a non-technically inclined Nokia N900 user is able to:
- Receive MMS
- Read MMS
- Community support; not official Nokia support
- Finger/touch GUI, HIG recommended
Quick hack, r/w, user-friendly
- Make sure a non-technically inclined Nokia N900 user is able to:
- Receive & send MMS
- Read MMS
- Write MMS
- Community support; not official Nokia support
- Finger/touch GUI, HIG recommended
Implemented, see
Full specs, r/w, user-friendly
- Make sure a non-technically inclined Nokia N900 user is able to:
- Receive & send MMS
- Read MMS
- Write MMS
- Official support by Nokia, official MMS certification.
- Must support all the specifications such as WAP 2.1.
- Finger/touch GUI, HIG mandatory
References
- This page was last modified on 27 January 2011, at 15:46.
http://wiki.maemo.org/Task:MMS
I am working on python string.
import string
import re

str = "Hello"
b = 0
srr = ""
for a in str:
    srr[b] = a  # TypeError: 'str' object does not support item assignment
    b = b + 1
print(srr)
This is my sample code. I was trying to read characters from one string and put them into another string, like in C or Java, but it's not working in my case. Thanks, I'd appreciate any help.
Strings are immutable in Python, so you cannot change their characters in place. But you are allowed to do

for a in str:
    srr += a

This creates a new string object on each iteration and rebinds srr to the result.
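Putting the fix together as a runnable snippet (my own completion of the answer above):

```python
# Build a new string instead of assigning into an existing one,
# since Python strings are immutable.
src = "Hello"
srr = ""
for a in src:
    srr += a
print(srr)  # Hello

# A more idiomatic alternative that avoids repeated copies:
srr2 = "".join(src)
```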
https://kodlogs.com/36503/str-object-does-not-support-item-assignment
- buster 241-7~deb10u2
- buster-backports 244-3~bpo10+1
- testing 244-3
- unstable 244-3
NAME¶sd_bus_process - Drive the connection
SYNOPSIS¶
#include <systemd/sd-bus.h>
int sd_bus_process(sd_bus *bus, sd_bus_message **ret);
DESCRIPTION¶sd_bus_process() drives the connection between the client and the message bus. That is, it handles connecting, authentication, and message processing. When invoked, pending I/O work is executed, and queued incoming messages are dispatched to registered callbacks. Each time it is invoked, a single operation is executed. It returns zero when no operations were pending and positive if a message was processed. When zero is returned, the caller should synchronously poll for I/O events before calling into sd_bus_process() again. For that, either use the simple, synchronous sd_bus_wait(3) call, or hook up the bus connection object to an external or manual event loop using sd_bus_get_fd(3).
sd_bus_process() processes at most one incoming message per call. If the parameter ret is not NULL and the call processed a message, *ret is set to this message. The caller owns a reference to this message and should call sd_bus_message_unref(3) when the message is no longer needed. If ret is not NULL, progress was made, but no message was processed, *ret is set to NULL.
If the bus object is connected to an sd-event(3) event loop (with sd_bus_attach_event(3)), it is not necessary to call sd_bus_process() directly, as it is invoked automatically when necessary.
RETURN VALUE¶If progress was made, a positive integer is returned. If no progress was made, 0 is returned. If an error occurs, a negative errno-style error code is returned.
ERRORS¶Returned errors may indicate the following problems:
-EINVAL
-ECHILD
-ENOTCONN
-ECONNRESET
-EBUSY
https://manpages.debian.org/buster/libsystemd-dev/sd_bus_process.3.en.html
The Transport Authority is implementing a new Road Pricing system. The authorities decided that the cars will be charged based on distance travelled, on a per mile basis. A car will be charged $0.50/mi, a van $2.1/mi and taxis travel for free. Create a function to determine how much a particular vehicle would be charged based on a particular distance. The function should take as input the type of the car and the distance travelled, and return the charged price.
def Road_Pricing():
    x = float(input("How many miles is driven?"))
    y = (input("What car was driven?"))
    if "car":
        print(.50 * x)
    if "van":
        print(2.1 * x)
    if "taxi":
        print("Free")

Road_Pricing()
The requirement is (emphasis mine):

...... The function should take as input the type of the car and the distance travelled, and return the charged price.

This means the function should accept the car type and the distance as parameters, and return the price rather than printing it.

Another problem in your code is that the expressions in your if statements aren't checking the value of car_type: a non-empty string literal like "car" is always truthy. Also, you should use more meaningful variable names (for example, distance and car_type instead of x and y).
def road_pricing(car_type, distance):
    if car_type == "car":
        return .50 * distance
    if car_type == "van":
        return 2.1 * distance
    if car_type == "taxi":
        return 0

car_type = raw_input("What car was driven? ")
distance = float(input("How many miles is driven? "))
print road_pricing(car_type, distance)
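For comparison (my own addition, not part of the accepted answer): a Python 3 variant that keeps the per-mile rates in a dict, which avoids the chain of if statements entirely:

```python
# Per-mile rates from the problem statement; taxis travel for free.
RATES = {"car": 0.50, "van": 2.1, "taxi": 0.0}

def road_pricing(car_type, distance):
    # Raises KeyError for an unknown vehicle type.
    return RATES[car_type] * distance

print(road_pricing("van", 10))  # 21.0
```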
https://codedump.io/share/Zwwb0XYfjUA1/1/creating-a-function-for-road-pricing
Defining our first component
Defining a component can seem tricky until you have some practice, but the gist is:
- If it represents an obvious "chunk" of your app, it's probably a component
- If it gets reused often, it's probably a component.
That second bullet is especially valuable: making a component out of common UI elements allows you to change your code in one place and see those changes everywhere that component is used. You don't have to break everything out into components right away, either. Let's take the second bullet point as inspiration and make a component out of the most reused, most important piece of the UI: a todo list item.
Make a
<Todo />
Before we can make a component, we should create a new file for it. In fact, we should make a directory just for our components. The following commands make a
components directory and then, within that, a file called
Todo.js. Make sure you're in the root of your app before you run these!
mkdir src/components
touch src/components/Todo.js
Our new
Todo.js file is currently empty! Open it up and give it its first line:
import React from "react";
Since we're going to make a component called
Todo, you can start adding the code for that to
Todo.js too, as follows. In this code, we define the function and export it on the same line:
export default function Todo() { return ( ); }
This is OK so far, but our component has to return something! Go back to
src/App.js, copy the first
<li> from inside the unordered list, and paste it into
Todo.js so that it reads like this:
export default function Todo() {
  return (
    <li className="todo stack-small">
      <div className="c-cb">
        <input id="todo-0" type="checkbox" defaultChecked={true} />
        <label className="todo-label" htmlFor="todo-0">
          Eat
        </label>
      </div>
      <div className="btn-group">
        <button type="button" className="btn">
          Edit <span className="visually-hidden">Eat</span>
        </button>
        <button type="button" className="btn btn__danger">
          Delete <span className="visually-hidden">Eat</span>
        </button>
      </div>
    </li>
  );
}
Note: Components must always return something. If at any point in the future you try to render a component that does not return anything, React will display an error in your browser.
Our
Todo component is complete, at least for now; now we can use it. In
App.js, add the following line near the top of the file to import
Todo:
import Todo from "./components/Todo";
With this component imported, you can replace all of the
<li> elements in
App.js with
<Todo /> component calls. Your
<ul> should read like this:
<ul role="list" className="todo-list stack-large stack-exception" aria-labelledby="list-heading">
  <Todo />
  <Todo />
  <Todo />
</ul>
When you look back at your browser, you'll notice something unfortunate: your list now repeats the first task three times!
We don't only want to eat; we have other things to — well — to do. Next we'll look at how we can make different component calls render unique content.
Make a unique
<Todo />
Components are powerful because they let us re-use pieces of our UI, and refer to one place for the source of that UI. The problem is, we don't typically want to reuse all of each component; we want to reuse most parts, and change small pieces. This is where props come in.
What's in a
name?
In order to track the names of tasks we want to complete, we should ensure that each
<Todo /> component renders a unique name.
In
App.js, give each
<Todo /> a name prop. Let’s use the names of our tasks that we had before:
<Todo name="Eat" />
<Todo name="Sleep" />
<Todo name="Repeat" />
When your browser refreshes, you will see… the exact same thing as before. We gave our
<Todo /> some props, but we aren't using them yet. Let's go back to
Todo.js and remedy that.
First modify your
Todo() function definition so that it takes
props as a parameter. You can
console.log() your
props as we did before, if you'd like to check that they are being received by the component correctly.
Once you're confident that your component is getting its
props, you can replace every occurrence of
Eat with your
name prop. Remember: when you're in the middle of a JSX expression, you use curly braces to inject the value of a variable.
Putting all that together, your
Todo() function should read like this:
export default function Todo(props) {
  return (
    <li className="todo stack-small">
      <div className="c-cb">
        <input id="todo-0" type="checkbox" defaultChecked={true} />
        <label className="todo-label" htmlFor="todo-0">
          {props.name}
        </label>
      </div>
      <div className="btn-group">
        <button type="button" className="btn">
          Edit <span className="visually-hidden">{props.name}</span>
        </button>
        <button type="button" className="btn btn__danger">
          Delete <span className="visually-hidden">{props.name}</span>
        </button>
      </div>
    </li>
  );
}
Now your browser should show three unique tasks. Another problem remains though: they're all still checked by default.
Is it
completed?
In our original static list, only
Eat was checked. Once again, we want to reuse most of the UI that makes up a
<Todo /> component, but change one thing. That's a good job for another prop! Give each
<Todo /> call in
App.js a new prop of
completed. The first (
Eat) should have a value of
true; the rest should be
false:
<Todo name="Eat" completed={true} />
<Todo name="Sleep" completed={false} />
<Todo name="Repeat" completed={false} />
As before, we must go back to
Todo.js to actually use these props. Change the
defaultChecked attribute on the
<input /> so that its value is equal to the
completed prop. Once you’re done, the Todo component's
<input /> element will read like this:
<input id="todo-0" type="checkbox" defaultChecked={props.completed} />
And your browser should update to show only
Eat being checked:
If you change each
<Todo /> component’s
completed prop, your browser will check or uncheck the equivalent rendered checkboxes accordingly.
Gimme some
id, please
Right now, our
<Todo /> component gives every task an
id attribute of
todo-0. This is bad HTML because
id attributes must be unique (they are used as unique identifiers for document fragments, by CSS, JavaScript, etc.). This means we should give our component an
id prop that takes a unique value for each
Todo.
To follow the same pattern we had initially, let's give each instance of the
<Todo /> component an ID in the format of
todo-i, where
i gets larger by one every time:
<Todo name="Eat" completed={true} id="todo-0" />
<Todo name="Sleep" completed={false} id="todo-1" />
<Todo name="Repeat" completed={false} id="todo-2" />
Now go back to
Todo.js and make use of the
id prop. It needs to replace the value of the
id attribute of the
<input /> element, as well as the value of its label's
htmlFor attribute:
<div className="c-cb"> <input id={props.id} type="checkbox" defaultChecked={props.completed} /> <label className="todo-label" htmlFor={props.id}> {props.name} </label> </div>
So far, so good?

We're making good use of React so far, but we could do better! Our code is repetitive. The three lines that render our `<Todo />` component are almost identical, with only one difference: the value of each prop.

We can clean up our code with one of JavaScript's core abilities: iteration. To use iteration, we should first re-think our tasks.

Tasks as data

Each of our tasks currently contains three pieces of information: its name, whether it has been checked, and its unique ID. This data translates nicely to an object. Since we have more than one task, an array of objects would work well in representing this data.

In `src/index.js`, make a new `const` beneath the final import, but above `ReactDOM.render()`:

```jsx
const DATA = [
  { id: "todo-0", name: "Eat", completed: true },
  { id: "todo-1", name: "Sleep", completed: false },
  { id: "todo-2", name: "Repeat", completed: false }
];
```
Next, we'll pass `DATA` to `<App />` as a prop, called `tasks`. The final line of `src/index.js` should read like this:

```jsx
ReactDOM.render(<App tasks={DATA} />, document.getElementById("root"));
```

This array is now available to the App component as `props.tasks`. You can `console.log()` it to check, if you'd like.

Note: ALL_CAPS constant names have no special meaning in JavaScript; they're a convention that tells other developers "this data will never change after being defined here".
Rendering with iteration

To render our array of objects, we have to turn each one into a `<Todo />` component. JavaScript gives us an array method for transforming data into something else: `Array.prototype.map()`.

Above the return statement of `App()`, make a new `const` called `taskList` and use `map()` to transform it. Let's start by turning our `tasks` array into something simple: the `name` of each task:

```jsx
const taskList = props.tasks.map(task => task.name);
```

Let's try replacing all the children of the `<ul>` with `taskList`:

```jsx
<ul
  role="list"
  className="todo-list stack-large stack-exception"
  aria-labelledby="list-heading"
>
  {taskList}
</ul>
```
This gets us some of the way towards showing all the components again, but we've got more work to do: the browser currently renders each task's name as unstructured text. We're missing our HTML structure — the `<li>` and its checkboxes and buttons!

To fix this, we need to return a `<Todo />` component from our `map()` function — remember that JSX allows us to mix up JavaScript and markup structures! Let's try the following instead of what we have already:

```jsx
const taskList = props.tasks.map(task => <Todo />);
```

Look again at your app; now our tasks look more like they used to, but they're missing the names of the tasks themselves. Remember that each task we map over has the `id`, `name`, and `completed` properties we want to pass into our `<Todo />` component. If we put that knowledge together, we get code like this:

```jsx
const taskList = props.tasks.map(task => (
  <Todo id={task.id} name={task.name} completed={task.completed} />
));
```
Now the app looks like it did before, and our code is less repetitive.
Unique keys

Now that React is rendering our tasks out of an array, it has to keep track of which one is which in order to render them properly. React tries to do its own guesswork to keep track of things, but we can help it out by passing a `key` prop to our `<Todo />` components. `key` is a special prop that's managed by React – you cannot use the word `key` for any other purpose.

Because keys should be unique, we're going to re-use the `id` of each task object as its key. Update your `taskList` constant like so:

```jsx
const taskList = props.tasks.map(task => (
  <Todo
    id={task.id}
    name={task.name}
    completed={task.completed}
    key={task.id}
  />
));
```
You should always pass a unique key to anything you render with iteration. Nothing obvious will change in your browser, but if you do not use unique keys, React will log warnings to your console and your app may behave strangely!
Componentizing the rest of the app

Now that we've got our most important component sorted out, we can turn the rest of our app into components. Remembering that components are either obvious pieces of UI, or reused pieces of UI, or both, we can make two more components:

- `<Form />`
- `<FilterButton />`

Since we know we need both, we can batch some of the file creation work together with a terminal command. Run this command in your terminal, taking care that you're in the root directory of your app:

```
touch src/components/Form.js src/components/FilterButton.js
```

The `<Form />`

Open `components/Form.js` and do the following:

- Import `React` at the top of the file, like we did in `Todo.js`.
- Make yourself a new `Form()` component with the same basic structure as `Todo()`.
- Copy the `<form>` tags and everything between them from inside `App.js`, and paste them inside `Form()`'s `return` statement.
- Export `Form` at the end of the file.

Your `Form.js` file should read like this:
```jsx
import React from "react";

function Form(props) {
  return (
    <form>
      <h2 className="label-wrapper">
        <label htmlFor="new-todo-input" className="label__lg">
          What needs to be done?
        </label>
      </h2>
      <input
        type="text"
        id="new-todo-input"
        className="input input__lg"
        name="text"
        autoComplete="off"
      />
      <button type="submit" className="btn btn__primary btn__lg">
        Add
      </button>
    </form>
  );
}

export default Form;
```
The <FilterButton />
Do the same things you did to create `Form.js` inside `FilterButton.js`, but call the component `FilterButton()` and copy the HTML for the first button inside the `<div>` element with the `class` of `filters` from `App.js` into the `return` statement.

The file should read like this:

```jsx
import React from "react";

function FilterButton(props) {
  return (
    <button type="button" className="btn toggle-btn" aria-pressed="true">
      <span className="visually-hidden">Show </span>
      <span>all </span>
      <span className="visually-hidden"> tasks</span>
    </button>
  );
}

export default FilterButton;
```
Note: You might notice that we are making the same mistake here as we first made for the `<Todo />` component, in that each button will be the same. That's fine! We're going to fix up this component later on, in Back to the filter buttons.

Importing all our components

Let's make use of our new components.

Add some more `import` statements to the top of `App.js`, to import them.

Then, update the `return` statement of `App()` so that it renders our components. When you're done, `App.js` will read like this:

```jsx
import React from "react";
import Form from "./components/Form";
import FilterButton from "./components/FilterButton";
import Todo from "./components/Todo";

function App(props) {
  const taskList = props.tasks.map(task => (
    <Todo
      id={task.id}
      name={task.name}
      completed={task.completed}
      key={task.id}
    />
  ));
  return (
    <div className="todoapp stack-large">
      <Form />
      <div className="filters btn-group stack-exception">
        <FilterButton />
        <FilterButton />
        <FilterButton />
      </div>
      <h2 id="list-heading">3 tasks remaining</h2>
      <ul
        role="list"
        className="todo-list stack-large stack-exception"
        aria-labelledby="list-heading"
      >
        {taskList}
      </ul>
    </div>
  );
}

export default App;
```
With this in place, we’re almost ready to tackle some interactivity in our React app!
Summary
And that's it for this article — we've gone into some depth on how to break up your app nicely into components, and render them efficiently. Now we'll go on to look at how we handle events in React, and start adding some interactivity.
ADO.NET
ADO.NET provides access to a variety of data sources and supports 'connected' and 'disconnected' models with a highly efficient connection pooling system.
ADO.NET is a set of classes that provides data access services for .NET. It provides access to many different relational database management systems (RDBMS) as well as other types of data such as XML. ADO.NET was designed to handle large data loads and to be secure, flexible, and dependable. It is the oldest data access technology in .NET and is widely used. ADO.NET supports both connected and disconnected models with a highly efficient connection pooling system.
The System.Data namespace contains many of the ADO.NET classes, which are used for all database management systems. There are also several vendor-specific libraries available, such as System.Data.OracleClient. Additionally, there are generic libraries such as System.Data.Odbc which provide access to ODBC-compliant systems. The generic libraries generally do not perform as well as the vendor-specific libraries.
ADO.NET is vulnerable to SQL injection and cross-site scripting attacks. It is essential to use parameterized queries, with typed parameters, when using any untrusted data. That includes data retrieved from the database as well as data input by the user. Typed parameters treat the values as typed data and not executable code. When a query string is built by concatenating data into it, the data can be treated as executable code (SQL, JavaScript) which may have been entered by a malicious user. Also, do not depend on .NET's "Request Validation" to trap all potentially malicious code going to the browser. Be sure any data sent to the browser is properly HTML encoded. HTML encoding changes the string "<script>" to "&lt;script&gt;" to prevent JavaScript stored in data from being executed by the browser.
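The difference between concatenation and parameterization is easy to demonstrate. The sketch below uses Python's sqlite3 module rather than ADO.NET so it can run anywhere; in ADO.NET the equivalent safeguard is a SqlCommand with SqlParameter objects:

```python
import sqlite3

# Minimal illustration of the parameterized-query principle described above.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT)")
conn.execute("INSERT INTO users VALUES ('alice')")

malicious = "alice' OR '1'='1"

# Unsafe: concatenation lets the input rewrite the query itself.
unsafe_sql = "SELECT COUNT(*) FROM users WHERE name = '" + malicious + "'"
print(conn.execute(unsafe_sql).fetchone()[0])  # 1 -- the injected OR matched every row

# Safe: the placeholder treats the whole string as data, not as SQL.
safe = conn.execute("SELECT COUNT(*) FROM users WHERE name = ?", (malicious,))
print(safe.fetchone()[0])  # 0 -- no user is literally named that
```

The same pattern applies to any data-access API: the untrusted value never becomes part of the query text, only a bound parameter.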
Connection Strings
Connection strings contain initialization information that is passed as a parameter from a data provider to a data source. The syntax depends on the data provider, and the connection string is parsed during the attempt to open a connection. Once the connection string syntax is validated, the data source applies the options specified in the connection string and opens the connection. Below is a comparison of an ADO.NET connection string (line #1) and an Entity Framework connection string (line #3). Note that, by default, the Entity Framework looks for a connection string named the same as the object context class.
ADO on Line #1 and Entity Framework on Line #3
Connections can hold locks on data, causing concurrency issues. In ADO.NET this can be alleviated by using the disconnected model and keeping connections closed as much as possible. By default, ADO.NET uses connection pooling, which reduces the work of establishing and cleaning up connections. The connections in the pool are open database connections. When the program requests a connection, .NET assigns a connection from the pool. When a connection is closed, or disposed, .NET returns the connection to the pool. ADO.NET 4.5 accessing SQL Server 2012 defaults to a maximum of 100 simultaneous connections and will adjust the actual number of connections in the pool according to the application's needs.
Data Providers
The connection string is one of the core objects in the set of .NET data providers. Other core objects allow for the execution of SQL commands and for retrieving results from the database. The results are either processed directly or are stored in an in-memory cache of data called a dataset to use the disconnected model. DataSets consist of a collection of related DataTables. The DataTables can be copies of database tables or can be populated from other sources.
The DataSet represents a complete set of data, including related tables, constraints, and relationships among the tables.
Database Queries
xxx
ExecuteNonQuery()
xxx
DataAdapter
Note: This is Python 3.5
Given some project `thing`:

```
thing/
    top.py          (imports bar)
    utils/
        __init__.py (empty)
        bar.py      (imports foo; has some functions and variables)
        foo.py      (has some functions and variables)
```

`top.py` contains:

```python
import utils.bar
import utils.foo
```

In `bar.py`, `import foo` works when `bar.py` is run on its own, while `import utils.foo` works when `top.py` is run; only one of the two works at a time.
It's typical to have all your entrypoints outside the module; then all imports can be relative. For example:
```
thing/
    app.py
    thing/
        __init__.py
        top.py
        utils/
            __init__.py
            bar.py
            foo.py
```
And `app.py` can look like:

```python
from thing import app

if __name__ == '__main__':
    app.main()
```
Then in `top.py` you have:
from .utils import bar
And in bar you have
from . import foo
Or
from thing.utils import foo
Or
from ..utils import foo
So if you need two entrypoints, you can either make `app` take a command line argument for the second entry point, or you can make another file like `app` that imports `bar`.
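To check that the relative imports above resolve the way the answer claims, here is a self-contained sketch that builds the suggested layout in a temporary directory and drives it from an outside entrypoint. The function bodies are placeholder stand-ins, not the asker's real code:

```python
import os
import sys
import tempfile

# Build the suggested layout on disk (hypothetical file bodies).
root = tempfile.mkdtemp()
pkg = os.path.join(root, "thing")
utils = os.path.join(pkg, "utils")
os.makedirs(utils)

files = {
    os.path.join(pkg, "__init__.py"): "",
    os.path.join(utils, "__init__.py"): "",
    os.path.join(utils, "foo.py"):
        "def greet():\n    return 'foo'\n",
    os.path.join(utils, "bar.py"):
        "from . import foo\n\ndef greet():\n    return 'bar sees ' + foo.greet()\n",
    os.path.join(pkg, "top.py"):
        "from .utils import bar\n\ndef main():\n    return bar.greet()\n",
}
for path, body in files.items():
    with open(path, "w") as f:
        f.write(body)

# This is app.py's job: import the package from outside it, so every
# import inside the package can stay relative.
sys.path.insert(0, root)
from thing import top
print(top.main())  # bar sees foo
```

Because the only absolute import happens at the outer entrypoint, `bar.py` no longer needs to care whether it is run "stand-alone" or via `top.py`.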
Differential Development, Part 1
Introduction
If you look at a software solution like a race across the ocean through a narrow canyon of icebergs, then companies start out by charting the maze. They have a goal (the finish line) and set checkpoints (milestones). They analyze where the currents are, where the glaciers are stable and where they drift, the weather conditions, the water temperature, the wind speed, right down to the native sea-life. Then, they try to pilot a supertanker through the maze and get to the end as quickly as possible. Many seasoned project managers in their skipper-hats will attest that making small course corrections early on can save a lot of effort and time when aiming the supertanker for a narrow channel across the ocean. So, they have to make certain the channel they are aiming for is the right channel. In a sense, project managers and architects are not only responsible for steering the ship, but also for predicting the future. If the finish line should move halfway through the race, it either takes a lot longer to reach it, or the supertanker simply runs out of fuel (funding) and is dead in the water.
What is 2D?
2D stands for Differential Development. 2D is a stronger, more powerful approach to developing applications that leverages bleeding-edge technology and the opinions of some very progressive thinkers, and ardently rejects the notion of freezing any part of the design. It is important to know that the concepts embraced by 2D are not unproven or experimental. 2D seeks to synergistically unite compatible methodologies from a broad spectrum of leading IT experts. Much of what 2D advocates has been in practice and successfully implemented within the industry. 2D seeks to make a supertanker-sized jet-ski that can handle the load of corporate business needs and effortlessly zip across the turbulent waters of changing requirements. It can also be a lot of fun to ride.
Structured Programming vs. Agile Development
However, an Agile Development Methodology can support fundamental changes on any level without a complete re-write, and do so without introducing additional bugs. This is important to understand. Very important. Fundamental changes on any level without necessitating a rewrite.
In many situations, things such as renaming a base class or namespace, changing data types on an interface, renaming variables, or even changing the base data model are agonizing decisions, because they could require extensive revisions throughout the code to accommodate the changes and may also introduce new and unexpected bugs. The fear of "breaking the interface" — changing the face of one object to make it unrecognizable to the rest of the objects in your application tree — results in leaving things as they are, which is often a compromised design that meets the functional requirements, but is not as efficient, modular, or agile as it could have been given the chance to code it knowing what the developers have learned.
The approach to product architecture has a fundamental and unavoidable flaw:
The most important decisions are made at the start of a project with the least amount of road experience and the highest uncertainty factor.
These are critical design decisions that not only involve the foundation of the relational data model but also the architecture and interrelation of the layers/tiers between the UI and the database. How many times have you heard the word "Re-architect" mentioned in reference to a project halfway through the development life cycle? I would venture to guess not very often. That is because re-architecting is almost always synonymous with a major re-write. And like rebuilding a house, the very act of committing to re-architecture implies a demolition of the existing product and devaluation of the time and effort spent on it. Usually, such drastic measures are taken only after enough enhancement and feature requests unsupportable by the current version have piled up or if current product performance is unacceptable and cannot be resolved with hardware upgrades. More than likely, the bigger and more complex projects (supertankers) pose a much greater challenge to re-architecture than a simple, single-user application (jet-ski).
Re-Architecting vs. Re-Factoring
Although Re-Architecting is a major undertaking that cannibalizes existing design, Re-Factoring is actually quite the contrary. In fact, many developers have been secretly refactoring their code for years without saying a word. Some do not call it refactoring, instead referring to it as "tweaking" or "tuning," but the principle is the same. A developer builds a class or module, then goes back and removes redundant code, adds references to remote error handlers, renames the variables or re-classifies their type, adds attributes and comments, and so on. When they are done, the object, class, or module they are submitting to version control has already gone through several personal revisions, versions, and iterations before it was added to the application. Each developer has their own style, their own experiences with what works better in different situations. Development environments in corporate IT often force developers to adapt and use uniform coding standards that can be as ambiguous as "good user experience" (one of my most memorable function requirements) or as specific as variable naming conventions. Though there are many varieties of coding standards and best practices, there seem to be surprisingly few flavors of refactoring methodology, leaving the process mostly in the hands of developers.
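As a concrete illustration of that kind of quiet refactoring (the example is invented, not from the article): behavior stays identical while duplication and cryptic names are cleaned up, which is exactly why it can happen continuously without destabilizing the product.

```java
public class Main {
    // Before refactoring: duplicated logic behind cryptic names.
    static double p1(double a) { return a * 0.2; }
    static double p2(double a) { return a * 0.2 + 5.0; }

    // After refactoring: the shared rule is extracted and named.
    static double salesTax(double amount) { return amount * 0.2; }
    static double taxWithFilingFee(double amount) { return salesTax(amount) + 5.0; }

    public static void main(String[] args) {
        // A refactoring must preserve behavior; the old and new versions agree.
        System.out.println(p2(100.0) == taxWithFilingFee(100.0)); // true
    }
}
```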
Differential Development (2D) using Refactoring principles provides the following advantages:
- Keeps the application up to date with the latest technology.
- Keeps the design up to date and in line with expectations and experiences.
- Allows the dev team to apply what they learn along the way back into the design on every level, including the core libraries.
- Modularity of components, which can effortlessly adapt to changing business requirements.
- Dynamic and on-going testing of the product.
- Established and well-practiced process of applying changes to any level of the application and understanding/managing their impact on the rest of the application.
- Better interoperability within the application layers.
- A non-frozen data model, which can change and evolve throughout the course of the development life cycle without significantly impacting deadline or cost.
The advantage to refactoring as opposed to rearchitecture is that the product design is in a constant state of revision, implementation, and testing. There is no "set" design and nothing is frozen. Any change to design, anywhere, is propagated throughout the product via dependency chains and references. You can call it "Extreme Development" or "Dynamic Design" or whatever makes sense, but the concept is important and quite powerful.
panda3d.core.PNMBrush

```python
from panda3d.core import PNMBrush
```

class PNMBrush

Bases: ReferenceCount

This class is used to control the shape and color of the drawing operations performed by a PNMPainter object.

Normally, you don't create a PNMBrush directly; instead, use one of the static PNMBrush::make_*() methods provided here.

A PNMBrush is used to draw the border of a polygon or rectangle, as well as for filling its interior. When it is used to draw a border, the brush is "smeared" over the border; when it is used to fill the interior, it is tiled through the interior.

Inheritance diagram

enum BrushEffect

static makeImage(image: PNMImage, xc: float, yc: float, effect: BrushEffect) → PNMBrush

Returns a new brush that paints with the indicated image. xc and yc indicate the pixel in the center of the brush.

The brush makes a copy of the image; it is safe to deallocate or modify the image after making this call.

Return type: PNMBrush

static makePixel(color: LColorf, effect: BrushEffect) → PNMBrush

Returns a new brush that paints a single pixel of the indicated color on a border, or paints a solid color in an interior.

Return type: PNMBrush

static makeSpot(color: LColorf, radius: float, fuzzy: bool, effect: BrushEffect) → PNMBrush

Returns a new brush that paints a spot of the indicated color and radius. If fuzzy is true, the spot is fuzzy; otherwise, it is hard-edged.

Return type: PNMBrush
- Python 3.8.2
- wxPython 4.1.0
Feel free to experiment. Here are some possible enhancements:
- Add the ability to run a program when the timer expires. With a little scripting you could, for example, schedule the sending of an email.
- Add the option to auto-restart a timer after it has alarmed.
- Autosave timers on close and reload them on restart.
- Add a taskbar icon with pop-up summary of timers on mouse over.
This file contains the mainline GUI code. It displays a list of custom timer entries and three control buttons. The three buttons allow the user to:
- create a new timer
- start all existing timers
- stop all existing timers
Timer entries are displayed one per line. Each timer contains the following controls:
- a button which will run/stop the timer
- a button that will stop and reset the timer (countdown only)
- a button that will delete a timer
- a checkbox to enable a popup message when the timer expires
- display of the time remaining
- description of the timer
This is a custom control that is subclassed from a wx.BoxSizer. The fields mentioned above are arranged horizontally in this sizer.
A timer entry object can delete all of the controls within it, however, it is up to the parent object to delete the actual timer entry object. I decided that the easiest way to do this was to pass the TimerEntry constructor the address of a delete method from the parent object.
Countdown timers are updated once per second by subtracting one second from the time remaining. Absolute timers, however, must recalculate the time remaining on every timer event; otherwise, if you put the computer to sleep and then wake it up, the time remaining would not account for the sleep period.
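The distinction can be sketched in a few lines of plain Python (the function names here are illustrative, not the actual methods in the app):

```python
import datetime

def remaining_absolute(deadline, now=None):
    """Absolute timer: recompute the gap on every tick, so sleep/wake
    periods are accounted for automatically."""
    now = now or datetime.datetime.now()
    return max(deadline - now, datetime.timedelta(0))

def remaining_countdown(previous_remaining, tick=datetime.timedelta(seconds=1)):
    """Countdown timer: just subtract one tick per timer event."""
    return max(previous_remaining - tick, datetime.timedelta(0))

# A wall-clock jump (e.g. the machine slept) only affects the absolute timer:
deadline = datetime.datetime(2030, 1, 1, 12, 0, 0)
before = datetime.datetime(2030, 1, 1, 11, 0, 0)
after_sleep = before + datetime.timedelta(minutes=30)  # machine was asleep

print(remaining_absolute(deadline, before))       # 1:00:00
print(remaining_absolute(deadline, after_sleep))  # 0:30:00
```

A countdown timer that only subtracts ticks would still report 1:00:00 after the sleep, which is the behavior the paragraph above is warning about.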
This is a custom control that is subclassed from wx.Dialog. This control displays a GUI where the user can select a timer type (absolute or countdown), and specify timer values and a description. For absolute timers, the values entered represent an absolute date/time at which the alarm is to sound. Countdown timers represent a time span after which the alarm will sound. The dialog offers three closing options:
- Create - creates the timer but does not start it
- Create & Run - creates the timer and automatically starts it
- Cancel - does not create a timer
This module is used to ensure that only one copy of Timer.pyw can run at a time. It does this by creating a mutex which uses the app name (Timer.pyw) as the mutex prefix. If you want to be able to run multiple copies you can remove the lines:
```python
from GetMutex import *

if (single := GetMutex()).AlreadyRunning():
    wx.MessageBox(__file__ + " is already running", __file__, wx.OK)
    sys.exit()
```
This is the wav file that will be played whenever a timer expires. If you do not like the one provided just copy a wav file of your choice to a file of the same name.
The entire project is attached as a zip file.
The IDE is the language?
by ronaldtm - 2008-01-07 18:36

by dog - 2008-01-08 19:03

"I believe java is the only programming language to learn and use for programming. That said, I will only fully accept my belief when someone writes the equivalent of Unreal Tournament 3 in java." I would NOT go that far. You should learn a bunch of languages, including Haskell and Scheme and other different sorts of languages. The belief in a single language to learn is what creates dinosaurs!!

by dog - 2008-01-08 18:59

The biggest reason Java is going to stay around is because of the ENORMOUS amount of libraries and other tools around that support it.

by fabriziogiudici - 2008-01-08 17:31

Sorry to be repetitive, but the Dynamic Proxies are again the wrong example IMO. The _complexity_ of a Dynamic Proxy is much superior to statically generated stubs. It's just that you don't see it.

by jbailo - 2008-01-08 17:36

I believe java is the only programming language to learn and use for programming. That said, I will only fully accept my belief when someone writes the equivalent of Unreal Tournament 3 in java.

by scotty69 - 2008-01-08 15:10
by fabriziogiudici - 2008-01-08 13:11

(tons of typos above, but it should be clear - the day that they improve the user interface of this blog won't ever be too late)
by fabriziogiudici - 2008-01-08 13:11
"Using the IDE as a caterpillar to shovel piles of code around cannot be the solution. We humans are supposed to still be in control and to actually understand what we are doing"Having the IDE doing some automated things doesn't imply that we always lose control (of course, it depends on what you're doing). The point is that some things are pretty damned simple for a conceptual point of view, but they are repetitive and thus requires lot of boilerplate. Let's just think on the JavaBeans (I mean the full fledged ones, with listener property support, as Cay mentioned). The concept is very simple: for each property, you have a setter/getter, the setter fires up a certain event when the property changes, you can attach/detach listeners. Pretty coincise. It requires tenths of lines of Java code to be done. Now you can:
by winfriedmaus - 2008-01-08 08:25

Funny thing, just this morning I stumbled over this article.
by alarenal - 2008-01-08 01:20

P.S.: Sure you can introduce new language features afterwards, but what happens to the API? Will you add another 50 ways to do the same stuff the old API already does, or will you just add a language feature and only use it for additional functionality within the platform? And what would that mean for the old stuff? A lot of Java's current API is outdated already, other stuff is just not very well thought out, and other stuff is missing. In a fast-paced time the mantra of the old players seems to be "bloat is all around". Apple had success with a clean cut with Mac OS X; I don't see why it shouldn't work for Sun with Java (language and API) as well.
by evanx - 2008-01-08 10:51

i do believe in that dijkstraian law of design, together with "Systems Thinking." Adding some simple stuff to the language might mean we can take a helluva lot of code and complexity out of the libraries and our apps, and improve readability, reliability, simplicity, toolability and a few other 'abilities, of the system as a whole? Like supporting properties properly. Like everything Bill Joy et al really wanted for Java 1.0 but marketing and business pressures (and not technical) didn't permit at that time? For Swing we need first-class properties and EDT thread programming to be simple and natural - if closures help with that, then great. Failing that, we carry on as we are. However, it is not human nature to accept that which we can change for the better. When i say "better" i mean simpler, on the whole.
by alarenal - 2008-01-08 01:14

I gave Scala a brief look, same with Python and Ruby. I have similar problems setting myself in motion learning new languages with different syntax. But I also think that this is something you should do from time to time, at least to be able to get a glimpse of different concepts. Maybe those let you think of what you're doing in Java and how you're doing it in a different way. Otherwise we may find out someday we have missed the train... Interoperability with Java through Scala, JRuby, Jython, Groovy (, ...) has its charm. You can reuse existing code and migrate step by step. But what about deployment? I always found web development in Java to be a pita and therefore stuck with server-side scripting and used Java mainly for development of desktop applications. Time will tell what becomes of the Java language, the API and the JVM. Surely it will be around for another long time one way or the other, just because of the immense codebase.
by mthornton - 2008-01-08 02:23

I have long suggested the IDE approach as a way out of the operator overloading war. Methods could be annotated as 'operators' and suitable IDEs could display them as such, while regular IDEs saw the underlying method syntax.
by briansilberbauer - 2008-01-08 11:05

I've been thinking around programming 'languages' and trying out a few new ones (Ruby et al) recently (having done my share of Pascal, C/C++ etc). I found my problem is high expectation: none of the languages relieve me of the basic grind of coding, like marking blocks of code with different characters (}, END etc) and generally repeating myself by typing the same variable name every time I want to use it (I'm easily irritable). IDEs go a fair way to alleviate this in cludgy ways - code completion and templates; it's nice, but missing the point. What I want from a new language is:

1. No source code (when you think about it, source code is soo 20th century). Why not code directly to bytecode? That's pretty much what is happening when you use Eclipse (indulge me).
2. Use graphics to demarcate code blocks; a nice wire frame or something less intrusive would work for me.
3. Similarly we can get rid of keywords. This is a bit more difficult, as we would need some kind of shortcut syntax for creating for loops etc - though again, most IDEs support this in a way already.
4. Variable names could be seen as tokens rather than text (sorry, the next bit is going to be rushed, aperitif is awaiting): no more imports (the full namespace will be used but hidden from normal view), you should only type the variable name when creating it, else you are selecting it from a known list, or something.

Of course, creating the language spec would mean stipulating the L&F to an extent, but I don't see that as a problem. Anyway, that's the germ of the idea; I'd be interested in comment on this. Evan, if you are back in town give me a call.

Brian
This is the mail archive of the libstdc++@gcc.gnu.org mailing list for the libstdc++ project.
On Thu, Nov 28, 2002 at 01:28:29PM -0600, Benjamin Kosnik wrote:
> >I'm about to commit it as follows
> >
> > #ifdef __GNUC__
> > #if __GNUC__ < 3
> > #include <hash_map.h>
> > namespace Sgi { using ::hash_map; }; // inherit globals
> > #else
> > #include <ext/hash_map>
> > #if __GNUC_MINOR__ == 0
> > namespace Sgi = std; // GCC 3.0
> > #else
> > namespace Sgi = ::__gnu_cxx; // GCC 3.1 and later
> > #endif
> > #endif
> > #else // ... there are other compilers, right?
> > namespace Sgi = std;
> > #endif
>
> FYI there are no other compilers officially supported.

Isn't this meant for users to put in their own code? Just a few of them use compilers other than gcc. Probably the final #else clause needs a sample #include in it if just for completeness' sake.

When C++0x comes out (with a hash_map of its own, different from the one in ext/) this will get rather more complicated, I suppose.

Nathan Myers
ncm-nospam@cantrip.org
|
http://gcc.gnu.org/ml/libstdc++/2002-11/msg00333.html
|
crawl-001
|
refinedweb
| 163
| 76.82
|
On Mon, Apr 9, 2018 at 7:11 AM, Michael Paquier <mich...@paquier.xyz> wrote:
> Hi all,
>
> I was just going through pg_rewind's code, and noticed the following
> pearl:
>
>     /*
>      * Don't allow pg_rewind to be run as root, to avoid overwriting the
>      * ownership of files in the data directory. We need only check for root
>      * -- any other user won't have sufficient permissions to modify files in
>      * the data directory.
>      */
> #ifndef WIN32
>     if (geteuid() == 0)
>     {
>         fprintf(stderr, _("cannot be executed by \"root\"\n"));
>         fprintf(stderr, _("You must run %s as the PostgreSQL superuser.\n"),
>                 progname);
>     }
> #endif
>
> While that's nice to inform the user about the problem, that actually
> does not prevent pg_rewind to run as root. Attached is a patch, which
> needs a back-patch down to 9.5.

Seems simple enough and the right thing to do, but I wonder if we should
really backpatch it. Yes, the behaviour is not great now, but there is also
a non-zero risk of breaking people's automated failover scripts if we
backpatch it, isn't there?

--
Magnus Hagander
Me: <>
Work: <>
|
https://www.mail-archive.com/pgsql-hackers@lists.postgresql.org/msg11602.html
|
CC-MAIN-2018-43
|
refinedweb
| 183
| 69.72
|
... 2nd argument type being known:

#a :: Outer.Outer -> b
-- New "# Resolution" rule: The first argument is in module Outer, so
-- resolve to Outer.a, and now we know the full type of Outer.a:
Outer.a :: Outer.Outer -> Inner.Inner
-- Due to the definition of (.) and its 1st argument (M.Record 42)
-- Due to type of 'get' and known type.

Let's alter # resolution so it expects a type Lens a b where a is known, and looks in the module that defines a. To make nicer-looking examples, I'll also assume we can write deriving (Lens) to make GHC generate lenses for the fields instead of get functions.
import qualified M

-- In M.hs:
data Record = Record { a :: Int } deriving (Lens)
import qualified Outer

-- Outer.hs:
data Outer = Outer { a :: Inner.Inner } deriving (Lens)

import qualified Inner

-- Inner.hs:
data Inner = Inner { b :: Int } deriving (Lens)
pros
- No effect on (.) operator, which is composition as always. No "binds tighter than functions" or left-to-right vs. right-to-left controversy, and partial application works as it always did.
- Record declaration syntax remains exactly the same. Totally backward compatible, we can gradually convert existing programs. Even convert an existing record field by field, no need for a single giant patch to update everything at once.
- Works on any function, so it doesn't tie you to the implementation of a record, you can remove a field and add a compatibility shim. So no tension between directly exposing the record implementation vs. writing a bunch of set/modify boilerplate.
- It's not just record types, any lens can go in the lens composition, e.g. one for Data.Map. So you can translate imperative record.a.b[c].d = 42 to set (#d . Map.lens c . #b . #a) 42 record. Make a new operator (.>) = flip (.) if you like left to right.
- Module export list controls access over record fields as always.
- Orthogonal to records: any function can be addressed.
- "Support" for polymorphic and higher-ranked fields, via existing record update syntax. It's a cheat because it's also con #2, but I think it's a valid design to build on top of the existing syntax instead of replacing it. Maybe it can be extended to support fancy stuff later, but meanwhile it solves the simple case while not precluding the complicated one.'m sure if this is solvable without the set being builtin syntax, or if always.
|
https://ghc.haskell.org/trac/ghc/wiki/Records/SyntaxDirectedNameResolution?version=16
|
CC-MAIN-2015-32
|
refinedweb
| 409
| 69.28
|
fsync, fdatasync - synchronize a file's complete in-core state with that on disk
#include <unistd.h>

int fsync(int fd);
int fdatasync(int fd);
fsync(2) copies all in-core parts of a file to disk, and waits until the device reports that the transfer has completed. It does not necessarily ensure that the entry in the directory containing the file has also reached disk; for that, an explicit fsync(2) on the file descriptor of the directory is also needed.
On success, zero is returned. On error, -1 is returned, and errno is set appropriately.
In case the hard disk has write cache enabled, the data may not really be on permanent storage when fsync(2) returns.
When an ext2 file system is mounted with the sync option, directory entries are also implicitly synced by fsync(2).
On kernels before 2.4, fsync(2) on big files can be inefficient. An alternative might be to use the O_SYNC flag to open(2).
POSIX.1b (formerly POSIX.4)
bdflush(2), open(2), sync(2), mount(8), update(8), sync(8), fdatasync(2)
The difference between fsync(2) and fdatasync(2) is that fsync modifies the access time metadata in the i-node, while fdatasync doesn't/shouldn't. I think this is true for kernel versions >= 2.4 - for 2.2 and earlier, fdatasync was the same as fsync.
|
http://wiki.wlug.org.nz/fsync(2)?action=PageInfo
|
CC-MAIN-2015-22
|
refinedweb
| 197
| 69.38
|
Chinese Characters saved with Notepad+ turn Unicode in another editor
I can’t figure out what setting I’m missing. I realize that character encoding is really hard stuff.
I’m using Visual-Studio 2008, and VS can read and save Asian characters. But when I edit files with Notepad++, the 2-byte Asian Chinese characters are readable and savable within Notepad++, but then they are corrupted for other editors. I say corrupted because when I reopen the same files after they are saved with NP++, they come out ‘Unicode’ and/or gobbledygook in VS. I think VS-08 uses UTF, but I’m not an expert. I’m speaking specifically of comment lines in VS. I don’t know what it would do to web-text if I had any with 2-byte characters. In the future we will have Chinese web-text.
I’ve tried many different encoding settings, but nothing seems to work. If I use NP+ to save it has the same problem.
- Robert Koernke last edited by Robert Koernke
Correction:
Excuse me: they are not ‘Unicode’. They are something other-worldly. This: ‘获得是第几个’ gets turned into this: ‘»ñµÃÊǵڼ¸¸ö’
- andrecool-68 last edited by
And if you edit Chinese in Visual-Studio, what happens?
- PeterJones last edited by
Do you know what encoding VS-08 uses when it saves the file successfully with the correct Asian Chinese characters? There should be a setting someplace that defines such things.
Whatever encoding that is, you will need to set the same in Notepad++. (You might also have to turn off Settings > Preferences > MISC > Autodetect character encoding, because there are known issues with that in recent NPP versions.)
@PeterJones said:
@Robert-Koernke ,
(You might also have to turn off Settings > Preferences > MISC > Autodetect character encoding, because there are known issues with >that in recent NPP versions.)
I think that was it. I’ve tested saving and opening after turning that off, and it works.
To answer your question. I’m 98% sure it is UTF-8-BOM.
Sorry it looked like the last entire post was from @PeterJones . I’m learning how to quote and stuff on this site.
- PeterJones last edited by
If it’s UTF-8-BOM in Visual Studio, then I see no reason why Notepad++ would be messing it up. If there’s a BOM, NPP will know it’s UTF-8, and it will save it again in UTF-8-BOM.
Pasting your text, set Encoding > Convert To UTF-8-BOM, and saving with Notepad++.
获得是第几个
Then use an external hex dumper to show the 21 bytes in the file:
00000000: efbb bfe8 8eb7 e5be 97e6 98af e7ac ace5 ................ 00000010: 87a0 e4b8 aa .....
Looking up the UTF-8 representation of each
BOM = EF BB BF 获 = E8 8E B7 得 = E5 BE 97 是 = e6 98 af 第 = e7 ac ac 几 = e5 87 a0 个 = E4 B8 AA
So all of the UTF-8 representation are exactly translated into the hexdump of the file in Notepad++.
This is the correct 21 bytes for a UTF-8-BOM file with those six codepoints
I can open and close, add a space, delete it, resave – do that as many times as I want, and it doesn’t change the file.
I opened that file in MSWord: it asked me to convert file from “Encoded Text”,
Other Encoding= “Unicode (UTF-8)”, and the preview and the final result in Word was the same six glyphs.
Open it with WordPad: it shows those same six glyphs.
If you open that exact file in VS, and it shows anything but that, then VS isn’t expecting and/or cannot handle UTF-8-BOM.
So, try pasting your text into a fresh file in Notepad++, Encoding > Convert to UTF-8-BOM, save. Then try opening the file in VS. It should be right.
Also, try pasting those 6 glyphs into VS, and saving, then use a hex dumper[1] to dump the saved VS file, and show us the results
----
[1]: If you don’t have a hex dumper, but since you do have VS available, I assume you could compile this C code:
#include <stdio.h>

int main(int argc, char **argv)
{
    int c, i = 0;               /* i must be initialized before ++i below */
    FILE *fp;
    if (argc < 2) {
        printf("usage: %s <filename>\n", argv[0]);
        return 0;
    }
    if (NULL == (fp = fopen(argv[1], "rb"))) {
        perror("could not open file");
        return 1;
    }
    while (EOF != (c = fgetc(fp))) {
        printf("%02x ", c);
        if (++i % 16 == 0)
            printf("\n");
    }
    fclose(fp);
    return 0;
}
i’m glad that disabling autodetect character encoding worked for your case, thanks for reporting back 👍
@Robert-Koernke @PeterJones
i think, vs2008 uses the default codepage of the current localization, unless “save as unicode …” is selected at the documents options.
if it is selected, it will add a bom, but only to files that don’t match the current windows language codepage as i can recall.
If it’s UTF-8-BOM in Visual Studio, then I see no reason why Notepad++ would be messing it up. If there’s a BOM, NPP will know it’s UTF-8, and it will save it again in UTF-8-BOM.
it should, but i also had the problem once, that utf-8-bom was not correctly loaded if auto detect encoding was enabled. easy to spot, as the encoding bullet was somewhere nested inside the character sets menu instead of having the bullet at utf-8-bom.
maybe we should verify some tests with bom, to check if it’s the same result if autodetect character encoding is activated.
(i guess most regulars have currently disabled autodetect, until the uchardet 0.0.6 implementation is fixed, so we’d need to reenable it for some testing)
|
https://community.notepad-plus-plus.org/topic/17023/chinese-characters-saved-with-notepad-turn-unicode-in-another-editor
|
CC-MAIN-2020-16
|
refinedweb
| 956
| 71.14
|
20 September 2012 05:52 [Source: ICIS news]
TOKYO (ICIS)--Japan’s chemical exports decreased by 3.4% year on year in August, according to data from the country’s Ministry of Finance.
Its exports of inorganic chemicals declined by 12% to Y144.6bn in August, while shipments of plastic materials fell by 0.3% to Y170.3bn, the Ministry of Finance (MOF) said in a statement.
The country exported 494,112 tonnes of plastic materials in August, down by 4.6% from the same period a year earlier, according to the ministry.
The country recorded a trade deficit of Y754.1bn last month, a 3% decrease from a deficit of Y777.5bn in the corresponding period in the previous year, according to the ministry.
($1 = Y78
|
http://www.icis.com/Articles/2012/09/20/9597050/japans-august-chemical-exports-decrease-3.4-year-on.html
|
CC-MAIN-2014-41
|
refinedweb
| 106
| 70.19
|
Many of you may have seen or even used the accordion control that comes with the Atlas Control Toolkit. This custom server control can be data bound, and provides similar functionality, although with several differences.
I wrote this control primarily because I needed the functionality of an accordion to conserve space in the search area of a web application that I recently finished building for a client. Initially, I had been attempting to use both the accordion and a group of collapsible panels that are available with the Atlas Control Toolkit. While these are both excellent controls, neither really suited my needs without substantial customization. As a result, I decided to take the time and write a server control that would meet my needs. My requirements dictated that the accordion be able to maintain its selection state (to allow the user to go back to the page and revise their search). The accordion also had to contain checkboxes that could quickly all be selected or deselected, and that could be individually selected or deselected. There was also the requirement that if a pane in an accordion was expanded, all other open panes with no checked checkboxes be closed. Again, the purpose was to conserve space. Additionally, the users felt that the accordion should show them the number of selections that they had made in a particular panel.
This control was written with a specific purpose in mind, and probably would not be reusable as it currently is. I intend to take the time to eventually make it reasonably generic. Possibly add templates to the content area, etc. The purpose of this article is really to provide an example of how to implement a relatively complex composite data-bound control.
The basic idea behind the accordion is that it can bind to a data-source like any other data-bound control. What makes it slightly different, however, is that it actually binds to a collection of collections. Each pane within the accordion binds to an individual collection. The header for each pane corresponds to a property on the collection. For example, if the pane were binding to a DataTable, the header text might correspond to the TableName property. If the pane were binding to a collection, it would be necessary to create a custom collection that exposes the property that you would like to bind to the header. For example:
public class NamedList<T> : List<T>
{
    private readonly string m_strListTitle;

    public NamedList(string _strListTitle)
    {
        m_strListTitle = _strListTitle;
    }

    public string ListTitle
    {
        get { return m_strListTitle; }
    }
}
Here, you could bind the accordion pane on the ListTitle property, and that is the text that would appear on the header.
The accordion control is relatively simple to use: just drop it onto your page, and either set its properties via the designer, or declaratively. See the sample project included for an example. The accordion control can be data bound to any data-source that implements IEnumerable or IListSource. Note that currently it will not bind properly to a DataSet. If you need to bind to DataTables, instead of adding them to a DataSet, add them to an IList<DataTable>. At some point, I will add support for DataSets.
Maintaining the expanded/collapsed state of each pane provided a bit of a challenge, but once I realized the basic technique of using hidden input fields, and combined that with ensuring that the accordion pane objects implement IPostBackDataHandler, it was relatively simple to implement.
As another point of interest, I hadn't really had much of a background in injecting JavaScript. This control makes heavy use of this technique, and it really amazes me what you can achieve with JavaScript.
|
http://www.codeproject.com/KB/custom-controls/AccordionControl.aspx
|
crawl-002
|
refinedweb
| 618
| 52.19
|