Quantifying Trends
One challenge in creating trend-based trading algorithms is how to represent a trend as a number. How can a single number tell you which direction a trend is moving, and how do you compare the steepness of one trend with another?
You can compare starting values to ending values, but that doesn't tell the story of what happens in the middle. Averages, meanwhile, can be skewed by spikes or sharp drops that misrepresent the strength of the trend.
SciPy has functions that make this easy to calculate. Since this exercise only looks at trends, it is worthwhile to smooth out the price movements first, so that the derivative reflects the overall direction rather than minute-to-minute noise.
```python
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
from scipy.interpolate import interp1d, spline
from scipy.misc import derivative

# Note: pd.Series.from_csv, scipy.interpolate.spline, and scipy.misc.derivative
# have since been deprecated/removed; this snippet targets the pandas/SciPy
# versions that were current when the post was written.

plt.style.use('ggplot')

s = pd.Series.from_csv('SPY20180216.csv')
x = s.index
y = s.values

# Simple interpolation of x and y
f = interp1d(x, y)
x_fake = np.arange(x.min() + 1, x.max(), 0.1)
y_smooth = spline(x, y, x_fake)

# derivative of y with respect to x
df_dx = derivative(f, x_fake, dx=1e-6)

# Plot
fig = plt.figure()
ax1 = fig.add_subplot(211)
ax2 = fig.add_subplot(212)

ax1.plot(x_fake, y_smooth)
ax1.set_title('Smoothed Intra Day Chart of SPX')
ax1.set_xlabel("Minutes After Open")
ax1.set_ylabel("Price")

ax2.errorbar(x_fake, df_dx, lw=2)
ax2.errorbar(x_fake, np.array([0 for i in x_fake]), ls="--", lw=2)
ax2.set_title('Derivative of Price Changes')
ax2.set_xlabel("Minutes after Open")
ax2.set_ylabel("dy/dx")

txt = 'Sum of Derivative: {}'.format(df_dx.sum())
ax2.text(0.95, 0.01, txt, verticalalignment='bottom',
         horizontalalignment='right', transform=ax2.transAxes, fontsize=12)
```
Once you have a number that quantifies a trend, you can apply it to different trend types over different time frames.
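To make this concrete without the SPY data file, here is a self-contained sketch (my own illustration, not the original post's code) that reduces two synthetic series to numbers: a least-squares slope, whose sign gives the direction and whose magnitude gives the steepness, plus the fraction of positive one-step changes as a rough measure of how consistently the series trends:

```python
import numpy as np

def quantify_trend(y):
    """Return (slope, consistency) for a price series.

    slope: least-squares line fit; sign = direction, magnitude = steepness.
    consistency: fraction of one-step changes that are positive.
    """
    x = np.arange(len(y), dtype=float)
    slope = np.polyfit(x, y, 1)[0]
    consistency = float(np.mean(np.diff(y) > 0))
    return slope, consistency

rng = np.random.default_rng(0)
minutes = np.arange(200)
uptrend = 270 + 0.05 * minutes + rng.normal(0, 0.1, 200)    # gentle climb
downtrend = 270 - 0.08 * minutes + rng.normal(0, 0.1, 200)  # steeper fall

up_slope, up_cons = quantify_trend(uptrend)
down_slope, down_cons = quantify_trend(downtrend)
print(up_slope, up_cons)      # positive slope, consistency above 0.5
print(down_slope, down_cons)  # negative slope, steeper in magnitude
```

Because the same two numbers can be computed over any window, the comparison transfers directly to different time frames: compare signs for direction, and absolute slopes for steepness.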
References
The inspiration for this post was a Stack Overflow post.
One challenge of face identification arises when you want to add a new person to the existing list. Do you retrain your network on tons of this new person's face images along with everyone else's? And if we build a plain classification model, how can it classify a face it has never seen?
In this demo, we tackle the challenge by computing the similarity of two faces: one stored in our database, and one captured from the webcam.
The VGGFace model "encodes" a face into a representation of 2048 numbers.
We then compute the Euclidean distance between two "encoded" faces. If they are the same person, the distance will be low; if they come from two different people, it will be high.
During the face identification time, if the value is below a threshold, we would predict that those two pictures are the same person.
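The decision rule is easy to demonstrate with toy numbers. In this sketch, 4-dimensional vectors stand in for the real 2048-dimensional features, and the threshold is invented for illustration (the demo itself uses threshold=100 on raw VGGFace features):

```python
import numpy as np

def is_same_person(known, candidate, threshold=1.0):
    """Predict 'same person' when the Euclidean distance is under the threshold."""
    distance = float(np.linalg.norm(known - candidate))
    return distance < threshold

known_face = np.array([0.2, 0.9, 0.4, 0.7])
same_person = np.array([0.25, 0.85, 0.38, 0.72])  # small perturbation of known_face
other_person = np.array([0.9, 0.1, 0.8, 0.2])     # very different features

print(is_same_person(known_face, same_person))   # True
print(is_same_person(known_face, other_person))  # False
```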
The model itself is based on the ResNet50 architecture, which is popular for processing image data.
Let's first take a look at the demo.
The demo source code contains two files. The first file precomputes the "encoded" face features and saves the results along with the persons' names.
The second is the live demo: it captures frames from a webcam and identifies any known faces in them.
Let's jump into it.
One standard way to handle adding a new person is what is called one-shot learning. In the one-shot learning problem, you have to learn from just one example to recognize the person again.
It might be risky, though, since that one photo could be badly lit, or the pose of the face could be really bad. So my approach is to extract faces from a short video clip that contains only this person, and to calculate "mean features" by averaging the computed features over all of the extracted images.
You can find my full source in precompute_features.py, but here are the important parts that make the magic happen.
We have one or more video files for each person.
FaceExtractor's extract_faces method takes a video file and reads it frame by frame. For each frame, it crops the face area and saves the face as an image file in the save_folder.
```python
import glob
import os

FACE_IMAGES_FOLDER = "./data/face_images"
VIDEOS_FOLDER = "./data/videos"

# FaceExtractor is defined in precompute_features.py (see the repo)
extractor = FaceExtractor()
folders = list(glob.iglob(os.path.join(VIDEOS_FOLDER, '*')))
os.makedirs(FACE_IMAGES_FOLDER, exist_ok=True)
names = [os.path.basename(folder) for folder in folders]

for i, folder in enumerate(folders):
    name = names[i]
    videos = list(glob.iglob(os.path.join(folder, '*.*')))
    save_folder = os.path.join(FACE_IMAGES_FOLDER, name)
    print(save_folder)
    os.makedirs(save_folder, exist_ok=True)
    for video in videos:
        extractor.extract_faces(video, save_folder)
```
In the extract_faces method, we call the VGGFace feature extractor to generate face features like this:
```python
from keras_vggface.vggface import VGGFace

resnet50_features = VGGFace(model='resnet50',
                            include_top=False,
                            input_shape=(224, 224, 3),
                            pooling='avg')  # pooling: None, avg or max

# images is a numpy array with shape (N, 224, 224, 3)
features = resnet50_features.predict(images)
# features is a numpy array with shape (N, 2048)
```
We do this for every person's videos. Then we extract features for those face images, calculate the "mean face features" for each person, and save the result to a file for the demo part.
```python
precompute_features = []
for i, folder in enumerate(folders):
    name = names[i]
    save_folder = os.path.join(FACE_IMAGES_FOLDER, name)
    mean_features = cal_mean_feature(image_folder=save_folder)
    precompute_features.append({"name": name, "features": mean_features})

pickle_stuff("./data/precompute_features.pickle", precompute_features)
```
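cal_mean_feature is not shown here; conceptually it just stacks the per-image feature vectors and averages them. A rough sketch of that idea (with random arrays standing in for real VGGFace outputs, so the helper name and values are illustrative only):

```python
import numpy as np

def cal_mean_feature_sketch(feature_vectors):
    """Average per-image feature vectors into one 'mean face' vector."""
    stacked = np.stack(feature_vectors)  # shape (N, 2048)
    return stacked.mean(axis=0)          # shape (2048,)

rng = np.random.default_rng(42)
features = [rng.normal(size=2048) for _ in range(10)]  # stand-ins for VGGFace features
mean_features = cal_mean_feature_sketch(features)
print(mean_features.shape)  # (2048,)
```

Averaging over many frames smooths out a badly lit frame or an awkward pose, which is exactly why the video-clip approach is more robust than a single photo.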
Since we have already pre-computed each person's face features, the live demo part only needs to load the features file we just saved.
It then extracts faces from each webcam frame, computes their features, and compares them with our precomputed features to look for matches. If we find a matching face, we draw the person's name on the frame overlay.
The method below takes the features computed from a face in the webcam image and compares them with each of our known faces' features:
```python
# sp is scipy (import scipy as sp; import scipy.spatial)
def identify_face(self, features, threshold=100):
    distances = []
    for person in self.precompute_features_map:
        person_features = person.get("features")
        distance = sp.spatial.distance.euclidean(person_features, features)
        distances.append(distance)
    min_distance_value = min(distances)
    min_distance_index = distances.index(min_distance_value)
    if min_distance_value < threshold:
        return self.precompute_features_map[min_distance_index].get("name")
    else:
        return "?"
```
If the person's face features are "far away" from all of our known face features, we show a "?" on the final image overlay to indicate an unknown face.
A demo is shown below.
I have only included 3 people in this demo. As you can imagine, as the number of people grows, the model becomes more likely to confuse two similar faces.
If that happens, you could consider exploring the Siamese Network with Triplet Loss as shown in the Coursera course.
FaceNet is a good example.
For those interested, the full source code is listed in my GitHub repo. Enjoy!
UNIX is a kernel implementation.
Concepts
External
Wikipedia page about UNIX.
Standardizing UNIX, an article by David Chisnall.
The first in the series, Neil Brown's 2010-10-27 article Ghosts of Unix Past: a historical search for design patterns, introduces the concepts of file descriptors and the single, hierarchical namespace.
Next, Neil Brown's 2010-11-04 article Ghosts of Unix past, part 2: Conflated designs discusses issues with conflated designs, such as the mount command (a problem we have partly solved / solved differently with our translator approach and the virtual file system) and the plethora of flags that can be passed to the open system call.
In Neil Brown's 2010-11-16 article Ghosts of Unix past, part 3: Unfixable designs, he deals with unfixable designs, such as UNIX signals and the UNIX permission model (which is clearly inferior to a capability-based system).
UNIX File Permissions (2004) by Richard Kettlewell. (open issue documentation)
Code Cracked for Code-Splitting + SSR in Reactlandia: React Loadable + Webpack Flush Chunks and more
The code has been cracked for a long time now for server-side rendering and code-splitting individually. Until now — bar Next.js — there has been no easy-to-use common abstraction for achieving both simultaneously for the greater NPM community. This article presents several packages I've worked on for a long time now, which I hope offer the final simple + idiomatic solution to this problem.
```js
import { flushModuleIds } from 'react-universal-component/server'
import flushChunks from 'webpack-flush-chunks'

const app = ReactDOMServer.renderToString(<App />)
const { js, styles } = flushChunks(webpackStats, {
  moduleIds: flushModuleIds()
})

res.send(`
  <!doctype html>
  <html>
    <head>
      ${styles}
    </head>
    <body>
      <div id="root">${app}</div>
      ${js}
    </body>
  </html>
`)
```
TL;DR
Run one of the Universal Webpack boilerplates below, and examine the code and done. There are 4 packages and 4 boilerplates:
PACKAGES:
💩 Webpack Flush Chunks: flush webpack chunks for SSR from React Loadable or similar packages (github.com)
Extract CSS Chunks Webpack Plugin: extract CSS from chunks into multiple stylesheets + HMR (github.com)
BOILERPLATES:
universal webpack boilerplate for Webpack Flush Chunks + React Universal Component (github.com)
universal webpack boilerplate for Webpack Flush Chunks + React Universal Component (using magic comments for chunkNames) (github.com)
babel boilerplate for Webpack Flush Chunks + React Universal Component (github.com)
THE STORY
James Kyle about 2 months ago laid the groundwork for what needed to be done in his now famous article ending with “use this shit,” which even inspired Arunoda with his “dynamic” component contribution to Next.js 3.0. And so React Loadable was born.
React Loadable ultimately offered a solution that many had perhaps built something similar to in one way or another, and which shared the same shortcoming: it only solved the problem of having an async client-side component. What was most important was James Kyle's discovery near the end of the article: even though you may have an async component, you need it to render both asynchronously and synchronously, and buffering/flushing the IDs of the modules that were synchronously loaded on the server can lead to figuring out which initial chunks to send to the client.
As such, it turned out that the asynchronous aspect in fact was the easier task.
Rendering the client synchronously as you did on the server requires lots of labor.
You have to render synchronously on the server (in either a Babel or Webpack server environment) AND synchronously again on the client on the initial load of the page, so that React checksums match what was received from the server and an additional render does not occur (and so you don't lose precious milliseconds, or seconds, loading modules asynchronously). Even this turns out to be just the beginning of the problem — because what about actually discovering which chunks the module IDs or paths correspond to? What about serving them? What about stylesheets? Are you just going to always render your stylesheet as one big main.css instead of breaking it up into chunks like your js, OR have to use the new and exciting CSS-in-JS tools, which come with their own caveats: they don't generate static cacheable sheets and they eat up cycles continually at render time on both the client and server? What about HMR? The list goes on. To do code splitting plus SSR right requires a laundry list of things checked off with ease and, more importantly, idiomatically.
AND SO WEBPACK-FLUSH-CHUNKS WAS BORN
And so I studiously went on my way to solve a problem I had put up with in the past: getting the benefits of code-splitting without SSR + SEO. If you've done this, you know the drill: create a map of the components you may want to load asynchronously, where each key is the name of a component and the value is an object containing an asynchronous version and a synchronous one (for the server). Then you dynamically call require.ensure, choose which component to load, and, more importantly, toggle between rendering the synchronous one on the server and the asynchronous one on the client. And when you were done, you/I just settled for no server-side rendering and no SEO benefits.
Basically, after hearing all the excitement about code-splitting, I couldn’t believe how little we all were getting for it, and how much farther it had to go.
Anyway, everything is a progression and ultimately that’s the really exciting thing about everything what’s happening in Reactlandia. I’m not even gonna say the words “Javascript F***gue” to paint the picture, but to me it’s the exact opposite. It’s all of us evolving the perfect [software] world in a decentralized way. The future is here. Or rather, it’s just begun.
MOTIVATION: THE PAST PROBLEMS /w CODE SPLITTING
So what were the precise problems? What have been the shortcomings of code-splitting?
Webpack long ago introduced the capability of code-splitting. However, it's been an enigma for most (me at least). Firstly, just grokking the original require.ensure API and how to create your Webpack configuration to support it wasn't a natural thing for many. More importantly, if you just read how developers were using it, you'd think it was a done deal. But anyone who tried to take this feature full circle and incorporate server-side rendering was left scratching their head (to me, it was surprising how little-talked-about this was).
What's the point of code-splitting when, on your initial page load, any code evaluated server-side from additional chunks requires an additional request to fetch them?
Sure, as your users navigate your single-page app it came in handy, but what about SEO? What about not showing loading spinners after the initial page load? If you're like me, you ended up only using code-splitting for a few areas of your app where SEO wasn't important, often the lesser-used portions. There have been no off-the-shelf solutions to handle this besides Next.js, which requires committing to a "framework." Coming from the Meteor world, and having left it a year and a half ago, I was not going back to Framework land.
So, React can synchronously render itself in one go on the server. However, to do so on the client requires all the chunks used to perform that render, which obviously is different for each unique URL, authenticated user, etc. While additional asynchronous requests triggered as the user navigates your app is what code-splitting is all about, it’s sub-optimal to have to load additional chunks in the initial render. Similarly, you don’t want to just send all the chunks down to the client for that initial request, as that defeats the purpose of code-splitting. In addition, if your strategy is the former, checksums won’t match and an additional unnecessary render will happen on the client.
As a result, the goal became to get to the client precisely those chunks used in the first render, no more, no less.
SOLUTION
By now you probably get that the solution revolves around somehow triangulating the data you have available such as the rendered module IDs to determine what chunks to spit out from the server. In general I’m breaking the whole solution down into 2 parts:
- frontend
- backend
James Kyle pioneered the "frontend": React Loadable, when used on the server, skips the loading phase and synchronously renders your contained component while recording the ID of its corresponding module.
React Loadable may be used multiple times and therefore may record multiple split points.
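Stripped of React, the record-then-flush pattern is tiny: a module-scoped buffer that each split point pushes into during the synchronous server render, emptied once per request after renderToString. Here is my own toy illustration of the idea (not the library's actual source):

```javascript
// Toy sketch of the record-then-flush pattern behind React Loadable.
let buffer = [];

// Each "universal" component calls this as it renders synchronously on the server.
function recordModule(moduleId) {
  buffer.push(moduleId);
}

// Called once per request, after renderToString, to learn which modules rendered.
function flushModuleIds() {
  const ids = buffer;
  buffer = []; // reset so the next request starts clean
  return ids;
}

// Simulate a server render that hits two split points:
recordModule('./Foo');
recordModule('./Bar');
console.log(flushModuleIds()); // ['./Foo', './Bar']
console.log(flushModuleIds()); // [] -- the buffer was reset
```

Those flushed IDs are exactly the input you hand to the "backend," which maps them to chunks via your Webpack stats.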
What Webpack Flush Chunks (i.e. the "backend") does is cross-reference those module IDs (or paths, if using a Babel server) with your Webpack stats to determine the minimal set of "chunks" required to re-render those modules/components on the client. The "chunks" themselves contain ALL the files corresponding to the chunk (js, css, source maps, etc). From there, Webpack Flush Chunks outputs strings, React components, or plain arrays containing the precise javascript files (and CSS files) to embed in your HTML response. Though it has a lower-level API available to you, it also automatically handles your main, vendor, and possible bootstrap chunks, putting them in the correct order. It even creates React components you can pass to renderToStaticMarkup. Perhaps the most important thing it provides is simply the precise and detailed laundry list of things you must do to set up your Webpack config.
It’s not the hardest thing (though for whatever reason, remained arcane for so long). What it takes is getting extremely familiar with all the stats spit out by Webpack. It also requires figuring out how to match Babel paths to Webpack module IDs if you’re using a Babel server. In short the code isn’t too complex and is very maintainable, but required a lot of thought and trial and error to figure out.
That’s the end of the story. Use Webpack Flush Chunks. “Use This Shit” in the words of James Kyle.
NEXT: CSS
Well not quite. There’s also CSS. I have a lot of opinions on this one. But I’ll save most for the readme to Extract CSS Chunks Webpack Plugin.
I also have answers. It boils down to:
- the fact that Using CSS Modules already is “CSS-in-JS”
- Wasting cycles on both the client and server rendering CSS is just that — a waste!
- Cacheable Stylesheets are just that — cacheable!
- HMR is a must
- And: guess what? If you can chunkify your CSS just like your JS you don’t need to generate “render-path” CSS to send the least amount of bytes over the wire. In fact, you send less!
See with truer CSS-in-JS solutions like StyleTron, Aphrodite, etc, all your CSS is represented in code anyway, aka javascript. So you may be sending the smallest amount of CSS possible, but you’re sending it all down in the form of javascript IN ADDITION and NO MATTER WHAT.
It also turns out that if you can statically chunkify your CSS, you’ve achieved the 80–20 rule: you’ve achieved the sweet-spot of 80% optimization in how little CSS you send over the wire. See, the real problem is, for example, sending over the CSS of your private user panels to your public-facing site or vice versa. If you have many sections/panels, your CSS exponentially grows and is sent everywhere. However, if you have a straight-forward mechanism to breakup your CSS into chunks by sections you’ve solved 80% of the problem, if not more.
Again, you can read the Extract CSS Chunks Webpack Plugin readme for a lot more thoughts on this. It boils down to static determination of what CSS to send being a damn good solution. Going after that last 20% by sacrificing render cycles and having to use custom hocs — IMHO — is nitpicking and results in diminishing returns.
Did I mention that when you do in fact request async chunks, those chunks have the CSS ready for injection embedded in the javascript again? See, it creates two js chunks: one without CSS, which is sent in initial requests along with real stylesheets, AND another for async requests, which has CSS injection as usual! This gives you the smallest possible initial js bundles.
Did I mention — unlike the original Extract Text Webpack Plugin — it supports HMR! “USE THAT SHIT!”
MORE
Yes, I got more. You wanna make your own Async Component because neither React Loadable nor React Universal Component serves your needs?
Well, the core aspect for all this “universal rendering” goodness has been abstracted/extracted into its own package:
require-universal-module - Universal-Rendering Module Loading Primitives (github.com)
And what you make with it will flush chunks along with Webpack Flush Chunks just as easily as React Loadable and React Universal Component.
This time, I really won’t say anymore. USE THAT SHIT!
REACT UNIVERSAL COMPONENT
But let’s not get ahead of ourselves. A lot has been put into React Universal Component to make it the be-all-end-all — what I’m calling “Universal” — component. React Loadable kicked ass and this is its spiritual successor after all!
Basically everything under the sun (from PRs, issues, other packages, etc) has been included in it. With discretion of course ;)
I’m trying to think of a few noteworthy capabilities to point out (as just reading its readme probably is the best thing to do once again). Well, lets look at some code:
```js
import universal from 'react-universal-component'

const UniversalComponent = universal(() => import('./Foo'), {
  error: Error,
  timeout: 15000,
  minDelay: 300,
  chunkName: 'myChunkName',
  onLoad: module => replaceReducers({ ...reducers, bar: module.bar }),
  key: 'Foo', // or a function: module => module.Foo
  path: path.join(__dirname, './Foo'),
  resolve: () => require.resolveWeak('./Foo')
})

export default ({ isLoading, error }) =>
  <div>
    <UniversalComponent isLoading={isLoading} error={error} />
  </div>
```
et voila!
A 2-argument API like Next.js' dynamic, and an options argument with a super clean surface like Vue's, which itself was also inspired by React Loadable.
Not all those options are required; in fact they are all optional. You could even have an async-only component, which, as you'll read at the end of the readme (thanks to Async Reactor), may very well be the basis for an even further evolution in universal rendering.
There’s still a lot more than meets the eye. For one, you can wrap the resulting
<Universal Component/> in a HoC that does data-fetching (i.e. some separate async work) and re-uses the
isLoading prop, etc. It makes for a perfect match for Apollo and the like.
Both promises will run in parallel, and the same loading spinner will show.
You can use onLoad to utilize other exports from the module to do related work: replacing reducers, updating sagas, perhaps something with animation, etc.
If the 15000-millisecond timeout is reached, the error component will show. Thank you, Vue.
The minDelay option is different from what React Loadable has. It results in a more responsive component: instead of waiting a few ms to see anything, it always shows the spinner immediately, and you set the minimum amount of time before the async component can show. This also helps with animations. Say your page with the spinner in it slides in and the sliding animation takes 500ms; now you can prevent rendering jank from messing up your sliding animation by prolonging the page update until the animation is done. It also better solves the original problem of avoiding a flash between the loading spinner and the component, since without a minimum delay you can control, the two could appear at nearly the same time. The readme has you covered there as well. This is just off the top of my head.
It has support for HMR, which React Loadable has yet to attain. Same with all your async CSS, if you're using Extract CSS Chunks Webpack Plugin.
Lastly, instead of being restricted to promises with import(), you can use a function that calls require.ensure with a callback, which gives you the additional capabilities of require.ensure. You can actually do all the stuff Async Reactor does, including data-fetching, in it. More importantly, the props are passed as an argument so you can determine what data to fetch dynamically. This is a story for another day, but check out Async Reactor as you review this stuff. Even if you're very familiar with React Loadable, if you haven't checked that out, it will likely throw you for a loop [in a very good way].
The interface proposed by Async Reactor has a lot of potential for becoming the idiomatic future of combination async/sync “universal” rendering.
Basically it has the potential to become the greater NPM community's answer to Next.js' getInitialProps, if it gains a recursive promise resolution system like Apollo's. Read the end of the readme to hear me go off on what I think is the future.
And did I mention: you don't have to use it in a server-rendered scenario? That's just where it shines. If you read the readme (and compare it to Async Reactor), you can do some pretty cool things with the async component argument. Async-only is a primary use case for this package as well.
CONCLUSION
USE ALL THIS SHIT. THERE’S FAR MORE YOU CAN DO AS YOU’LL READ IN THE READMES. SO MANY BASES ARE COVERED. GOODBYE.
PS.
Did I mention Webpack’s “magic comments” feature which just came out is fully supported as well? Just use it and name your chunks and call
flushChunkNames instead of
flushModuleIds and pass
chunkNames to
universal(asyncWork, { chunkNames }) to make it work. It will save your server some cycles from doing all I had to do to jump through hoops to cross-reference module IDs with stats.
Richard Scarrott’s Webpack-Hot-Server-Middleware (HMR on the Server) is world class. Examine its usage in the boilerplates. It’s important and related because this is supposed to the most modern [non-restricting] React/NPM developer experience for serious apps. The boilerplates themselves — I might add — are pristine. Developer experience goodness everywhere, hopefully you find it all to be idiomatic. Enjoy!
> ALSO READ:
REDUX-FIRST ROUTER INTRO (BUILT-IN CHUNK PRE-FETCHING):
The purpose of this article is to debunk the effectiveness of route-matching components + nested routes when using… (hackernoon.com)
USING WEBPACK “MAGIC COMMENTS” FOR CODE SPLITTING:
Webpack 2.4.0, which came out a few weeks ago, launched with a very interesting new feature: "magic comments." In… (medium.com)
Tweets and other love are much appreciated. Find me on twitter @faceyspacey Want to stay current in Reactlandia? Tap/click "FOLLOW" next to the FaceySpacey publication to receive weekly Medium "Letters" via email 👇🏽
Scripting Languages For Java
By Weiqi Gao, Ph.D., OCI Senior Software Engineer
March 2001
Introduction
This month, we bring you to the world of scripting languages.
In this article, we focus on two scripting-language implementations that interact nicely with Java: Rhino and Jython.
Obviously, one of the prerequisites for using a scripting language is familiarity with the language. Good tutorials and users' guides exist for all of the languages discussed in this article (see resources). For the rest of this article, I'll assume that you are familiar with JavaScript and Python, and I'll explain how Rhino and Jython allow you to access arbitrary Java objects.
A Survey of the Field
Since the release of JDK 1.0 in 1995, many popular languages have been re-implemented in Java. These re-implementations usually give their users the ability to access any Java classes in the JVM. Some are even capable of compiling arbitrary scripts into Java class files.
At the same time, many languages gained the ability to access the JVM and all the classes in it in their original C implementations. Here is a partial list of languages that I have found on the internet as free software:
Getting Started
To start using the Java implementation of a scripting language, download the jar files, put them on the CLASSPATH, and write a shell script or two to make using the language convenient. Jython even comes with an installer.
A portion of my CLASSPATH reads:
... ~/rhino15R2pre/js.jar:~/jython-2.0/jython.jar: ...
And I have the following shell scripts:
rhino: java org.mozilla.javascript.tools.shell.Main $* (Rhino interactive shell)
rhinoc: java org.mozilla.javascript.tools.jsc.Main $* (Compile JavaScript code to class files)
jython: ... (Jython interactive shell, supplied by Jython)
jythonc: ... (Compile Python code to class files, supplied by Jython)
Here's the transcript of a couple of sessions with Rhino and Jython respectively:
[weiqi@gao] 1 $ rhino
Exploratory Programming and Testing
We are writing JavaScript and Python code while making use of standard Java classes. The interactivity and the economy of expression in the scripting languages account for their first big use: exploratory programming and testing.
Rhino and LiveConnect
LiveConnect is the mechanism that JavaScript uses to access Java objects. LiveConnect is used by both the C engine and Rhino, while Rhino adds a few extra convenience functions.
Although LiveConnect relies on a set of wrapper classes to communicate between JavaScript and Java, to the user, it works almost transparently.
Let's look at some examples.
1. Java classes and packages can be imported.
js> importPackage(java.awt, java.awt.event, Packages.javax.swing) js> // Note packages not in the java hierarchy need to be prefaced with Packages
2. Instantiating a Java object is straightforward.
js> f = new Frame("A Java Frame in Rhino") js> b = new Button("Hello") js> f.add(b); f.pack();
3. JavaBeans-style properties (defined by getter and setter methods) can be accessed directly.
js> f.visible = true
4. Java interfaces can be implemented with JavaScript code.
js> foo = function(e) { print("World") } js> bar = { actionPerformed: foo } js> listener = ActionListener(bar) js> b.addActionListener(listener)
5. Java classes can be extended with JavaScript methods.
js> baz = function(e) { quit() } js> o = { windowClosing: baz } js> adapter = WindowAdapter(o) js> f.addWindowListener(adapter)
Since Java arrays do not correspond directly to JavaScript arrays in Rhino, creating a Java array in Rhino is more complicated than in Java.
6. Java reflection is used to create a Java array in Rhino.
js> a = java.lang.reflect.Array.newInstance(Object, 3) js> a.length 3
Using Jython
For the most part, accessing Java using Jython has the same feel as using Rhino. However, since Python (the language implemented by Jython) has a richer set of features than JavaScript, Jython provides more convenience for the exploratory Java programmer.
Let's try out our Rhino examples in Jython.
1. Java classes and packages can be imported using the Python import statement (note the extra work Jython does for you, compared to Java).
>>> import java # Import the java 'module' so that we can use names therein >>> java # This works now <java package java at 3665287> >>> java.awt # Surprise, this works too <java package java.awt at 26038046> >>> from java.awt import * # Import all names from the java.awt 'module' into the current 'module' >>> Frame # We can now refer to java.awt.Frame as simply Frame <jclass java.awt.Frame at 30892413>
2. Instantiating a Java object is straightforward (you don't need the 'new' when instantiating an object in Python).
>>> f = Frame("A Java Frame in Jython") >>> b = Button("Hello") >>> f.add(b); f.pack()
3. JavaBeans-style properties (defined by getter and setter methods) can be accessed directly.
>>> f.visible = 1 # In Python, 1 is true, 0 is false
4. Java interfaces can be implemented with Python code.
>>> class MyActionListener (java.awt.event.ActionListener): ... def actionPerformed(self, event): # Note the extra self argument ... print("World") >>> listener = MyActionListener() >>> b.addActionListener(listener)
5. Java classes can be extended with Python code (however, Python's multiple inheritance does not apply when Java classes are extended).
>>> class MyWindowListener (java.awt.event.WindowAdapter): ... def windowClosing(self, event): ... from sys import exit; exit() # Calling Python's sys.exit() rather than Java's System.exit() >>> adapter = MyWindowListener() >>> f.addWindowListener(adapter)
6. Jython makes creating Java arrays easier through the jarray module.
>>> from jarray import array, zeros >>> a = array([1,2,3], 'i') array([1, 2, 3], int) >>> from java.util import HashMap >>> b = array([HashMap(), HashMap()], HashMap) array([{}, {}], java.util.HashMap)
Jython goes one step further than Rhino in JavaBeans scripting.
I have found this style of exploratory programming useful in the rapid prototyping of ideas, in getting acquainted with new Java APIs, and in informal testing of newly developed Java classes and packages.
User-Level Scripting of Applications
Embedding a scripting language into an application has been a winning strategy for application designers for a long time. With the proliferation of scripting languages for the JVM, embedding one into an application becomes as easy as choosing a language and instantiating an interpreter object of the chosen language.

Compiled from Turtle.java:

public class Turtle extends java.awt.Canvas {
    public void reset();
    public void move(int);
    public void turn(double);
    ...
}
The methods of this class do the obvious things. However, merely using objects of this class in a plain Java application isn't very exciting. You probably prescribe a series of move() and turn() actions in Java code, compile and execute the code, and observe the resulting pattern.
Summary
In this article, we illustrated in Rhino and Jython two ways of using Java-aware scripting languages: exploratory programming and testing and user level scripting of applications. But we have only scratched the surface.
These scripting languages offer much more that is worth studying and using. Visit their web sites to learn more about them.
All the scripting languages mentioned, in either their C or Java implementations, have other uses that are not directly related to the Java platform. These uses include rapid GUI application development, CGI programming, testing frameworks, and even full blown application development. The developments in this space should be interesting for Java developers to watch.
Resources
- [1] Rhino is a subproject of the Mozilla project:.
- [2] The canonical JavaScript core language guide can be found here: .
- [3] Jython has its own web site:.
- [4] Of course, Python information can be found at:.
- [5] The Python tutorial is a very gentle introduction to the syntax and facilities of Python:.
- [6] Bruce Eckel's new book in progress, Thinking in Patterns, has a chapter on Jython:.
Software Engineering Tech Trends (SETT) is a monthly newsletter featuring emerging trends in software engineering.
Taking.)
3 Comments
This is a nice little script. Unfortunately, it requires you to know the name of the window you want a screenshot of, or it'll spit out error messages about there being no such window, so instead of the import line you have, you can simply use:
import #{num}.png
Also, if there's no existing file your script spits out an error about "undefined method `match' for nil:NilClass"
Finally, instead of searching through existing files and figuring out what the next consecutive filename should be, it's much easier to just use the current month, day, hour, minute, and second to create a unique filename (which will be in consecutive order as long as the clock on your machine works properly :). This can be done with a simple bash script:
#!/bin/bash
#
# Grab a screenshot of a window and output it,
# using a unique name, to the appropriate directory
#

#
# CONFIGURABLE VARIABLES
#
IMPORT='import'
IMPORT_OPTS='-window root'   # Grab the full screen
DATE=`date +%m%d%H%M%S`
OUTPUT_PATH="$HOME/images/screenshots/full-screen"
OUTPUT_FILENAME="$OUTPUT_PATH/$DATE.png"

#
# Grab screenshot
#
$IMPORT $IMPORT_OPTS "$OUTPUT_FILENAME"
Thanks for another way to do it. Yours is simpler and more general-purpose.
I only want screenshots of one window, the same program each time, so that's why I use the window name. If that program isn't running, my script should die. I could timestamp the files, but I'm putting these files on a web page and typing in filenames made of 20 semi-random numbers is error prone and painful. I could always timestamp them and rename/renumber them later, but my way works for me.
Yep. That makes sense. Though, as far as typos go, you could use cut and paste, or write a script to transfer the files to your web page.
When working with databases, it's convenient to have some initial data. Imagine being a new developer. It will be a pain if you need to set up all this data by hand.
That's where migrations come in handy. Prisma has a super-easy way to deal with these migrations. And today, we'll be creating our seeder!
Creating the Prisma seed file
Create a new file called seed.ts in the prisma folder.
This file will handle our seeds, and the rough layout looks like this:
import { PrismaClient } from '@prisma/client';

const prisma = new PrismaClient();

async function main() {
  // Do stuff
}

main()
  .catch((e) => {
    console.error(e);
    process.exit(1);
  })
  .finally(async () => {
    await prisma.$disconnect();
  });
As you can see, this loads the Prisma client. Then we define the main function, which is an async function. And eventually, we call this function to catch errors and disconnect once it's done.
Before moving on, let's create a data file for the playlist model we made in Prisma.
I've created a seeds folder inside this prisma folder. Inside that seeds folder, create a file called playlists.ts.
export const playlists = [
  {
    title: 'Wake Up Happy',
    image: '
    uri: 'spotify:playlist:37i9dQZF1DX0UrRvztWcAU',
  },
  {
    title: 'Morning Motivation',
    image: '
    uri: 'spotify:playlist:37i9dQZF1DXc5e2bJhV6pu',
  },
  {
    title: 'Walking On Sunshine',
    image: '
    uri: 'spotify:playlist:37i9dQZF1DWYAcBZSAVhlf',
  },
];
As you can see, this resembles our fields, and we have three playlists added here.
Now head back to the seed.ts file and import this file.
import { playlists } from './seeds/playlists';
Now inside our main function, we can use the createMany function on the Prisma client.
async function main() {
  await prisma.playlist.createMany({
    data: playlists,
  });
}
This will create many playlists with the data we just added.
Running seeds in Prisma
The next thing we need is a way to run this seed script.
Before doing that, we need to install ts-node as a dev dependency:
npm install ts-node -D
Then head over to your package.json file and add a prisma section.
{
  // Other stuff
  "prisma": {
    "seed": "ts-node prisma/seed.ts"
  }
}
To run the seed, you can run the following command:
npx prisma db seed
And the seed is also run when you execute prisma migrate dev or prisma migrate reset.
You can see the seeding in action in the video below.
If you want to see the completed project, it's hosted on GitHub.
Thank you for reading, and let's connect!
Thank you for reading my blog. Feel free to subscribe to my email newsletter and connect on Facebook or Twitter.
Errata
This is the errata page for Grokking Algorithms. If you see an error, please send me an email.
Chapter 1
The full code for binary search includes this line:
mid = (low + high)
It should actually be:
mid = (low + high) / 2
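To make the corrected line concrete, here is a minimal binary search in the book's style (the list and target values are illustrative; Python 3's integer division // is used where the book's Python 2 code uses /):

```python
def binary_search(arr, item):
    """Return the index of item in the sorted list arr, or None if absent."""
    low = 0
    high = len(arr) - 1
    while low <= high:
        mid = (low + high) // 2  # the erratum's corrected midpoint
        guess = arr[mid]
        if guess == item:
            return mid
        if guess > item:
            high = mid - 1
        else:
            low = mid + 1
    return None

print(binary_search([1, 3, 5, 7, 9], 7))  # → 3
print(binary_search([1, 3, 5, 7, 9], 4))  # → None
```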
For binary search, you have to check log n elements in the worst case. For a list of 8 elements, log 8 == 3, because 2^3 == 8. So for a list of 8 numbers, you would have to check 3 numbers at most.
For 8 numbers we actually need to check 4 numbers in the worst case.
In the rocket example, I say:
Bob runs binary search with 1 billion elements, and it takes 30 ms (log~2~ 1,000,000,000 is roughly 30 because 2^30 = ~1 billion). "32 ms!"
I use 30ms first, and 32ms the second time. It should be 30ms both times.
The running times for O(n!) are wrong: 8.6 x 10^505 should be 2.7 x 10^498, and 5.4 x 10^2638 should be 1.72 x 10^2631.
Chapter 2
Page 35: "Quicksort is a faster sorting algorithm that only takes O(n log n) time. It's coming up in the next chapter." The discussion of quicksort actually occurs in chapter 4 and not the next chapter, chapter 3.
Chapter 3
On Page 41, this snippet:
def countdown(i):
print i
countdown(i-1)
Should be formatted as:
def countdown(i):
    print i
    countdown(i-1)
On Page 41, in the countdown function the base case is if i <= 0, but in the illustration below I say if i <= 1. It should be if i <= 1 in both cases.
On Page 43, the code for the bye() function should be indented the same as greet2().
Chapter 4
On Page 56, I show how the farm can be divided up into 80x80 plots. But the grid I've shown is a 14x8 grid. It should be 21x8.
I talk about partitioning an array in quicksort, with an array of five elements: "Here are all the ways you can partition this array, depending on what pivot you choose." Right after that line, I have a big image showing the various ways the array can be partitioned. The first partition should be:
[ ] <1> [ 3, 5, 2, 4 ].
In this chapter at different times I mention:
- it is better to choose a random element as the pivot
- it is better to choose the middle element of the array as the pivot.
Obviously they can't both be correct. The O(n lg n) avg case runtime of quicksort only applies if you choose a random element as the pivot. So choose a random element as the pivot.
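For illustration, here is a small sketch of quicksort using a random pivot, as the corrected advice suggests (the list values are illustrative, not from the book):

```python
import random

def quicksort(arr):
    # Base case: lists of 0 or 1 elements are already sorted
    if len(arr) < 2:
        return arr
    pivot = random.choice(arr)  # random pivot gives O(n log n) on average
    less = [x for x in arr if x < pivot]
    equal = [x for x in arr if x == pivot]
    greater = [x for x in arr if x > pivot]
    return quicksort(less) + equal + quicksort(greater)

print(quicksort([5, 3, 1, 4, 2]))  # → [1, 2, 3, 4, 5]
```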
Chapter 5
Page 82: The second line of check_voter() is print "kick them ou, which isn't complete. It should be print "kick them out!"
On Page 94, I say "once your load factor is greater than .07". That should be "0.7".
Chapter 7
On page 120, I incorrectly say "use Bellman-Ford" instead of "use Dijkstra's algorithm".
On page 122, I say that Dijkstra's algorithm only works with DAGs. This is incorrect, Dijkstra's algorithm works even if there is a cycle, as long as it is a positive weight cycle.
Page 130: In the last paragraph on the page, "Dijkstra's algorithm assumed that because you were processing the poster node, there was no faster way to get to that node." For this example, it would have been better to use the word "cheaper" instead of "faster".
Dijkstra's algorithm, page 132.
The top half of the page sets up the weights such that start -> A is 6 and start -> B is 2. However, the section at the middle of the page shows the weights being swapped:
>>> print graph['start']['a']
2
>>> print graph['start']['b']
6
The values above should be swapped.
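For clarity, here is a minimal sketch of the graph dict with the weights the diagram intends (start to A costing 6, start to B costing 2), following the book's hash-of-hashes convention:

```python
# Build the graph as nested dicts, as in the book's Dijkstra chapter
graph = {}
graph["start"] = {}
graph["start"]["a"] = 6  # start -> A costs 6
graph["start"]["b"] = 2  # start -> B costs 2

print(graph["start"]["a"])  # → 6
print(graph["start"]["b"])  # → 2
```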
Chapter 8
It takes O(2^n) time, because there are 2^n stations.
That should read "2^n subsets", not stations.
On page 149, the arrow for "New syntax! This is called a set intersection." should point to the first line, not the second line.
In the code snippet on the bottom of page 151, the last two lines should be indented so that they are inside the while loop:
while states_needed:
    ....
    states_needed -= states_covered
    final_stations.add(best_station)
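For context, here is a runnable sketch of the whole greedy loop with the corrected indentation; the station data follows the book's radio-station example:

```python
states_needed = set(["mt", "wa", "or", "id", "nv", "ut", "ca", "az"])
stations = {
    "kone": set(["id", "nv", "ut"]),
    "ktwo": set(["wa", "id", "mt"]),
    "kthree": set(["or", "nv", "ca"]),
    "kfour": set(["nv", "ut"]),
    "kfive": set(["ca", "az"]),
}

final_stations = set()
while states_needed:
    best_station = None
    states_covered = set()
    for station, states_for_station in stations.items():
        covered = states_needed & states_for_station  # set intersection
        if len(covered) > len(states_covered):
            best_station = station
            states_covered = covered
    # Both lines below belong inside the while loop (the erratum's point)
    states_needed -= states_covered
    final_stations.add(best_station)

print(sorted(final_stations))  # → ['kfive', 'kone', 'kthree', 'ktwo']
```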
In this chapter, I spelled footballer Brandon Marshall's name incorrectly.
Chapter 9
On page 173, the bottom left square of the grid should contain "$2000", not "$3500".
On page 182, the bottom left square in both grids should contain a "1", not a "0", since H == H in both cases.
Chapter 10
I mention in this chapter that Netflix used KNN for recommendations, but a reader pointed out that they used collaborative filtering.
Chapter 11
In this chapter, I talk about the SHA algorithms, and how they are used for passwords. People have found that the SHA algorithms are not secure enough. The current industry recommendation is to use one of bcrypt/scrypt/PBKDF2. Here's what a reader had to say:
Password hashes are really an algorithm class of their own. A hash algorithm such as SHA-512 can be "secure" in terms of being relatively free of predictable collisions but still be inappropriate for a password hash if it is computationally efficient. Bcrypt is intentionally hard to calculate in terms of CPU cycles, and scrypt is difficult in terms of memory needed.
The Diffie-Hellman algorithm is not the same as asymmetric encryption. Here's the comment I got from a reader:
Asymmetric encryption is indeed interesting! The problem of key exchange is related to the problem of secure encryption but is not the same. This is similar to saying that problem of distributing real, metal keys to authorized users is related to the problem of designing a secure padlock but is not the same.
Whit Diffie and Martin Hellman invented a way of allowing two parties to agree on a secret key without a man in the middle being able to obtain that secret. This is different than the problem of using that secret key to encrypt some plaintext. With Diffie Hellman key exchange the encryption key is a secret. Diffie Hellman key exchange isn't a cryptosystem at all.
Whit Diffie and Martin Hellman also proposed the idea of asymmetric encryption, but did not provide an algorithm for implementing this idea. This is separate from Diffie Hellman key exchange.
RSA encryption, on the other hand, is an asymmetric cryptosystem (the first such system widely known to the public, in fact). RSA is, today, perhaps not the best asymmetric cryptosystem (we are concerned about resistance to quantum cryptanalysis), but it is still in use and it's relatively easy to understand.
The AOL-Netscape-Sun Triune want to slay Microsoft 230
paRcat wrote to us with the latest news from the MS trial. It appears, from court documents, that AOL-Sun-Netscape (or Apollo, Zeus, and Odyssey, as they referred to themselves) have laid a plan to make Microsoft irrelevant. Reading through much of it is common sense, but it's interesting to see the plans laid out, including the tidbit that IE4-AOL is "the last" one. The three are betting heavily on the notion that everything runs off of the Internet -- and they mean everything -- pairing that with Java from Sun and Netscape in applications; they want to dominate everything.
Re:Java??? Don't get confused about its speed (Score:1)
Remember that the Java classes are essentially executable code and chips do exist that use bytecode as the native language so it may not be that bad.
Despite all this, it seems like they're playing king of the hill and not slaying the dragon.
Big Brother II? (Score:1)
I feel like the only Netscaper out here in
While I am of the mind that almost all of our media is now catering to the interests of a few corporate giants (NBC - GE, ABC - DISNEY.. etc), I do not think that the trinity that I now work for will be bad for the Internet and it's protocols. It's not the same as the electromagnetic spectrum and the silliness of the US Govt and the telco/cable/telephone cartels. Netscape and Sun have been innovators who have open sourced and RFCed a very valuable segment of the common literature. I'm going to try to help it stay that way. More folks helping Mozilla would go a long way too.
As for the content of those Internet channels that we surf, well, we know AOL's "family" policy. We each have our own opinion on that. But it's for sure that you can use these technologies to put up your own websites, and say whatever you please. Please do.
Microsoft picked the fight. Just maybe someone else will win.
(Then of course, capitalism will fall into ruin and we'll all live free, liberated, able to lead our lives productively without having to sell our labor -- but that's certainly not my employer's opinion!)
Re:Hello!! Reality check time. (Score:1)
It is somewhat scary though. I don't want AOL everywhere. I really, really dislike them. To put it lightly.
Same applies to MS, though.
oops, typos, im tired (Score:1)
they = the leaders on Terminus
"Anacreon being taken over" should be "Anacreon taking over them"
Sorry.. heh
Re:great comments! (Score:1)
Yes, I've seen the other coverage. That's why the tone of the article seems so strange. And you're right about Boies. He not only pulls these things off, he does them with just the right timing and theatrical sense. Thing is, though, that all else about the deal aside, if AOL, which MS says is now so powerful that they can challenge MS in the marketplace, is afraid to annoy MS by dropping MS's browser in favor of one AOL owns, that only shores up the proposition that MS has a monopoly. It certainly doesn't help MS's case any.
No influence? (Score:1)
Let me get this straight. The people who put "MS" in "MSNBC" have no influence whatsoever over the editorial content?
They aren't involved in hiring or firing anyone?
They don't provide any advertising revenue?
They don't provide any funding in any form (such as paychecks, lowered/eliminated prices on hardware/software)?
No influence? Or invisible influence? Or all-pervasive influence that the (no offense) serfs such as yourself don't even notice?
I'm sure you've never gotten a call from MS in the middle of the night: "Nice family you've got there. Be a shame if somethin' happen'd to it." But that doesn't mean they have no influence.
--
"Please remember that how you say something is often more important than what you say." - Rob Malda
Anybody read Neal Stephenson's Snow Crash? (Score:1)
Spooky isn't it?
--Blerik
*-Future not available in some areas. (Score:2)
Sun's vision of fat servers and dumb clients? Maybe. There are certainly a few issues which are going to work against that -- privacy, and games. I might want to type, send, and store my email on a remote server, but I'm going to be a whole lot less trusting to put personal finances and information on an online "excel/word" application to be stored and managed for me.
Even WORSE, it leads to "metered computing", which nobody wants. Quake type games become impossible to run, and you've got vendor lock-in with their decision of what application you run. (Well, kind of like Microsoft, huh?)
Sun has done a great job of defending its turf ever since Microsoft jumped it on the workstation space, and made a feeble attempt at the datacenter. I'm rooting for Sun here. As far as AOL? If it floats someone's boat, good for them. Just don't ram it down my throat like MSN.
Well, the author has made an interesting point, which I hate to say that I've fallen into. The point is that the battle against Microsoft is going to change the landscape in ways I may not like, win or lose. Linux is looking better even more these days. I need to install it.
Re:The AOL-Netscape-Sun Triune want to slay Micros (Score:1)
If they do, I'm sure we'll see the story on Slashdot:
Truisms for Nerds. Stuff that's Obvious.
--
"Please remember that how you say something is often more important than what you say." - Rob Malda
Seeing the future (Score:1)
I was going to post a reply to the main article... (Score:1)
I got a wonderful mental picture with hordes of gaunt and starving people frantically digging in the plains of Texas for oil; dirt was flying everywhere as they sought to compete with Standard Oil! Reminds me a lot of today's computing marketplace.

If you ask me, Microsoft is insulting our intelligence! Who in their right mind would fall for Microsoft's assertion that these three companies pose any REAL threat to its monopoly? Not me, says I.

I'm sick to death of Microsoft's heavy-handedness in the computing industry - it's time for a change! I laud these three (now two) companies' (AOL and Sun) partnership; it may someday become competition for Microsoft. Someday.
Re:Only one thing scarier than Microsoft (Score:1)
"you don't have any privacy anyway, so get over it"
?
"The number of suckers born each minute doubles every 18 months."
-jafac's law
Re:great comments! (Score:1)
Anyone who would accuse Brock of having a pro-MS bias is crazy! The MS flaks and lawyers usually run for cover when he comes to ask them questions. And Brill's Content (the independent ombudsman magazine) just wrote an article commending him for being impartial in spite of MS' partnership in our site (they weren't so kind with ABC's Disney affiliation a couple months back!)
Anyway, the story we wrote on AOL wasn't supposed to be about the judge's reaction, but was a feature spun off from the trial. The idea was to take a step back and sift through all the documents ourselves and determine what we could about the nature of this blockbuster deal.
So we'd already reported the part you said, and were trying to take the story in a direction that would add value for our readers. And to satisfy our own curiosity, as I (as someone who covers AOL and Netscape) personally found the documents fascinating.
And that email from Case you mentioned -- you should have been there in court. David Boies slowly stood up and waited quietly for Warden to finish. Then at a key moment, he introduced Pittman's response, citing a rule of evidence that it should be included since it was technically part of the same document.
By the time Warden was done objecting, both Jackson and Colburn (not to mention all of us in the peanut gallery) were so curious about what the response was, that Warden had no choice but to ask Colburn about it right then and there.
It ended up turning what was looking like a big point for MS into a huge score for the government. Boies is quite an attorney.
Re:Ugh. (Score:1)
Antitrust (Score:2)
An example, Standard Oil in the 19th century. They had a near monopoly in the Oil business. If a competitor would appear, then Standard would undercut the competitor's prices until the competitor could not afford to stay in the business.
This is similar to what MS did with IE, give it away for free to kill Netscape.
It's the anti-competitive behavior that is illegal.
The paranoids are after me! (Score:1)
This reminds me of my Latin lessons... (Score:1)
During the rule of the Roman emperor Gaius Julius Caesar, Rome was prospering. He had just bumped off Pompey, and conquered Gaul. So on the whole he was doing pretty good.
Now along comes Brutus et al, and they kill him. A bit of a problem on the ol' conquering front, being dead.
Now, this understandably causes a bit of civil unrest. (and I'll get to the point in a minute..) Mark Anthony, Caesar's adopted son, takes a bit of objection to a bunch of upstarts going and killing his father, so he stirs up the public a bit. They soon are after the conspirator's blood.
When they are safely in exile (the conspirators, not Anthony), he and Octavius, and Lepidus form the second Triumvirate (three men). They take control, and are soon no better on the evil dictator front than Caesar.
Now isn't this like the AOL-Sun-Netscape Triune wanting to kill Microsoft? Let's do a little comparison...
AOL NETSCAPE SUN | MICROSOFT
ANTHONY OCTAVIUS LEPIDUS | CONSPIRATORS
There we go. And we mustn't forget the end of the second Triumvirate, they all got jealous of each other, and ended up with Ant and Octavius killing Lepidus, and Ant killing Octo.
(I apologise for any historical errors in the text...I'm kinda winging it a bit until my End of Year exams.)
Re:The "AOL PC" is a pipe dream (Score:1)
The problem with the "AOL PC" is where are the other applications, ie. Office applications??? What else could you do with an AOL PC other than surf and send email?? Not much else -- no games, no office apps, no servers, just surfing.
Look at what both Corel and Microsoft are doing with their 2000 suites. They're working towards selling the end user a "thin client" application that includes just enough functionality to allow the user to do whatever has to be done on the client side, i.e. setup, configuration, UI, etc. with the core of the application living on a server at your local neighborhood Application Service Provider (ASP). In other words, without the ASP (whether it's your corporate app servers or your ISP) your MS Office is an empty shell. This is a bad thing, just another way to lock users into proprietary software, in my opinion.
There are some interesting articles floating around that detail how Microsoft, Intel, HP, Cisco and other big players are investing in the infrastructure required for this type of application.
This is scary (Score:1)
Still, I hope that AOL/Netscape/Sun is moderately successful in their plans - I don't think they'll achieve the plans they have, but they probably WILL shake MS up quite a bit. This is going to leave a perfect opportunity for Linus's joking comments about world domination to become true. After MS's marketing stranglehold is broken, technically superior solutions will finally rise.
You might say that AOL/NS/Sun will try to FUD us to nonexistence like MS has. Well, we've got a headstart on them by around 10 million users, and we're already growing rapidly.
having similar points != same (Score:1)
Does any of the other 3:
I could go on much further. Basically, there's lots of things MS have done that just about no other company has ever done or done to that extreme. There's lots of people who seem to fall to MS's own PR - ie anyone who wants to supplant us, is just as bad as us.
Also, to focus on Sun's Java software is highly misleading. Sure, it's their biggest PR problem with respect to the open source community, but it's still a far cry from typical MS operations. Anyone can get the source code to Java. (Not even close with Windows, and MS's 'hints' of open source seem to be just starshine so far.) Sun's 'community source license' is still pretty closed, and they need to sort things out though. Neither is Java a once-off - Sun are making most of their source code available, it seems. Jini is most definitely not vapourware - how can it be vapourware if the source code has been available for downloading for months? Also, part of Java (the JavaServer Pages/servlet stuff) is coming with Apache, as source code, under the Apache license. Far cry from MS.
Java is just a tiny, but highly visible part of Sun, and they make very little revenue or profit from it directly. Sun's hardware and Solaris software is highly reliable, secure, scalable - far more so than Windows.
M$ FUD!!! (Score:1)
that's all, I am not going to believe this until I start seeing some hard evidence on things not owned by Microsoft.

I mean this really paints a BAD!! picture of the AOL-NETSCAPE-SUN trio. And just what is this legal battle over MS about? It's about Microsoft's monopoly, and who are the key players in it: AOL, Netscape, and Sun.

And I don't think that trio would try to beat MS at their own game and get in the same position as MS is in; at least Netscape and Sun are smarter than that. I am not too sure about AOL though.
Keff (Score:1)
MSNBC have been very critical of Micros~1 (Score:1)
--
Employ me! Unix,Linux,crypto/security,Perl,C/C++,distance work. Edinburgh UK.
Lame app servers is that the best you can do? (Score:1)
First GUI invented. Used to hate Xerox for dropping the project, but no doubt it's because of Luddites like yourself who were using that killer application - the lawn mower - most of the time.
1975 MS formed
Big Whoop. BAssIC. Down with IBM (who used to share source monopoles as they were).
...
1985 First 32-bit OS Available
Amiga for $500 in 1987 - NC before NCs.
4096 colors
Big question how come the following was gaining ground?
1991 Tandy 1000 SL $1200 384k RAM 8086 16 Colors
Let me see your Luddite merit badge again.
Incidentally the birth of Linux.
Hmm 1999 the PUBLIC discovers multiple GUI's.
Get off your phony evolutionary theorist high horse. The next generation is going to be more than just lame App servers over the net. That's the fucking best you can come up with? I see Open Hardware design, I see intelligent effects processor boards for High Budget Multimedia artists. They will be able to do in a virtual world what they can do on canvas and with clay. And I mean artistic effects boards that can transfer learned styles from one 3D model to another. Distorted Model Style Recognition. I see every person being able to do the work they need to do at whatever paradigm they need to do it. Not the crap that happens to be the latest bullshit fad.
You've had a desk job too long get some air on the moon.
Re:And this is different... (Score:1)
There seems to be an assumption that microsoft is somehow `evil'. This is wrong. They are just doing the best they can to make money, the same as any company will. Sun would do the same thing if they were in microsoft's shoes (as would I). Because Microsoft are a monopoly, their attempts to make money hurt the consumer, but it's not their fault they are a monopoly, it's just that free markets are vulnerable to this kind of cascade runaway where all power ends up with one organisation.
So of course, it will not be different. All we can hope is that the anti-monopoly laws are enforced sooner next time around.
Try catdoc (Score:1)
Works on all but the most complicated docs.
MSNBC on the Greek Trio (Score:1)
Tom Byrum
hey. what do you know... (Score:1)
---
Ryan Wilhelm
Lotus Notes Administrator
Executive Risk, Inc.
Re:The "AOL PC" is a pipe dream (Score:1)
Re:The "AOL PC" is a pipe dream (Score:1)
And this is different... (Score:4)
Re:And this is different... (Score:1)
Re:The "AOL PC" is a pipe dream (Score:1)
I didn't say that the current incarnation of Office 2000 is a thin client application--I said they were working towards this goal. There's plenty of press that proves this, including statements from Mr. Gates himself.
Here's an interesting article that gives some clues on where things are headed. This is not a pretty picture: 7913,00.html [news.com]
We could hold a contest or something (Score:2)
"evil three" logo, because, like you said, it almost fits the bill already. Also, Netscape is really just a part of AOL now. And, of course, AOL is by far the most evil and MS-like of the three companies.

Start with the AOL logo, and color it black, with evil-looking fire peeking around the edges. Inside, blend in a picture of a sinister Mozilla, warped and twisted from its previously good, pure form. In the monster's eye, or perhaps partly hidden behind the head, put a deep red, sinister-looking sun.

I think a logo like that would be really cool, funny, and make a statement at the same time. Anyone out there really good with the Gimp? I'm not, but I'll try my best.
Re:No influence? (Score:2)
Well, I don't know if this eases your conspiracy theories at all, but I just searched for both "GE" and "General Electric" and got "No MSNBC articles found". "Microsoft" of course provides a gazillion hits.
(GE owns NBC.)
--
The "AOL PC" is a pipe dream (Score:4)
The problem with the "AOL PC" is where are the other applications, ie. Office applications??? What else could you do with an AOL PC other than surf and send email?? Not much else -- no games, no office apps, no servers, just surfing. You may as well buy a WebTV. If you think you can build Java apps to play Quake and write heavily formatted spreadsheets, well it's not going to happen any time soon -- Java can't handle heavy applications.

Microsoft's dominance rests on at least three hinges: Windows, Microsoft Office, and Internet Explorer. OK, so you figure you can replace Windows and Internet Explorer, but you're forgetting the Big One: Microsoft Office, and I don't recall Sun or Netscape having any office application ready to roll.

As long as the business world is hopelessly addicted to MS Office, Windows will be there too. The only real threat to that market is Linux w/ Star Office or Applix or some other office suite in Linux.
Re:And this is different... (Score:1)
Re:And we think MS is bad... (Score:2)
I'm betting on microsoft.mil myself..
Microsoft: We make the things that make communications break.
Re:This reminds me of my Latin lessons... (Score:1)
Now you SEE!
Anthony Octavius Lepidus == A O L == AOL the Conspirators!!! And SUN is Several Ugly Nutkickers... and you don't even want to know about Netscape...
not the point. (Score:1)
and with AOL grabbing ICQ and WinAMP, they are gathering the data they need, and the apps they need to tie everything to Netcenter.

thank Discordia that Mozilla is open source, cause when it is ready I'm grabbing a copy and rolling my own... (need a good browser, and Mozilla is getting everything right)
if you look, (just scan some headlines) sometimes you can see what is "really" going on.
nmarshall
#include "standard_disclaimer.h"
R.U. SIRIUS: THE ONLY POSSIBLE RESPONSE
Plans != Reality (Score:1)
--
Astroturf? (Score:1)
3 posts.
1 sort of pro m$
1 sort of anti mozilla
1 other
Hmmmm . . . probably my paranoia . . .
Java (Score:1)
Just think how many times we have recently heard about so and so not using java-OS for net-puters, handheld devices, palmtops, etc?
I don't think that a Java system will ever be a commercial success. And yes, I have written some serious client/server Java.
"There is no spoon" - Neo, The Matrix
"SPOOOOOOOOON!" - The Tick, The Tick
Re:I hate to say this... (Score:1)
(shielding self from flames by people who don't realize I'm being facetious)
Re:browser market share (Score:1)
I let my gerbil chew them up and use the shavings as bedding.
"The number of suckers born each minute doubles every 18 months."
-jafac's law
Re:power mongers and whining companies (Score:1)
I hate Microsoft, but the consumer has alternatives offered to them, and all of those alternatives can do just as much as an MS product can; people just don't want to change. The average consumer -likes- Windows and doesn't want something different. MS should not be sued for that.
They do not have a monopoly, they just have a successful company. Take many European countries, for example: in Holland Windows is $300, and due to that, schools and most home users use Linux. A monopoly is when you can hike the prices and people -must- pay, like the oil monopoly once had.
Business is a mean, cruel, heartless world that MS has just done very well in.
Good Idea, but.... (Score:1)
Hrm, a triangle with an eye in the center... a good idea, because in theory if they held all your data, they could look at it... (like they'd be using any kind of encryption)
but there is one problem... AOL already *has* that logo
---------------
Chad Okere
*Sigh*.. here we go again... (Score:1)
First, just because Netscape still exists and is owned by another company it does not necessarily follow that they are doing well. If they were doing so well, they might still have a profitable browser business. Unfortunately, Microsoft feared Netscape becoming a platform unto itself, so they did everything they could to sink Netscape's browser. They did pretty well until the browser went open source. They can't touch it directly now. All they can do is hope to proprietize as many protocols and whatnot as possible so that nobody else has access to them. This was all laid out in the Halloween memos.
Secondly, cut-throat tactics can be very illegal if you have a monopoly.
They do not have a monopoly, they just have a successful company. (...) A monopoly is when you can hike the prices and people -must- pay
Once again, Windows is proprietary. Businesses have to standardize in order to keep their support costs as low as possible. If you want to exchange documents with 90% of the computer-using world, you need to have Microsoft Office. MS Office is only available on Windows and Mac platforms. The Mac platform is too risky because Microsoft can take Office away from them at any time and has threatened to do so in the past if Apple didn't do what they wanted. Businesses can't take that risk if it's not absolutely necessary.
I think the ruling in the DOJ case will confirm that Microsoft has a monopoly. It is due to network effects. They are real. They do exist, no matter what Microsoft's pet economist says. If they didn't have a monopoly, they wouldn't be able to threaten OEMs with higher prices. They wouldn't be able to keep competitors from gaining access to the OEMs, or at least limit their access. They can and do raise prices at will, just not on the retail shelves where it is highly noticeable. They raise the prices to OEMs using secret contracts that nobody else knows any details of.
There are massive barriers to entry for any new OS. The only thing that makes any competition possible at all is people DONATING their time, money, and effort into creating an alternative. That is not a sign of a healthy market. Be actually offered to let OEMs ship BeOS for free if they would just give customers the option of purchasing a machine preinstalled with it. Unfortunately, OEMs are still rather afraid of Microsoft. They can have their prices jacked through the roof if they don't behave. The only reason we see any Linux machines being offered now is because Microsoft can't do anything with the DOJ breathing down its neck. If they (by some miracle) win their case or if the result is another weak agreement, all hell will break loose and Microsoft will once again crack down on everyone who was not kissing their butt the entire time.
The bottom line is that the average computer user really doesn't have much choice since they can't buy preloaded machines from the top ten OEMs out there (only one of which is offering anything other than Windows on a desktop machine). As long as Microsoft is able to keep the OEMs from offering anything else, it will remain this way. Perhaps their grip will be broken when the trial is over. One can only hope. Businesses have virtually no choice, depending on what business they're in. They may be able to run various types of servers, but on the desktop, it's nearly always Microsoft. There is no other option for them because Office is THE standard. Trying to get businesses to switch from one standard to a new one will cause a lot of chaos. That's why Microsoft is sitting pretty. People are locked in. It would take a huge influence to move them. Unless Microsoft raises its prices above what the market will bear (which is quite a lot due to the costs of switching), their monopoly will stay intact.
Old E-mails, M$ Bias (Score:1)
One bad thing--
"Further, AOL plans to morph its ICQ instant messaging software into a desktop-based portal that would use Netscape technology as a browser -- something that could further increase its browser share."
Why, why, why do you take a very good niche product and try to make it everything? That philosophy makes software so large it creates its own gravity, and it starts to suck. Will they turn WinAmp into a full-blown sound editing environment, with MP3 support? My confidence in AOL's bidness acumen is diving past zero.
One other point... this was on MSNBC.com. (Yes, I read MSNBC; say what you will, MS knows GUI's.) However, in the many stories I have read there (many on comp./int. news) this is by far the most biased. It's not subtle how they portray these companies, and I quote...
" "Our view of AOL is, let's take the interactivity they love and have come to depend on as a necessity in their life and take pieces of it linking it back to AOL and in the process finding new revenue streams per member so we're not only making new money for adding new members but adding devices that get revenue from the members," Pittman said. "
While this may be true, this paints a VERY negative picture of AOL from an "impartial" news source (which obviously shows MSNBC isn't, making it even more insidious). Let's have a look-see at the core of the M$ plan to exploit ppl.
(Like MSNBC the channel: ever notice who buys all the ads there? All M$'s partners.)
Bottom line, you can't trust any news sources, other than
'''''';;;;;;..... (core vented. meltdown avoided. Good job Homer)
Re:Hmm...not for awhile I think. (Score:1)
*CHEER*
*CHEER*
I agree completely! When I first started (circa 286s, we live in a backwards shit-town.. *sigh*)
I at least had the base intelligence to know that if my modem had a 9-pin connector, that it must connect to one of the 9-pins in the back of the pretty box.
I also started running a BBS with no knowledge of what to do, or how. But guess what.. it became pretty popular. I have since lost my records of user feedback, but from a 486/50 with a single 2400 modem as dialin I think that's good.
The way I see it if people can't use their heads for anything other than keeping rain off their necks, they have *NO* business owning a computer at all. Using one, fine. As long as someone with half a brain takes care of the thing.
Re:Well, Elliot. First things first... (Score:1)
Call it a dead paradigm if you wish, but when people pay money for goods, they want to OWN something.
That's why marriage is more popular than prostitution.
"The number of suckers born each minute doubles every 18 months."
-jafac's law
Re:MSNBC on the Greek Trio (Score:1)
If you think that MSNBC is credible news, then...
Does anyone know if MSNBC ran a piece summarizing the judgement in the MSTemps case, and if so, how was it slanted?
Re:great comments! (Score:1)
In the best-case scenario, it could be fun, and give msnbc.com a bunch of sensational hits.
On the other hand, in the worst-case scenario. .
"The number of suckers born each minute doubles every 18 months."
-jafac's law
Re:The "AOL PC" is a pipe dream (Score:1)
Oddly enough, this form of Windows running subservient to Linux, I can handle...
Re:Microsoft positions itself as victim (Score:2)
(Obviously, the AC to whom I am responding DID notice it.)
Anyway, the article was a totally transparent: "Don't be afraid of poor abused little ol' innocent Microsoft, be afraid of this horrible three-headed monster we are depicting!"
Sheesh!
Sun, AOL, and Netscape have to band together like this just to keep their head above water against the "We WILL be the ONLY writer of any kind of software on planet earth" monster from Redmond. World dominance will not be the result of the Sun, AOL, Netscape alliance. Mere survival may be.
A blatant FUD piece to attempt to draw attention away from the REAL ISSUE!
I, for one, am not buying.
Re:And this is different... (Score:1)
Sun wants Java to be the programming language. AOL wants everyone to download applications from them for an hourly rate. Netscape wanted their browser to make the underlying OS irrelevant. You'd have a layered monopoly, each partner controlling a layer. Of course, Microsoft will just roll over and die...
Re:Speaking of M$ Office (slightly offtopic) (Score:1)
Re:This reminds me of my Latin lessons... (Score:1)
...and I completely missed this!
DOH!
Any ideas for Netscape?
submissions to me at goochieboy@hotmal.com.
The AOL-Netscape-Sun Triune want to slay Microsoft (Score:1)
Other headlines:
Experts Announce: Fire is Hot
"Pope is Catholic," Theologian Claims
--
"Please remember that how you say something is often more important than what you say." - Rob Malda
Re:And this is different... (Score:2)
Amanda Ryals? (Score:1)
Re:The "AOL PC" is a pipe dream (Score:1)
What if someone came up with a modem (POTS, cable, or xDSL) for PlayStations and/or N64s? Wouldn't be much different there, either.
Re:Well, Elliot. First things first... (Score:1)
A better comparison would probably be local phone calls. Nearly everyone takes unlimited local calls, though metered is often an option. Except for people that almost never use the phone, it's a great convenience not to have to monitor use, even if you could save a little money on metered.
People are often quite happy to rent rather than to buy, if the financial terms are right. For instance, many people lease cars - and many others have no interest in this, even though it might be more financially sensible. I think it's fair to say that people prefer to own rather than rent, and prefer flat rate to metered, but are willing to consider those options if there's a large financial benefit. Given that metered computing is rarely cheaper for most people (and certainly not slashdotters) than the fixed rates that come up, it's being squished.
If Microsoft offered Office licenses at $10/year or $400 for unlimited use, people would flock to the yearly license.
Re:Hmm...not for awhile I think. (Score:1)
MS knows GUI'S? (Score:1)
I'll assume from context you really meant "MS knows First Impressions" or something, because brother, they ain't got a clue on GUI's.
I actually PREFER to use FVWM in X to Win95 (except for cut 'n' paste).
--
"Please remember that how you say something is often more important than what you say." - Rob Malda
Re:No influence? (Score:3)
serf, huh?
I don't know how familiar you are with the news business in general. First of all if you think ANY news is not under some corporate control, you aren't paranoid enough!
All news organizations (or at least the good ones) are in a constant struggle to protect the editorial side (that is, news, etc.) from the publishing/advertising side.
In every newspaper in America, you will find ad executives furious about how some upstart reporter deigned to go and write an exposé that pissed off their client, and now they have to sweet-talk that client or lose a huge customer that's a giant source of revenue, and don't these reporters understand that they shouldn't piss these people off?
But we do understand. All too well. And we are intentionally shielded from those ad people. Most news organizations have strict firewalls to prevent reporters and editors from worrying about ads and revenue.
So no. Microsoft does not have hiring and firing power over me. No Microsoft representative has ANY input in my evaluations. And I believe firmly that if Bill Gates himself called Merrill Brown (the editor in chief of MSNBC) and told him to fire me, Merrill would say, "Bill, go take a flying fuck in a rolling donut."
You think us reporter-serfs who live, eat, breathe, and defecate scrutiny and public disclosure really wouldn't notice influence if it were there?
How come you're not so worried about GE's influence? We are FAR more tied to NBC and CNBC from a content standpoint than we are to MS.
MS paid for half of MSNBC. True. They have revenue goals they want met. They want us to use their technology. But they have NEVER repeat NEVER altered our content and news judgement. I am very impressed with the quality of my editors (and I don't say that lightly as I have authority problems and my respect is not easily earned).
This is not to say that the scenario you painted has never happened at any news organization in America. Take Disney's influence in making ABC pull a story on the lack of safety at Disney's theme parks...
But what happens? The rest of the media chews them a new asshole. As I hope they would to MSNBC if a similar scenario ever were to happen.
I know I would be the first whistleblower. It's not like we make much money as reporters anyway... we basically have nothing to lose but our reputations and love of the truth.
Sun is the threat, not AOL (Score:1)
become more involved in a struggle which increases the
opportunities for open source to become more of an alternative
for "average" consumers.
AOL's resentment towards MS has been well known for some
time - it goes back to their feeling pressured by MS to use the
IE browser.
The danger is exactly what Sun wants - fat servers and thin
clients, and metered computing in which software is downloaded
and rented. Back to the old mainframe timesharing days -
that is Sun's mentality.
Java is ok, but if it is increasingly used in this way then it
does pose a far greater threat than MS. We don't need
clients or servers, but end users who are both. The potential
of the internet is nullified if it does not become more like
a peer-to-peer network in every way. In such an environment
open source can thrive. With fat servers and thin clients,
everything is controlled at the server end - some of you
sysadmins may like that, because it takes control out of the
hands of users. But, it really puts more control in the hands
of fewer and fewer giant multinationals.
AOL is also ok. It provides a needed service for many people.
If open source cannot do the same (it already is with some
portals for Linux), then it will never reach the majority of
users. However, it is also possible for users to move beyond
that especially with Linux. We just need software to help
novices set up web sites and servers, with hands-on help
from nerds who can charge for their services.
We need lots of little centers of activity everywhere, not just
a few big ones.
Sun and AOL are forgetting what PC's are all about. End user
control. People using Windows do have the illusion that they
have control of their own systems, but many are now realizing
that they don't have much control. This is an opportunity for
Linux and other systems to move in. This strategy by
Sun and AOL will not work so long as people want to control
their own systems and store data on their own systems.
Yes, the internet is nice, but most PC users do a lot more with
their computers - games, business, desktop apps, etc. They
will not be inclined to "rent" software which is downloaded onto
their thin clients with Java because then it will become very
clear that they have absolutely no control.
Anyway, it will take some time for people to free themselves
from MS Office dependency.
In summary, this strategy will only partially work for AOL and
Sun but will hurt MS and make open source (Linux, BSD) look
a lot more attractive to a lot more people.
Re:Careful there (Score:1)
"Not Aol Anywhere...AOL EVERYWHERE!" (Score:1)
Just what we need is the #1 source of idiocy on the internet dominating everything...I mean I'm not fond of Microsoft either, but at least they don't proliferate the net with complete morons...
M$ will win (Score:2)
How can anyone compete against a company that controls time?
BitPoet
Re:This reminds me of my Latin lessons... (Score:1)
In your analogy the decaying republic turned Roman empire (i.e. proprietary s/w companies and wannabe monopolists) eventually collapses in on itself, and the real winners are the unwashed hordes at the fringes of the empire who go on to rule Europe.
Who are the unwashed hordes of barbarians at the gate who eventually win... the Goths, Visigoths, Vandals, and Franks?
Hello!! Reality check time. (Score:1)
What do you expect it to say! You never bite
the hand that feeds you!
It is an article. It has words in it. Any actual
value is buried in the noise. (sigh)
AOL is everywhere (Score:1)
Newbies just coming online now already might not be able to see the distinction between AOL and the rest of the web --- so I can definitely see someone buying a machine because they heard of AOL and all that instant messaging fun (!) from other newbie friends. And the AOL marketing will probably play off that.
=moJ
- - - - - -
swagmag.com [swagmag.com]
nitpick (Score:1)
Re:Antitrust (Score:1)
Re:great comments! (Score:2)
Well, perhaps I'm biased myself, not liking Microsoft's positions and attitudes, but I find it strange that an MSNBC article is the only one favoring Microsoft on Colburn's testimony and AOL's plans. Everyone else was reporting that the judge was openly skeptical of what point MS was trying to make with this issue, and generally not making it sound like MS scored any points.
Eg.: Colburn's memo about dropping IE, which MS tried to make much of. They weren't happy to have Case's response to that memo brought in as well, in which Case basically said that dropping IE wasn't feasible due to repercussions from MS if they did that.
Re:The "AOL PC" is a pipe dream (Score:2)
Apparently you have never installed MS Office 2000. You do not need an application server at all; just install and go. You have the option, as you have with all recent versions of MS Office, to run it from a server, a CD, or a local HDD. The server install is nothing new, and at smaller companies where storage can get tight at times, it is very helpful. I have never run Corel so I won't comment on it, but what MS is doing is nothing new; they won't make Office have to have an application server anytime soon, but you will always be able to run it from the server.
Not illegal (Score:2)
Besides, how much power would AOL have if they "won"? They would be providing a service based on open standards, through third-party telcos, with little proprietary content. What would be the barrier to entry for a better product, with the V.C. available these days to support advertising?
Re:Hmm...not for awhile I think. (Score:1)
On a PII-333 with the Symantec JVM, an SSH client written in Java (no assembly assist for the crypto) can scroll the screen far faster than I can read. I can tell when it GCs (or maybe when the network loses a packet) because it pauses long enough to make out a few words (my "standard" test is to cat an ~500K /etc/termcap).
Disabling the SSH encryption, it timed as faster than the Telnet that came with my Windows box (i.e. receiving SSH packets in the clear, doing a CRC, and displaying them was faster in Java than doing even less work in C -- I assume they didn't feel a need to tune for speed, and I did).
On a PPro-180 under FreeBSD the Sun JDK limped along fairly painfully; I could guess how many lines were in each SSH "packet". (Actually it was a dual PPro-180, but I doubt Java tried to schedule different threads on different CPUs.)
The JVM implementation makes a huge difference, but I would say Java is fast enough for a lot more tasks than people tend to give it credit for.
Re:This is scary (Score:1)
Look at what Microsoft has been investing in: content and content providers (Northwave Communications, RoadRunner, WebMD, MSNBC, etc.).
And, why did MS want Intuit? Not to sell Quicken, but to get the banking connections that Intuit has fostered...
I think Microsoft is slowly morphing into a transaction-based shell of a company. Perhaps MS will still make software in the future, perhaps they won't. But they'll be making residuals wherever they leave their investment dollars...
Why make $190/copy of Office2K when you can make $0.001/transaction on a few (hundred) million or billion transactions a day with no effort?
Re:The "AOL PC" is a pipe dream (Score:1)
Re:Who cares. .DOC and .XLS are all that matter (Score:1)
That several companies offer conversion software for Word Docs should be clue enough.
At least for spreadsheets, is there much market other than Excel (so who cares)?
Re:Hmm...not for awhile I think. (Score:1)
I don't think so (Score:1)
Corporate masturbation by Swami PCDeadananda (Score:1)
Really, an economy can't go on if it stops itself.
DUH!
Plus, at this point it should be quite clear that a company that controls the market and its consumers globally, with nowhere to put the money since it is global, only smacks of corporate onanism. Totally not the real thing. Look at it like this: a snake that bites everything it owns, including itself. Globalization is going to cause economies to literally cancel each other out if only a few control the world, leaving an equilibrium that will kill everything. On the other hand, globalization with small businesses is going to open up many opportunities.
But that means at least partial knowledge of the things you work with.
We really have to get rid of guys like this Swami DesktopComputingIsDeadanananananda.
As for marriage/prostitution, there's a lot of ownership themes in pimping. Why do you think it's illegal? It's so women should feel guilty and dirty which makes them hardly a voice in the system and so only the pimps make the money off the women's work.
Re:MSNBC on the Greek Trio (Score:2)
Where is the irony in that?
The only victim that matters is the consumer (Score:2)
If you are referring to the picture at the top of the article, you might notice that the three people at the bottom (with swords?) have the symbols of the Trio on their backs. That would make MS the 3-headed monster. I made the same mistake on immediate examination.
However, the fact that they were making plans, and had ideas on how to implement them, gives weight to the idea that the market isn't as closed as they would have us believe. That article makes me think that all these companies really want is to replace MS, not make the environment better for users.
In the end, that is (should be) the goal of anti-trust legislation: guarantee competition. If MS acted to prevent these companies from replacing their OS as dominant, that's one thing. If, on the other hand, these companies were just too wary of the development and advertising costs associated with entering the fray, why are we spending money on them? I don't remember seeing AOL or Sun or Netscape releasing an OS, MS drastically cutting the price of Winblows, and raising it back after the other died.
Basically, this trial is about the other companies being upset that MS is better at pulling the wool over the consumers' eyes than they are. I think Netscape is a greatly superior browser (my employers, unfortunately, don't), but MS has greatly superior advertising (in that it exists), and the great majority of our fellow countrypeople are more swayed by advertising than quality.
ohhhh.... bad post.... long.....
If MS attorneys had defended Standard Oil... (Score:2)
Furthermore, we would have heard that the alleged monopoly on oil could fall apart at any moment, as a multitude of people were digging in the earth with the purpose of striking oil.
And lastly, they would have told us that oil was about to be overtaken by nuclear and solar energy (and one shouldn't bother questioning whether these can provide energy for cars), so, despite Standard Oil's market share, the company wouldn't have had market power at the moment of investigation.
Nice URL (Score:4)
Correct URL is [msnbc.com]
Need a new icon for the Terrible Threesome (Score:3)
This year, they're the good guys. Next year maybe they'll be the bad guys. Ah well, if we didn't want excitement and constant change, we wouldn't be working in technology, eh?
Re:And this is different... (Score:2)
In any case, in the long run, a "monopoly" built on platform independence is still better than one that thrives on platform dependence.
Besides, companies have been trying these strategies against MS for a long time. I don't think this one would look as good as it does if it weren't for the fact that MS is currently in trial and on their best behavior.
I'm probably just writing this 'cause I'm tired... (Score:2)
Microsoft is like some ancient emperor. MS wants to get control of all the 'kingdoms' in the area. It doesn't want to make their citizens (both old and new) live in horrible conditions, but it also isn't the goal to make them live in Eden-like splendor. This means that the citizens (for the most part; not a universal rule) can still live their day-to-day lives even if everything about the empire they live in is trash. While most surely any sane person would want to leave, it's akin to wanting to walk out on a bad movie instead of escape Chinese water torture.
I'm really not a history person (despite taking it
Type A: The Martin Luther Kings. The Gandhis. These people don't often succeed, but when they do, they do so while being completely 100% in the right (subjectively) and changing the world for the better. This would be Linux, for the most part. Passive resistance. Peace, man.
Type B: The Angry Mob. In the empire analogy, these guys would consist mostly of the nobles who probably still have it pretty good. What do they do? They find the nearest town that's a main part of the empire (as in, a town of the empire before it decided taking over their neighbors' lands was fun for the whole family) and they torch the houses of all the citizens. Although they feel better, the rest of the citizens of the empire suddenly get the mistaken impression that the empire is good because of the terrorists they now have to compare it to. Instead of creating a new empire of 'nobles', they end up getting caught and in jail. Nothing is accomplished, except perhaps a bit of thankfulness on the citizens' part that the empire has locked a few cruel people in the dungeons, and a perspective that it could be worse... they could be under the hands of those freaks.
I was being quite careful... (Score:2)
(B) the raised price was for a newer version of Windows that IBM hadn't contributed dev effort to.
(A)What newer version of Windows hasn't included all the older crap (that IBM apparently helped to create)?
(B) What justification does Microsoft have for charging IBM 2-3 times as much for Windows as other OEMs just because a new version with some extra "features" was released?
(C) How do you explain the threats by Joachim Kempin that if IBM didn't stop marketing and/or offering OS/2, they would have to pay a lot more for Windows?
(D) How do you explain the deal that Kempin tried to arrange that involved having IBM stop shipping SmartSuite for six months to a year in order to receive a discount on Windows (which would have still had IBM paying more than any other major OEM)?
If the press reports seemed to be slanted against Microsoft, it's because Microsoft earned it. They were trying to use their prices to prevent IBM from competing. That's illegal if you have a monopoly, which seems to be pretty well established in court now. I believe there was even an email from a Microsoft exec to an IBM exec that stated that IBM can have Compaq's deal when IBM stops competing. Just another one of those damning emails that show exactly what Microsoft's intentions were.
great comments! (Score:3)
Anyway, for the record, when I first took the job at MSNBC, I shared all the concerns voiced here about the relationship between the news organization and Microsoft. I mean, I had just been covering the MS trial for the Mercury News, so it wasn't like I was ignorant about how MS goes about its business.
But I was very pleased to discover that MS has NEVER tried to influence the editorial content of the site. I know its hard to believe. But I know I personally never would have taken the job if I thought otherwise. Now, three months later, I am pleased to say my editors are tickled pink when I (or my colleagues) are tough on MS, and have never told me to slant my news in ANY way, let alone pro-MS.
Anyway, this story was really interesting to dig into. AOL/Sun/Netscape really look like they are trying to out-Microsoft Microsoft, in that they want to establish and control the standards, which has always been MS' game.
You expect that of AOL, but what puzzles me more than anything else is Sun's involvement. They have been pretty big open-standard proponents in the past and I'm a little surprised to see them in this role. Thoughts anyone?
Anyway, thanks for all the insight!
-Elliot
Re:And this is different... (Score:2)
Basically, more of the same
Hmm...not for awhile I think. (Score:4)
1. This seems infeasible until there are inexpensive and common high/ultrahigh-bandwidth connections to people's homes. Perhaps AOL wants to buy Qwest?
2. Is a JVM system really fast enough now to work as a real OS or even application on its own?
3. Somehow it seems to me that using the net as a giant application server is a very good way to reduce security both on the server end (cracker modifies the Java code? BOOM) and in the data stream (we would want uber-encryption on this data, and it is still decoded on the server side, returning to my previous point).
4. Who would run the massively high-speed computers to do all this processing? I would think that serving apps for x number of net users, combined with whatever encryption is needed on the data would slow most computers (I mean even SMP servers and clusters) to a crawl. And if you limit the number of connections to each server, what happens if there is a surge in users and the servers are overloaded? Can you say lawsuit?
5. The 'net, even though it is designed to be redundant, occasionally loses connection with parts of itself. How would this be handled? For those on modem access, what if you are suddenly disconnected after typing 9 pages of a term paper? Are there accounts on these servers in which your abandoned document is saved, or does it just expire as soon as the connection times out?
Tom Byrum
Will this challenge OS dominance? (Score:3)
The focus of the anti-trust trial (which gets very little fanfare in this article) is whether Microsoft currently has a monopoly over operating systems, and whether they use that monopoly maliciously. Frankly, this is just smoke up the public's ass, trying to cloud the issue.
-- Mid
What this means. (Score:4)
It is a truely desperate effort on Microsoft's part to use this kind of material as a defense. It could backfire on them. These three companies are only going through with these deals because of Microsoft's dominance. And in the end, their plans could amount to nothing more than pipe dreams.
Most of the quoted documents were apparently written by Sun. I am a Java advocate, but I'll be the first to tell you that Sun inappropriately likes to see their plight on a mythic scale. I equate some of their comments to those chain-letter type of e-mails that run around the internet comparing MS to a dragon, or a car, or a giant spider, etc.. When I was an OS/2 user (duck) I used to see these types of e-mails all the time. And we OS/2 users always held onto the belief that some day our OS would beat the evil MS. We knew it wasn't true, but that's what faith is all about.
Alot of the triad's plans sound similarly dream-like. I do think that Java will become more wide-spread but you have to have a pretty faithful imagination to think it will dominate the desktop. (Nothing would please me more but I still have a few OS/2 pains where those muscles are.) There other plans are just that. Plans. Add 50 cents and you might be able to buy a cup of coffee. Only time will tell how much they succeed.
Fortunately, the Judge in the trial seems to have a pretty good head on his shoulders. He seems to be able to recognize smoke and mirrors when he sees it. Still, only time will tell if they succeed.
However, if MS does make it through this trial completely unscathed then I don't think the triads plans amount to anything.
MS's "downfall" would have to come from another direction. And I won't hold my breath.
Re:power mongers and whining companies (Score:2)
Netscape the browser might be alive and well, but Netscape the company ain't makin a dime off it right now... Guess who made it that way?
None of those other OSes can really compete with Microsoft. MacOS is the closest thing to a competitor, but they can't do anything to cheese off MS or they lose Office and whatever else MS decides to do to them. BeOS is still too new and has no application support, and the OEMs are afraid that Microsoft will jack their prices for Windows through the roof if they offer it on desktop machines. Linux is making headway in the server arena, but that's not where Microsoft's monopoly power lies. When Microsoft has all the OEMs by the short and curlies, along with its main OS competitors (IBM and Apple), what can they do? It's blatantly obvious that Microsoft holds alot of power over these companies. Haven't you paid attention to what they did to IBM with the Windows pricing and development info? Jeez... wake up.
details (Score:2)
Hasn't MS been able to have such excruciating details excluded from public view, in order to protect their business plans?
I don't recall such intimate details of similar MS docs being published. Just excerpts of email, high level plans, etc. Perhaps this a decision by the particular publishers?
I do think the article tried to deliver a resouding "look at all this competition MS has" to the readers. As if some javabox running AOL is gonna run MS out of business. We all know it's actually gonna be linux
The only character missing from this insidious consipracy is ex-borlander and wannabe MS killer, P. Kahn. that would make Bill tremble, eh? | https://slashdot.org/story/99/06/16/1913233/the-aol-netscape-sun-triune-want-to-slay-microsoft | CC-MAIN-2017-30 | refinedweb | 9,959 | 72.56 |
Java web services and SOA
RESTful Web services made easy
By Jason Tee
TheServerSide.com
Many Java professionals are interested in learning the basics about RESTful Web services, but they often find tutorials that are overly complicated. In this quick tip, we’re going right back to the basics by demonstrating how easy it is to create a RESTful Web service using nothing more than the JDK, a simple text editor like Notepad and an installation of Tomcat 7.
First off, here’s just a little bit about RESTful Web services. They are intended to be used in the same manner that the HTTP protocol over which they run was designed. The HTTP protocol provides four basic methods: GET, POST, PUT and DELETE. In our case, we’re just going to focus on the GET method, and leave the other members of this barbershop quartet for later
Invoking a RESTful Web service
For the first iteration of our RESTful Web service implementation, we are going to have a single URL that only responds to a GET invocation. It will be accessed by the following URL:
Shooting the URL at you right up front is rather ambitious; but it’s good to take a look how the URL is composed because when we start creating the service, you will see how the various parts of the URL—namely, the context root restful, the mapping of the RESTful resource container resources, and the name of the service itself—all manifest themselves in either the code or the XML configuration files.
The requisite web application folder structure
As you would expect from its name, a Java based RESTful Web service needs to be deployed to a Java EE compliant Servlet Container, and as such the service itself must comply with the folder structure that is required by the Servlet and JSP API. Basically, that means that below the root of application, we need a WEB-INF folder, and that folder needs a deployment descriptor, a lib folder to contain any of the various JAR files that are required at runtime, and a classes folder under WEB-INF in which all of the compiled Java code will be contained:
+\(root folder)
++WEB-INF\
++++classes\
++++lib\
For this particular application, the root folder will be named _restful, which will be placed smack dab under the root of C:\.
With the required folder structure set up, it’s time to start populating the WEB-INF, classes and lib folders with the pertinent resources. Let’s start off by populating the lib directory.
Obtaining Jersey: the JSR-311 reference implementation
Implementing RESTful web services isn’t about ‘rolling your own’ implementation. JSR-311 is the API specification for implementing RESTful services, and we can leverage this JSR by grabbing an implementation of the spec. An open source implementation of JSR-311 is Jersey, and it can be downloaded at jersey.java.net.
There are a few files available for download at jersey.java.net. The one you want is the zip file containing “the Jersey jars, core dependencies and JavaDoc.” The file I downloaded was named jersey-archive-1.6.zip and was an affordable 5.7 megabytes in size. When extracted to the hard drive, it provided a sweet little lib folder with all sorts of delicious jar files inside of it.
The various jar files in the lib folder of the Jersey download need to be copied into the WEB-INF\lib folder of the Web application we are currently developing, taking care of the runtime and compile time dependencies of the Web application.
Creating the Java based resource
For the implementing, Java-based component, we are simply going to write a class named HelloWorldResource in the com.mcnz.ws package, that has a single method named getMessage() which returns the String ‘Rest Never Sleeps.’ The method itself is rather uninteresting, but what is interesting are the various annotations that decorate the code.
As you code this class, note that it must be placed under the classes directory of the WEB-INF folder, and under a folder structure that maps to the package name com.mcnz.ws.
Here’s the code. Save it in a file named HelloWorldResource.java under the directory specified above.
The implementing class
package com.mcnz.ws;
import javax.ws.rs.*;
@Path("helloworld")
public class HelloWorldResource {
@GET
@Produces("text/plain")
public String getMessage() {
return "Rest Never Sleeps";
}
}
As was mentioned before, this service is only going to respond to HTTP GET invocations, and accordingly, you can see that the getMessage() method is decorated with the @GET annotation. Furthermore, a subsequent @Produces annotation indicates that the method will simply be returning plain text ("text/plain") as the MIME type. When someone gets our Web service through the specified URL, the text String “Rest Never Sleeps” will be returned.
Right up front, it was stated that the following URL would be used to invoke this service:
Notice how the @Path("helloworld") annotation maps directly to the name of the resource specified at the end of the URL.
With the Java file saved, compile the code using the following command.
C:\> c:\_jdk1.6\bin\javac -classpath "C:\_restful\WEB-INF\lib\*"
C:\_restful\WEB-INF\classes\com\mcnz\ws\*.java
When completed, the folder containing the .java file should be accompanied by a .class file as well.
Configuring Jersey in the deployment descriptor
With the required jar files thrown into the WEB-INF\lib directory, and the HelloWorldResource.java file coded and compiled, the last step before deployment is to edit the deployment descriptor to let the Servlet container know that a Jersey implementation will be handling RESTful Web service invocations. So, create a file named web.xml, place it directly inside of the WEB-INF folder, and code it up like so:
<?xml version="1.0" encoding="UTF-8"?>
<web-app
xmlns="" version="3.0"
xmlns:xsi=""
xsi:
<servlet>
<servlet-name>RestfulContainer</servlet-name>
<servlet-class>com.sun.jersey.spi.container.servlet.ServletContainer</servlet-class>
<init-param>
<param-name>com.sun.jersey.config.property.packages</param-name>
<param-value>com.mcnz.ws</param-value>
</init-param>
<load-on-startup>1</load-on-startup>
</servlet>
<servlet-mapping>
<servlet-name>RestfulContainer</servlet-name>
<url-pattern>/resources/*</url-pattern>
</servlet-mapping>
</web-app>
There are really two items of note in this web.xml file. First is the fact that a Servlet is being configured to handle RESTful invocations. The Servlet bears the name ServletContainer and is part of the Jersey libraries that were downloaded from Sun and placed into the Web application's \lib directory.
The other important thing to notice is the url-pattern in the Servlet mapping named /resources/*. Notice how the URL-pattern, resources, maps to the part of the URL that comes directly before the name of the resource (helloworld), and after the name of the web application’s context root (restful).
Now, with the web.xml coded and saved, it’s time to package up the application as a war file and deploy it to Tomcat. As a last and final check, your development environment should have the following resources in the following folder structure:
+ \ ( root folder C:\_restful )
++ WEB-INF\
++++ web.xml
++++ lib\
++++++ asm-3.1.jar
++++++ jackson-core-asl-1.7.1.jar
++++++ jackson-jaxrs-1.7.1.jar
++++++ jackson-mapper-asl-1.7.1.jar
++++++ jackson-xc-1.7.1.jar
++++++ jersey-client-1.6.jar
++++++ jersey-core-1.6.jar
++++++ jersey-json-1.6.jar
++++++ jersey-server-1.6.jar
++++++ jettison-1.1.jar
++++++ jsr311-api-1.1.1.jar
++++ classes\
++++++ com\mcnz\ws\HelloWorldResource.class
++++++ com\mcnz\ws\HelloWorldResource.java
Deployment and testing
The following command, which as you can see is being run from the _restful directory, will take all of the resources contained within the _restful folder and package those resources up as a deployable war file. This command actually places the war file in the webapps folder of the Tomcat 7 installation. If you have Tomcat installed in a different location, you’ll need to edit the command accordingly.
C:\_restful> %JAVA_HOME%\bin\jar -cvf
C:\_tomcat\webapps\restful.war *.*
With the war file placed in the webapps directory, the only thing left to do is to start Tomcat and invoke the service.
The following command will start Tomcat, assuming the servlet engine has been installed in the _tomcat directory:
C:\_tomcat\bin> startup
Once Tomcat is started, since typing a URL into a web browser triggers an HTTP GET invocation, the RESTful Web service can be invoked through a browser using the following URL:
(Notice that the context root, restful, maps to the name of the war file, restful.war.)
And here’s what the Chrome browser looks like after invoking the service:
Summary
And that’s it! That’s how easy it is to configure a basic development environment for RESTful Web services development, and to code an extremely simple RESTful web service that responds to basic GET invocations.
Stay tuned for more tips that move beyond the GET invocation and demonstrate the use of the HTTP POST, PUT and DELETE methods.
18. | http://www.theserverside.com/tip/RESTful-Web-services-made-easy | CC-MAIN-2014-10 | refinedweb | 1,524 | 51.18 |
Important: Please read the Qt Code of Conduct -
How to create .ui file for manually edited application (.h & .cpp only)?
I have Qt 5.5 Creator and Qt 5.5 Designer working in Ubuntu 14.04.
I have walked through creation of several example projects which generate *.ui file per project allowing Qt objects to be edited in Qt Creator.
Next I explored a manually coded test application here ...
This uses .h and .cpp and image files manually edited and placed in project folder.
Files:
mainwindow.cpp
mainwindow.h
main.cpp
application.pro
application.qrc
Images:
images/copy.png
images/cut.png
images/new.png
images/open.png
images/paste.png
images/save.png
But there is no *.ui file in the project since this was not created through Qt Creator.
I can open this C++ project in QtCreator and Run to see the working app.
My first question is ..
Q. How can I create a *.ui file from a manually edited app (as above) so that it can be edited in Qt Creator and extended?
While there is a process for .h and .cpp to create .ui, it seems that there is no tool I've found to reverse engineer this process. Although I read through here ..
... ...
I thought to get around this I might be able to create a blank widget container in Qt Creator and import the above test C++ app. But I don't see how to do this in Qt Creator .. integrating external coded apps.
My second question is ..
Q. How do I integrate external .h and .cpp files (external app) into a Qt Creator created project?
I found this discussion.
- Chris Kawa Moderators last edited by Chris Kawa
How can I create a *.ui file from a manually edited app
- In Qt Creator go to
File -> New File or Project -> Qt -> Qt Designer Form. Select a widget type, e.g.
Widgetand give the file a meaningful name (usually the same as your h/cpp file):
mainwindow.ui. It will be added to you project.
- Open it in the designer, select the widget and set its property
objectNameto match the name of your class in your h/cpp (it's not necessary but recommended to keep things in order).
- In your .h file add:
namespace Ui { class MainWindow; //this is the property `objectName` from the previous step. They need to match }
and then a private field in your class declaration:
class MainWindow : ... private: Ui::MainWindow* ui; //this is again the matching name }
- in your .cpp file include the header generated by uic:
#include "ui_mainwindow.h". the
mainwindowpart is the name of your .ui file.
- in your constructor create the ui instance and setup:
MainWindow::MainWindow(QWidget* parent) : QMainWindow(parent), ui(new Ui::MainWindow) { ui->setupUi(this); }
- in the destructor destroy the ui instance:
MainWindow::~MainWindow() { delete ui; }
How do I integrate external .h and .cpp files (external app)
just open your .pro file and add them to HEADERS and SOURCES variables. | https://forum.qt.io/topic/63965/how-to-create-ui-file-for-manually-edited-application-h-cpp-only | CC-MAIN-2020-50 | refinedweb | 496 | 68.57 |
I hope this isn't too silly a question...
I have code similar to the following in my project:
public class ConfigStore {
public static class Config {
public final String setting1;
public final String setting2;
public final String setting3;
public Config(String setting1, String setting2, String setting3) {
this.setting1 = setting1;
this.setting2 = setting2;
this.setting3 = setting3;
}
}
private volatile HashMap<String, Config> store = new HashMap<String, Config>();
public void swapConfigs(HashMap<String, Config> newConfigs) {
this.store = newConfigs;
}
public Config getConfig(String name) {
return this.store.get(name);
}
}
volatile
volatile
Since changing references is an atomic operation, you won't end up with one thread modifying the reference, and the other seeing a garbage reference, even if you drop
volatile. However, the new map may not get instantly visible for some threads, which may consequently keep reading configuration from the old map for an indefinite time (or forever). So keep
volatile.
As @BeeOnRope pointed out in a comment below, there is an even stronger reason to use
volatile:
"non-volatile writes [...] don't establish a happens-before relationship between the write and subsequent reads that see the written value. This means that a thread can see a new map published through the instance variable, but this new map hasn't been fully constructed yet. This is not intuitive, but it's a consequence of the memory model, and it happens in the real word. For an object to be safely published, it must be written to a
volatile, or use a handful of other techniques.
Since you change the value very rarely, I don't think
volatile would cause any noticeable performance difference. But at any rate, correct behaviour trumps performance. | https://codedump.io/share/Zn8uWDQ0ma0J/1/in-java-is-it-safe-to-change-a-reference-to-a-hashmap-read-concurrently | CC-MAIN-2017-26 | refinedweb | 278 | 55.13 |
◽️ ️Buttons¶
There are four buttons on the top of the Tingbot. These can be used in programs to trigger functions in your code.
import tingbot from tingbot import * state = {'score': 0} @left_button.press def on_left(): state['score'] -= 1 @right_button.press def on_right(): state['score'] += 1 def loop(): screen.fill( color='black') screen.text( state['score'], color='white') tingbot.run(loop)
This is a simple counter program. Whenever the right button is pressed, the score goes up by one. On the left button, the score goes down.
@
right_button.
This ‘decorator’ marks the function to be called when a button is pressed.
buttoncan be one of: left_button, midleft_button, midright_button, right_button.
@left_button.press @midleft_button.press @midright_button.press @right_button.press def on_button(): state['score'] -= 1
Only presses shorter than a second count - anything longer counts as a ‘hold’ event.
@
Button.
hold¶
This marks the function to be called when a button is held down for longer than a second.
@
Button.
down¶
This marks the function to be called as soon as a button is pushed down. This could be the start of a ‘press’ or a ‘hold’ event.
This one is useful for games or when you want the button to be as responsive as possible. | https://tingbot-python.readthedocs.io/en/latest/buttons.html | CC-MAIN-2017-13 | refinedweb | 202 | 77.33 |
Contents tagged with monodevelop
ASP.NET Podcast Show #149 - MonoDroid Development on the Apple Macintosh OSX
Given that I have a cast on my arm, I installed the MonoDroid Development Framework for Apple Macs today. I walked through it and found that things are pretty much the same as with the MonoDroid plugin for Visual Studio 2010. This post shows the video displaying this. This video is based on MonoDroid Preview 11.1.
MonoDroid on the Mac.
UIPicker in the iPhone with MonoTouch
The UIPicker is visually different than the drop down listbox that most .NET developers are familiar, however, it is designed to perform the same type of function. It allows users to select from a fixed set of data isntead of typing in data in a text box. Programming with it is fairly simple. Inherit from the UIPickerViewModel class and then bind the data.
Here's the class:
using System;
using MonoTouch;
using MonoTouch.UIKit;
using MonoTouch.Foundation;
namespace OpenUrl
{
public class ProtocolData : UIPickerViewModel
{
public static string[] protocols = new string[]
{
"http://", "tel:","", "sms:",
"mailto:"
};
public string[] protocolNames = new string[]
{
"Web", "Phone Call", "Google Maps", "SMS", "Email"
};
AppDelegate ad;
public ProtocolData(AppDelegate pad){
ad = pad;
}
public override int GetComponentCount(UIPickerView uipv)
{
return(1);
}
public override int GetRowsInComponent( UIPickerView uipv, int comp)
{
//each component has its own count.
int rows = protocols.Length;
return(rows);
}
public override string GetTitle(UIPickerView uipv, int row, int comp)
{
//each component would get its own title.
string output = protocolNames[row];
return(output);
}
public override void Selected(UIPickerView uipv, int row, int comp)
{
ad.SelectedRow = row;
}
public override float GetComponentWidth(UIPickerView uipv, int comp){
return(300f);
}
public override float GetRowHeight(UIPickerView uipv, int comp){
return(40f);
}
}
}
And then you bind data doing something like this:
ProtocolData protocolDataSource = new ProtocolData(this);
ProtocolSelection.Model = protocolDataSource;
And there you go, you now have a UIPicker with a list of data.
Want to know more about developing with the iPhone? Check out my Wrox Blox eBook on developing applications with MonoTouch for the iPhone/iPod touch for .NET/C# developers.
The Mono / MonoTouch Soft Debugger
Honestly, I thought that it was really cool when the Novell guys put a soft debugger into Mono/MonoTouch so that it is possible to debug an application running on the iPhone Simulator or on the actual device. Basically, its a set of code inside of MonoTouch that will talk back to the debugging device. According to the document, it works in the simulator, an iPhone attached to your macintosh, or over wifi if you are ont eh same network. Thanks guys!
Want to know more about developing with the iPhone? Check out my Wrox Blox eBook on developing applications with MonoTouch for the iPhone/iPod touch for .NET/C# developers.
Be. | http://weblogs.asp.net/wallym/Tags/monodevelop | CC-MAIN-2014-52 | refinedweb | 457 | 54.63 |
Choosing a page namespace for Internationalizing DebianWiki
This page attempts to list the various problems and constraints to be taken into account before choosing a namespace... This should help choose the best solution.
Your contribution is welcome :
- Add more constraints / needs below.
- Add more namespace proposal.
- Add brilliant ideas on how to implement those namespaces too !
Contents
- Constraints and needs
- Summary
- Alternatives with Translated pages names
- Alternatives with English pages names
- fr.wiki.debian.org/Hardware
- wiki.debian.org/Hardware/French
- wiki.debian.org/HardwareFrench
- wiki.debian.org/Hardware.Fr
- wiki.debian.org/Hardware.fr
- wiki.debian.org/Hardware_fr
- wiki.debian.org/Hardware-fr
- wiki.debian.org/French/Hardware
- wiki.debian.org/FrenchHardware
- wiki.debian.org/Fr/Hardware
- wiki.debian.org/fr/Hardware
- wiki.debian.org/FrHardware
- wiki.debian.org/DebFr/Hardware
- wiki.debian.org/DebFrHardware
- Possible Tricks
- Connecting Translated pages
- History
- ToDo
- Resources
- WikiVote
Constraints and needs
- Easy to search (#Search)
- It should be easy to search.
- English version should be presented if localized version isn't available.
- Find a page using localized words (assuming the user clicks "Full-text" search).
- Clear page subject (#Subject)
Is it easy to identify the subject of a page from its title? (SupportFrench and InstallDebian.pl are prone to confusion, as opposed to fr.wiki.debian.org/Support or pl/InstallDebian.)
- Clear page language (#IdLang)
Is it easy to identify the language of a page? A French visitor couldn't guess whether he should click on Support or Assistance to get the French version (since both words are synonyms)...
- Linking translated pages (#OtherLng)
- Make it easy to find the same page in other languages. (MoinMoin doesn't have a feature to automatically connect/link localized versions of a page.)
- Moinmoin CamelCase linking (#MmLnk)
Does MoinMoin create links automatically? Pages containing a slash ("/") or a period (".") don't get linked automatically, like InstallDebian/French or InstallDebian.fr. However, single words are never converted to links by MoinMoin either, so it's not really an issue: editors have to write ["Foo"] anyway... they can write ["fr/Foo"] too.
- Country code collision (#IsoColl)
.pl can be short for Perl files or Polish translations.
- Language name collision (#LangColl)
- Some languages' names may be spelled the same way in their local languages (?).
- Translated word collision (#WrdColl)
The word Installation is the same in French and English; not to mention that proper nouns, project names, brands, and technologies aren't translated: Debian, DebianInstaller, iSCSI, HP.
- Problem mapping pages (#Map)
- It should be easy to find the different versions/languages of a page, in case a link is missing.
- Page not translated
- When a page isn't translated, the visitor should be presented with the English version.
- URL truncated by MUA (#TruncURL)
Some links may get broken by some MUAs.
- relative links (#RelLnk)
- Suffix schemes break relative links: "../Subpage" in Hardware/Wifi must be rewritten as "../../Subpage/French" for Hardware/Wifi/French.
- short URLs (#ShortURL)
It's best if URLs don't become too long.
- sub-locales
- People from Brazil will appreciate it if we handle pt_BR nicely.
Wishlist
Wishlist features, not implemented by MoinMoin.
- Automatic language Negotiation (#LangNego)
When a user requests "Hardware", the server would redirect to the visitor's preferred language (using the Accept-Language header sent in the user agent's HTTP request).
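A rough sketch of what such negotiation would do (hypothetical helper, not an existing MoinMoin API; a real implementation should honor the q-values properly):

```python
# Pick the first language from the Accept-Language header that the wiki
# actually has; fall back to the English master pages otherwise.
def pick_language(accept_language, available=("en", "fr", "de")):
    for part in accept_language.split(","):
        code = part.split(";")[0].strip().lower()[:2]  # "fr-FR;q=0.9" -> "fr"
        if code in available:
            return code
    return "en"

print(pick_language("fr-FR,fr;q=0.9,en;q=0.5"))  # -> fr
print(pick_language("ja,zh;q=0.8"))              # -> en
```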
Summary
Legend :
The namespace doesn't support the feature properly
The namespace supports the feature properly.
? Undecided / Not clear, yet.
Alternatives with Translated pages names
The following are various alternatives for translating page names for a sample 'Hardware' page, where the page title is translated.
wiki.debian.org/Materiel
Cons:
- Subject collision: the same word can be used for different subjects in different languages.
- Language collision: two languages can use the same word.
- Difficult to guess the page language (see point above).
- Difficult to list the other-language versions of a given page.
- Difficult to present the English page if {lang} doesn't exist.
Pros:
- Easy for non-English speakers to search for information.
wiki.debian.org/fr/Matériel
Cons:
- High maintenance cost: it's difficult to list the other-language versions of a given page. Renaming pages should be handled with care to avoid orphans. Also, one shouldn't forget to create the stub English page.
- Non-ASCII characters may get broken because of character encoding and conversion in URLs, MUAs, IRC (ISO-8859-*; UTF-8).
Another issue is that the translated word may not carry exactly the same meaning as the original, so the contents of the pages may tend to "fork" (like "What is Debian" and "About Debian").
Pros:
- Allowing non-ASCII characters allows proper i18n handling.
- Easy for non-English speakers to search.
Useful implementation Hints
Using the List translated pages (#translated-pages) trick could make this solution quite maintainable.
wiki.debian.org/fr/Materiel
This is a variant of #wiki.debian.org/fr/Matériel above, but without non-ASCII characters. Check the Pros and Cons above.
Cons:
- Difficult to list the other-language versions of a given page.
- Disallowing non-ASCII characters prevents proper i18n handling.
Pros:
- Easy for non-English speakers to search for information.
- URLs never get broken due to charset encoding.
Alternatives with English pages names
The following are various alternatives for translating page names for a sample 'Hardware' page, where the page title is not translated (i.e. in English).
fr.wiki.debian.org/Hardware
This can be achieved by several means:
fr.wiki.debian.org/Hardware (vhost)
Create multiple moinmoin wiki instances in multiple virtual hosts.
Cons
- Editors would have to create their accounts in each instance.
- Non-English visitors wouldn't be able to search the English pages (someone proposed using English as an underlay, which could solve this).
fr.wiki.debian.org/Hardware (rewrite)
Use Apache to rewrite fr.wiki.debian.org/Hardware to wiki.debian.org/fr/Hardware. We would need to hack MoinMoin to fall back to the English version.
Cons
- The pages are accessible through multiple URIs (fr.wiki.debian.org/Hardware and wiki.debian.org/fr/Hardware), which could lead to major confusion (accidental deletions)!
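For illustration, the mapping such a rewrite would perform can be sketched like this (the real setup would be an Apache mod_rewrite rule; the Python below only models the host-to-path logic):

```python
import re

# fr.wiki.debian.org/Hardware -> internal page /fr/Hardware;
# the plain wiki.debian.org host is left unchanged.
def rewrite(host, path):
    m = re.match(r"^([a-z]{2})\.wiki\.debian\.org$", host)
    if m:
        return "/" + m.group(1) + path
    return path

print(rewrite("fr.wiki.debian.org", "/Hardware"))  # -> /fr/Hardware
print(rewrite("wiki.debian.org", "/Hardware"))     # -> /Hardware
```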
fr.wiki.debian.org/Hardware (layers)
Modify MoinMoin to use the English version as the underlay of any other language.
Pros
- The URL scheme is the same in English and in other languages.
- If a page doesn't exist in a given language, the English version is presented.
Cons
- Non-native MoinMoin behaviour (the underlay is supposed to be for help pages).
- Google detects duplicate content (untranslated pages), which leads to a lower rating.
- Help pages may not be available in alternate languages (unless we hack MoinMoin to allow 3 layers).
- Wiki users have to sign in multiple times.
wiki.debian.org/Hardware/French
Cons
Breaks SubPage relative links.
- If we wanted to have a page about "I18n/French", we would have to name it "I18n/French/French" and "I18n/French/English"! (That's unlikely to happen.)
wiki.debian.org/HardwareFrench
Cons
Can be ambiguous: consider WhyFrench, SupportFrench, etc.
The URL could become long for some languages (esp. PortugueseBrazil).
wiki.debian.org/Hardware.Fr
Cons
Some links may get broken by some MUAs.
- Very confusing (.pl could mean Perl or Polish)?
See the trick #page-list-macro.
wiki.debian.org/Hardware.fr
Cons
Some links may get broken by some MUAs.
- Very confusing (.pl could mean Perl or Polish)?
wiki.debian.org/Hardware_fr
Cons
wiki.debian.org/Hardware-fr
Cons
The page language isn't very clearly identified (for visitors not used to this convention), but it's still fairly obvious.
wiki.debian.org/French/Hardware
wiki.debian.org/FrenchHardware
Can be ambiguous: consider FrenchWhy, FrenchSupport, etc.
The URL could become long for some languages (esp. PortugueseBrazil).
wiki.debian.org/Fr/Hardware
Help ! any cons ?
wiki.debian.org/fr/Hardware
Help ! any cons ?
Pros :
Even better than "Fr/*": since a wiki page name should start with an uppercase letter, we know that a page matching the regexp ^[a-z]{2}/.*$ is a localized version.
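This convention makes localized pages machine-recognizable; a hypothetical helper splitting a page name into its language and master page:

```python
import re

# Lowercase two-letter prefix + slash means "localized version";
# regular wiki pages start with an uppercase letter, so no collision.
LOCALIZED = re.compile(r"^([a-z]{2})/(.+)$")

def split_language(page_name):
    m = LOCALIZED.match(page_name)
    if m:
        return m.group(1), m.group(2)
    return None, page_name  # a regular (English) page

print(split_language("fr/Hardware"))  # -> ('fr', 'Hardware')
print(split_language("Hardware"))     # -> (None, 'Hardware')
```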
wiki.debian.org/FrHardware
Cons
wiki.debian.org/DebFr/Hardware
Cons
- Longer than "Fr/*"!
(I guess it was actually short for DebianFrFoobar.)
wiki.debian.org/DebFrHardware
- Longer than "Fr/*"!
Is DebDeInstall the same as DebianDesinstall? (I hope you don't have a dyslexia problem.)
Possible Tricks
The following tricks can be used to improve the Alternatives with English pages names proposals.
wiki.debian.org/fr/Materiel stub
Create linker pages to help non-English speakers find articles via the title search tool.
(Optional) To help users who don't speak English, you can create a linker page named "initial of language" + "/" + "PageInNativeLanguage". This page redirects to the English-named page with the content #redirect "Language" + "/" + "NameOfReferentPage".
Pros :
- Non-English speakers can search articles by title.
Cons :
Connecting Translated pages
PageList for alternatives (#page-list-macro)
If the page names of translated pages are formally structured (#wiki.debian.org/fr/Hardware, #wiki.debian.org/Hardware.fr, #wiki.debian.org/Hardware-fr...), it's possible to use the PageList macro to list the available languages, like:
[[PageList(re:^MacBook/[a-z][a-z]$)]]
This trick could be used in our MoinMoin theme, so it doesn't have to be added to every page.
List translated pages (#translated-pages)
It's possible to automatically list translated versions of the page (even if the page names are translated). Demonstration: TestInEnglish.
Every page should have the following header (adjust #language en):
#language en ##TranslationMasterPage:EnglishName##
And the following footer.
---- Translated versions : [[FullSearchCached(##TranslationMasterPage:TestInEnglish##)]]
It should be possible to implement this feature in Moinmoin itself. Moinmoin would do something similar to FullSearchCached, but it would also fetch the page header's #language xx and display the matching language name (rather than the page name).
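A sketch of the header-parsing part such a feature would need (hypothetical code, not an existing MoinMoin function):

```python
import re

# Read the '#language xx' processing instruction from a page header,
# defaulting to English when the instruction is absent.
def page_language(page_text, default="en"):
    m = re.search(r"^#language\s+(\S+)", page_text, re.MULTILINE)
    return m.group(1) if m else default

page = "#language fr\n##TranslationMasterPage:TestInEnglish##\nDu texte..."
print(page_language(page))  # -> fr
```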
Automatic Prefix/suffix pages (#i18nPrefix)
For fr.wiki.debian.org, wiki.debian.org/fr/* and wiki.debian.org/*.fr schemes, It's possible to automatically list alternates languages (by adding a prefix or suffix to the current page name)..
Typically, the Moinmoin template would show the links : Brasileiro - Deutsch - Español - Français - Italiano - Kurdî - Nederlands - Norsk - Polski - Português - Русский - Svenska - Türkçe - 简体中文 - 繁體中文 - فارسی for every pages (without checking if the translated page exist).
with #fr.wiki.debian.org/Hardware_rewrite It's probably not possible to detect "broken link"/"missing page".
with #fr.wiki.debian.org/Hardware_vhost It's should be possible to detect "broken link"/"missing page".
with #wiki.debian.org/fr/Hardware It's should be possible to detect "broken link"/"missing page".
with #wiki.debian.org/Hardware.fr It's should be possible to detect "broken link"/"missing page".
for all schme, detecting "broken link"/"missing page" would require serious Moinmoin patching. It's probably not a goos idea, unless the patch is included upstream.
it might also be possible to redirect (or present) the English version, when localized version is missing. again, this would involve serious Moinmoin modification.
History
PaoloPan proposal for <language-identifier><page title in english>, on 2005-08-15.
debian-www Debian Wiki: Give your opinion thread on the mailing list, on 2007-10-21.
Add SalokineTerata's proposal : stub
ToDo
Once the decision is made :
- Update editor guide.
- Rename existing pages.
- Renmame Frontpages + get Content negotiation updated ( /var/lib/python-support/python2.4/MoinMoin/i18n/fr.py ?)
Resources list of locales
WikiVote
Vote start 05/01/2008 and will finished 15/01/2008
- wiki.debian.org/Materiel
- fr.wiki.debian.org/Hardware (vhost)
- fr.wiki.debian.org/Hardware (rewrite)
- wiki.debian.org/Hardware/French
wiki.debian.org/!?HardwareFrench
- wiki.debian.org/Hardware.Fr
- wiki.debian.org/Hardware.fr
- wiki.debian.org/French/Hardware
wiki.debian.org/!?FrenchHardware
- wiki.debian.org/Fr/Hardware
wiki.debian.org/fr/Hardware 1 ?23
wiki.debian.org/!?FrHardware
- wiki.debian.org/!DebFr/Hardware
wiki.debian.org/!?DebFrHardware
Mailing lists contacted on @lists.debian.org: debian-i18n, debian-l10n-english, debian-www
Initial mail: | https://wiki.debian.org/DebianWiki/TranslationNamespace?highlight=DebianFrFoobar | CC-MAIN-2015-11 | refinedweb | 1,930 | 51.04 |
.
To get you started, I'll walk you through class generation in JaxMe. While this is pretty basic stuff, it should get you familiar enough with JaxMe to look at some of its more interesting features.
Setting up JaxMe is a piece of cake. Visit the JaxMe project site (see Resources for links) and download the binaries from one of the Apache mirrors. As of this writing, the file I downloaded was incubated-jaxme-0.2-bin.tar.gz. (The 0.3 version became available shortly before this went to press; the instructions are the same, but the filename is incubated-jaxme-0.3-bin.tar.gz). Extract this to your development machine. While you can work with JaxMe on the command line, it's a pain (lots of JAR files) -- this article uses Ant to handle JaxMe tasks. You're strongly encouraged to do the same, and all the relevant Ant files are included here, easily modifiable for your own use.
As with JAXB, you'll need some XML before you can do much of anything with JaxMe. Listing 1 shows a very simple XML schema that defines a student. Obviously, this leaves a lot to be desired, but I've kept things simple so you can focus on JaxMe rather than on schema semantics.
Listing 1. Simple student schema
If you've got your Ant build process set up correctly (described in detail at the end of this article), you can just type
ant generate to create classes from this schema. I've left these details for the end of the article so that you can focus on JaxMe and its semantics throughout the article, and then look at all the Ant specifics later on. I'd actually recommend you read through the article once, and then work through the code, piece by piece. That way, you'll have the concepts down by the time you're actually entering code, and can probably troubleshoot any problems much more quickly.
All of your generated classes will be placed in the package specified in the
targetNamespace attribute. This convention is unique to JaxMe, so you'd do well to understand how it works. Look at the URI provided as an argument to this attribute:. This is turned into a package name by the JaxMe schema compiler. First, "http://" is dropped. Then, the host name portion of the URI (in this case, "dw.ibm.com") is actually reversed -- resulting in "com.ibm.dw". That may seem a little odd, but it's actually the typical mechanism for packaging, and should make sense to those of you who are used to developing site-specific classes or beans, particularly tag libraries.
Finally, the remainder of the URI is split on the slash character (
/) and appended to the base package that's derived from the host name. So the complete package for the schema in Listing 1 turns out to be
com.ibm.dw.jaxme.student. All classes generated from this schema are dropped into this package.
In addition to providing information to JaxMe, the
targetNamespace attribute has
some XML-specific implications. It tells the schema processor to put all constructs created (like the
Address complex type) in that namespace. This means that you'll need to refer to those constructs as being in that namespace; the long and short of this is that you should define a prefix that maps to that namespace.
Note: If that last sentence didn't make any sense to you, you may need to brush up on your XML, and particularly on how XML Schema works with XML. Consult the Resources at the end of this article (as well as the other articles on the developerWorks XML zone) for more information. For now, you can continue reading, and just trust me -- but you should definitely take the time to fully understand namespaces before considering yourself a JaxMe expert!
In the schema in Listing 1, this was done with the
stu prefix. With this prefix, it's easy to define types and then refer to them (using the namespaced prefix) throughout the schema.
You also should be aware that JaxMe works great with the
include directive (which is not used in this example). For particularly large schemas, you can segment these definitions into multiple files. Then, in your top-level schema, just reference them as follows:
The XML schema processor hands off this information to JaxMe without any distinction of files, so you're advised to use multiple schemas as much as needed.
Once you have generated the classes, take time to familiarize yourself with what's been created. While these are similar to the constructs created by JAXB, you'll find a few subtle differences.
This represents the one XML element (from Listing 1) and is the basic structure for the class hierarchy. It's actually just a very simple interface that extends
StudentType (created by JaxMe) and
Element, which is part of the JaxMe runtime API. Of course, this class (as well as all other generated classes) is in the
com.ibm.dw.jaxme.student package.
Anything named
XXXType in JaxMe (where
XXX is a name like "Student" or "Address") is the definition derived from a schema. Listing 2 shows this class's source, which turns out to be pretty self-explanatory.
Listing 2. StudentType generated code
This has accessor (
getXXX()) and mutator (
setXXX()) methods for all its properties, whether they are simple string types or more complex types like
Address. Of course, those types are all represented by classes that end in "Type," which is why you see references to
AddressType in the generated listings.
AddressType.java and CollegeType.java
Now that you've seen Listing 2 and StudentType.java, these should be pretty obvious. I'll leave you to examine the source for yourself.
This file is in many ways the bridge between JaxMe and your domain-specific classes. Most important are the methods:
newInstance()creates a new element in the domain-specific context and returns it to you.
createStudentType()creates a new top-level
Studentelement.
Oddly enough, this file isn't mentioned in any of JaxMe's documentation, and isn't used in any of their examples. Personally, I've found it sort of useful, but I wouldn't recommend relying on it. You can perform all the tasks you need without it, and as it's conspicuously missing in documentation, it could easily disappear in a later release.
This XML file handles the mapping from XML to Java code (and back again). It relates elements to classes, fields to properties, and so on. Listing 3 shows the relatively simple mapping file for the sample.
Listing 3. Configuration.xml for example code
Currently, JaxMe requires you to directly modify this file to change the mapping. The most common change you might want to make is to substitute your own classes for the generated classes. Of course, you'll have to (at least in current versions of JaxMe) generate the default classes and then modify this file. It's also worth noting that this isn't really supported behavior -- the mapping supports it, but it's not particularly well tested or well documented.
Note: In future articles, I may explore this behavior in more detail. If this is of interest to you, please e-mail me or post feedback to this article, and let me know! That's the best way to request coverage of a specific topic.
This is the standard JAXB properties file, of course modified for JaxMe. It is just a single line, and tells the JAXB factory classes to use the JaxMe implementation classes, as follows:
For those of you who are familiar with JAXP, this is exactly the way that implementations like Xerces instruct the JAXP architecture to load their implementations.
The various source files in the
impl sub-directory are all concrete implementations of the interfaces created by JaxMe. Generally, you don't have to worry about these -- they handle the XML processing of your documents, and conversion to Java equivalents.
For those of you who like to look under the hood, these classes are SAX classes, as JaxMe uses SAX to parse XML. In fact, the Type classes implement (indirectly) SAX's
org.xml.sax.ContentHandler interface. You'll see methods like
startDocument() and
characters() in the source code (which is too long to show here).
Although you probably won't have to mess around with these classes much, it's good to have some basic familiarity with them. You'll use them in your code (which I'll show you shortly), and you'll also find an understanding of them quite helpful in troubleshooting and debugging.
As a final step in looking at these classes, compile them. This may seem obvious, but you wouldn't believe the questions I get that relate to lack of compilation, or class path issues (also covered, a little later). So before you try to work with your classes, be sure to compile them. As always, I find this is easiest to do with Ant, and I just use
ant compile to do the trick.
For those of you doing things the hard way (or who just have a passion for typing
javac, be sure to include jaxme-api.jar and jaxme.jar in your class path. jaxme-api.jar contains the JAXB API, while jaxme.jar has the JaxMe implementation classes. As output, you should get everything in both the base and
impl directories compiled. Finally, you'll want to copy over the support files used: Configuration.xml and jaxb.properties. Those will be important at run time for marshalling and unmarshalling.
You'll find no substitute for a good build tool. It saves you the hassle of dealing with classpath issues over and over again, as well as remembering specific command-line options. This article deals with several tasks, each of which can be handled automatically by Ant. I want to take a little time to let you see my Ant files, so you can incorporate Ant into your own build environment. This also allows me to largely skip this detail in later articles (and cover more meat -- always good, right?).
First, you'll want to use JaxMe's schema compiler/class generation tool, represented by the
org.apache.ws.jaxme.generator.Generator interface. So you could conceivably use Ant's
java target to fire up a specific instance of this interface. However, that's a bit messy -- you are hard-coding in an implementation, and you have to mess with your build file to change that implementation. You could define the implementation class as a property, but that's still messing with fairly low-level coding issues. Fortunately for those of you using Ant, JaxMe includes an Ant
taskdef (task definition) for inserting class generation into your build file, and it takes care of all of these details for you. Just tell Ant you've got a custom task definition, as shown in Listing 4.
Listing 4. Ant taskdef for JaxMe
By including this fragment in an Ant build file, it's trivial to generate classes. You can use the XML shown in Listing 5 to do just that.
Listing 5. Generating classes
With the classes generated, you now need to compile them and copy over your support files. Listing 6 takes care of this task, and even deals with classpath issues.
Listing 6. Compiling classes
As obvious as this should be, it's often helpful to be able to clean up what you've done and start from scratch. While this isn't anything JaxMe-specific, it is worth looking at. The typical way to handle this is through a target called
clean, as shown in Listing 7.
Listing 7. Cleaning up
Listing 8 is a larger Ant file that puts all these elements together. It's actually the file I've used throughout, so it works great for everything described here.
Listing 8. Ant taskdef for JaxMe
You need to change the value of the
lib.jaxme property, and then you're ready to go. In this case, you could simply type
ant generate to generate the classes from your schema. You'll want to keep utilities like this around (as well as Ant), as it makes compiling and handling tricky class paths a piece of cake.
With a solid understanding of how JaxMe handles class generation, you can easily get started converting to and from XML. I'll tackle that in the next article, and then move on to some JaxMe-specific features like working with databases. Until then, try messing around with mapping and Configuration.xml -- you'll have a good time (probably breaking things once or twice) and really get a handle on how JaxMe works. While you're doing that, I'll be working on the next installment -- see you then!
Information about download methods
- Participate in the discussion forum.
- Download the code for student.xsd and build.xml.
- Visit the JaxMe Web site to learn more about this new API.
- Visit the Java Architecture for XML Binding (JAXB) page.
- Check out the Apache Incubator, where new and ingenious projects like JaxMe are coming online all the time.
- Browse for books on these and other technical topics.
- Look at several XML data binding approaches using code generation from W3C XML Schema or DTD grammars for XML documents in Dennis Sosnoski's article "Data binding, Part 1: Code generation approaches -- JAXB and more" (developerWorks, January 2003).
- Obtain text parsing utilities from the Jakarta Commons package.
- Get the scoop on Sun's XML APIs from their Web site.
- Learn how to use JAXB to develop enterprise applications with WebSphere Studio Application Developer V5.1 in this article by Tilak Mitra (developerWorks, February 2004).
- Discover more data binding resources on the developerWorks XML and Java technology zones.
- Find out how you can become an IBM Certified Developer in XML and related technologies. to the EJBoss project, an open source EJB application server, and Cocoon, an open source XML Web-publishing engine. | http://www.ibm.com/developerworks/xml/library/x-pracdb4/ | crawl-003 | refinedweb | 2,345 | 63.09 |
This action might not be possible to undo. Are you sure you want to continue?
1120-REIT
Name Please Type or Print
U.S. Income Tax Return for Real Estate Investment Trusts
For calendar year 1993 or tax year beginning , 1993, ending , 19
OMB No. 1545-1004
Department of the Treasury Internal Revenue Service
Instructions are separate. See page 1 for Paperwork Reduction Act Notice.
C Employer identification number
A Year of REIT status election
Number, street, and room or suite no. (If a P.O. box, see page 5 of instructions.)
D Date REIT established
B
Check if a Personal holding company (Attach Sch. PH)
City or town, state, and ZIP code
E Total assets (see instructions)
F Check applicable box(es):
(1)
Final return
(2)
Change of address
(3)
Amended return
$
Part I—Real Estate Investment Trust Taxable Income (See instructions.) Income (EXCLUDING income required to be reported in Part II or in Part IV)
1 2 3 4 5 6 7 8 9 10a 11 12 13 14 15 16 17 18 19 20 21 Dividends Interest Gross rents from real property Other gross rents Capital gain net income (attach Schedule D (Form 1120)) Net gain or (loss) from Form 4797, Part II, line 20 (attach Form 4797) Other income (see instructions—attach schedule) Total income. Add lines 1 through 7 Compensation of officers c Balance Salaries and wages b Less jobs credit Repairs and maintenance Bad debts Rents Taxes and licenses Interest Depreciation (attach Form 4562) Advertising Other deductions (attach schedule) Total deductions. Add lines 9 through 18 Taxable income before net operating loss deduction, total deduction for dividends paid, and section 857(b)(2)(E) deduction. Subtract line 19 from line 8 21a Less: a Net operating loss deduction (see instructions) 21b b Total deduction for dividends paid (Schedule A, line 6) c Section 857(b)(2)(E) deduction (Schedule J, line 3c) 21c 1 2 3 4 5 6 7 8 9 10c 11 12 13 14 15 16 17 18 19 20
Deductions (EXCLUDING deductions directly connected with income required to be reported in Part II or in Part IV)
21d 22 23
Tax and Payments
Real estate investment trust taxable income. Subtract line 21d from line 20 Total tax (Schedule J, line 9) 24 Payments: a 1992 overpayment credited to 1993 24a 24b b 1993 estimated tax payments ) d Bal 24d c Less 1993 refund applied for on Form 4466 24c ( 24e e Tax deposited with Form 7004 24f f Credit from regulated investment companies (attach Form 2439) 24g g Credit for Federal tax paid on fuels (attach Form 4136) 25 Estimated tax penalty (see instructions). Check if Form 2220 is attached 26 Tax due. If line 24h is smaller than the total of lines 23 and 25, enter amount owed 27 Overpayment. If line 24h is larger than the total of lines 23 and 25, enter amount overpaid 28 Enter amount of line 27 you want: Credited to 1994 estimated tax Refunded 22 23
24h 25 26 27 28
Please Sign Here
Paid Preparer’s Use of officer Preparer’s signature Firm’s name (or yours if self-employed) and address Cat. No. 64114F
Date Date
Title Preparer’s social security number Check if selfemployed E.I. No. ZIP code Form
1120-REIT
(1993)
Form 1120-REIT (1993)
Page
2
Part II—Tax on Net Income From Foreclosure Property
(As defined in section 856(e)) (Caution: See instructions before completing this part.) 1 2 3 4 5 6 Net gain or (loss) from the sale or other disposition of foreclosure property described in section 1221(1) (attach schedule) Gross income from foreclosure property (attach schedule) Total income from foreclosure property. Add lines 1 and 2 Deductions directly connected with the production of income shown on line 3 (attach schedule) Net income from foreclosure property. Subtract line 4 from line 3
Tax on net income from foreclosure property. Multiply line 5 by 35%. Enter here and on Schedule J, line 3b
1 2 3 4 5 6
Part III—Tax Imposed Under Section 857(b)(5) for Failure To Meet Certain Source-of-Income Requirements (Caution: See instructions.)
1a Enter total income from Part I, line 8 1b Enter total income from foreclosure property from Part II, line 3 Total. Add lines 1a and 1b Multiply line 1c by 95% Enter income on line 1c from sources referred to in section 856(c)(2) Subtract line 3 from line 2. (If zero or less, enter -0-.) Multiply line 1c by 75% Enter income on line 1c from sources referred to in section 856(c)(3) Subtract line 6 from line 5. (If zero or less, enter -0-.) Enter the greater of line 4 or line 7. (If line 8 is zero, do not complete the rest of Part III.) Enter the amount from Part I, line 20 Enter the net capital gain from Schedule D (Form 1120), line 12 Subtract line 10 from line 9 12a Enter total income from Part I, line 8 Enter the net short-term capital gain from Schedule D (Form 1120), 12b line 5. (If line 5 is a loss, enter -0-.) c Add lines 12a and 12b 13 Enter capital gain net income from Part I, line 5 14 Subtract line 13 from line 12c 15 Divide line 11 by line 14. Carry the result to five decimal places 1a b c 2 3 4 5 6 7 8 9 10 11 12a b 16 Section 857(b)(5) tax. Multiply line 8 by line 15. Enter here and on Schedule J, line 3c
1c 2 3 4 5 6 7 8 9 10 11
12c 13 14 15 16
Part IV—Tax on Net Income From Prohibited Transactions (See instructions.)
1 2 3 4 Gain from the sale or other disposition of property Deductions directly connected with the production of income shown on line 1 Net income from prohibited transactions. Subtract line 2 from line 1 Tax on net income from prohibited transactions. Multiply line 3 by 100%. Enter here and on Schedule J, line 3d 1 2 3 4
Schedule A
1
Deduction for Dividends Paid (See instructions.)
2 3 4 5 6
Dividends paid (other than dividends paid after the end of the tax year). Do not include dividends considered paid in the preceding tax year under section 857(b)(8) or 858(a), or deficiency dividends as defined in section 860 Dividends paid in the 12-month period following the close of the tax year under a section 858(a) election to treat the dividends as paid during the tax year Dividends declared in October, November, or December deemed paid on December 31 under section 857(b)(8). (See instructions.) Consent dividends (attach Forms 972 and 973) Total dividends paid. Add lines 1 through 4 Total deduction for dividends paid. If there is net income from foreclosure property on line 5, Part II, see instructions for limitation on the deduction for dividends paid. Otherwise, enter total dividends paid from line 5 here and on line 21b, page 1
1 2 3 4 5
6
Form 1120-REIT (1993)
Page
3
Schedule J
Tax Computation (See instructions.)
1 Check if the REIT is a member of a controlled group (see sections 1561 and 1563) 2a If the box on line 1 is checked, enter the REIT’s share of the $50,000, $25,000, and $9,925,000 taxable income brackets (in that order): (1) $ (2) $ (3) $ b Enter the REIT’s share of: (1) additional 5% tax (not more than $11,750) $ (2) additional 3% tax (not more than $100,000) $ 3a 3a Tax on REIT taxable income 3b b Tax from Part II, line 6, page 2 3c c Tax from Part III, line 16, page 2 3d d Tax from Part IV, line 4, page 2 e Income tax. Add lines 3a through 3d 4a 4a Foreign tax credit (attach Form 1118) 4b b Nonconventional source fuel credit QEV credit (attach Form 8834) c General business credit. Enter here and check which forms are attached: Form 3468 Form 5884 Form 3800 Form 6765 Form 6478 Form 8586 4c Form 8830 Form 8826 Form 8835 4d d Credit for prior year minimum tax (attach Form 8827) e Total credits. Add lines 4a through 4d 5 Subtract line 4e from line 3e 6 Personal holding company tax (attach Schedule PH (Form 1120)) 7 Recapture taxes. Check if from: Form 4255 Form 8611 8 Alternative minimum tax (attach Form 4626) 9 Total tax. Add lines 5 through 8. Enter here and on line 23, page 1
3e
4e 5 6 7 8 9
Yes No
Schedule K
Other Information (See instructions.)
Yes No
1 Check method of accounting: a Cash b Accrual Other (specify) c 2 Did the REIT at the end of the tax year own, directly or indirectly, 50% or more of the voting stock of a domestic corporation? (For rules of attribution, see section 267(c).) If “Yes,” attach a schedule showing: (a) name and identifying number, (b) percentage owned, and (c) taxable income or (loss) before NOL and special deductions of such corporation for the tax year ending with or within your tax year. Is the REIT a subsidiary in a parent-subsidiary controlled group? If “Yes,” enter the employer identification number and name of the parent corporation Did any individual, partnership, corporation, estate, or trust at the end of the tax year own, directly or indirectly, 50% or more of the REIT’s voting stock? (For rules of attribution, see section 267(c).)
6
Was the REIT a U.S. shareholder of any controlled foreign corporation? (See sections 951 and 957.) If “Yes,” attach Form 5471 for each such corporation. Enter number of Forms 5471 attached At any time during the 1993 calendar year, did the REIT have an interest in or a signature or other authority over a financial account in a foreign country (such as bank account, securities account, or other financial account)? If “Yes,” the REIT may have to file Form TD F 90-22.1. If “Yes,” enter name of the foreign country Was the REIT the grantor of, or transferor to, a foreign trust that existed during the current tax year, whether or not the REIT has any beneficial interest in it? If “Yes,” the REIT may have to file Forms 926, 3520, or 3520-A.
7
3
8
4
9
If “Yes,” attach a schedule showing name and identifying number. (Do not include any information already entered in 3 above.) Enter percentage owned 5 Did one foreign person at any time during the tax year own, directly or indirectly, at least 25% of: (a) the total voting power of all classes of stock of the REIT entitled to vote, or (b) the total value of all classes of stock of the REIT? If “Yes,” a Enter percentage owned b Enter owner’s country c The REIT may have to file Form 5472. Enter number of Forms 5472 attached
10
During this tax year, did the REIT pay dividends (other than stock dividends and distributions in exchange for stock) in excess of the REIT’s current and accumulated earnings and profits? (See sections 301 and 316.) If “Yes,” file Form 5452. Check this box if the REIT issued publicly offered debt instruments with original issue discount If checked, the REIT may have to file Form 8281. Enter the amount of tax-exempt interest received or accrued $ during the tax year Enter the available NOL carryover from prior tax years. (Do not reduce it by any deduction on line 21a.) $
11
12
Form 1120-REIT (1993)
Page
4
Schedule L
1 2a b 3 4 5 6 7 8 9a b 10 11a b 12 13 14 15 16 17 18 19 20
Balance Sheets
Assets
(a)
Beginning of tax year (b) (c)
End of tax year (d)
Cash Trade notes and accounts receivable Less allowance for bad debts U.S. government obligations Tax-exempt securities (see instructions) Other current assets (attach schedule) Loans to stockholders Mortgage and real estate loans Other investments (attach schedule) Buildings and other depreciable assets Less accumulated depreciation Land (net of any amortization) Intangible assets (amortizable only) Less accumulated amortization Other assets (attach schedule) Total assets Liabilities and Stockholders’ Equity Accounts payable
Mortgages, notes, bonds payable in less than 1 year
(
)
(
)
(
)
(
)
(
)
(
)
Other current liabilities (attach schedule) Loans from stockholders
Mortgages, notes, bonds payable in 1 year or more
21 22 Retained earnings—Appropriated (attach schedule) 23 Retained earnings—Unappropriated ( ) ( ) 24 Less cost of treasury stock 25 Total liabilities and stockholders’ equity Note: Schedules M-1 and M-2 do not have to be completed if total assets on Schedule L, line 13, column (d) are less than $25,000.
Other liabilities (attach schedule) Capital stock: a Preferred stock b Common stock Paid-in or capital surplus
Schedule M-1
Reconciliation of Income (Loss) per Books With Income per Return
7 Income recorded on books this year not included on this return (itemize): Tax-exempt interest $ 8 Deductions on this return not charged
against book income this year (itemize):
1 Net income (loss) per books 2a Federal income tax (Schedule J, line 9) $ ) b Less: Section 857(b)(5) tax $ ( c Balance 3 Excess of capital losses over capital gains 4 Income subject to tax not recorded on books this year (itemize):
5
Expenses recorded on books this year not deducted on this return (itemize): a Depreciation $ b Section 4981 tax $ c Travel and entertainment $
$ a Depreciation b Net operating loss deduction $ (page 1, line 21a) c Deduction for dividends paid (page 1, line 21b) $
9 10 11 12
6 1 2 3
Add lines 1 through 5 Balance at beginning of year Net income (loss) per books Other increases (itemize):
Net income from foreclosure property Net income from prohibited transactions Add lines 7 through 10 REIT taxable income (line 22, page 1)— line 6 less line 11
Schedule M-2
Analysis of Unappropriated Retained Earnings per Books (line 23, Schedule L)
5 Distributions: a Cash b Stock c Property 6 Other decreases (itemize): 7 Add lines 5 and 6 8 Balance at end of year (line 4 less line 7)
Printed on recycled paper
4
Add lines 1, 2, and 3
This action might not be possible to undo. Are you sure you want to continue?
We've moved you to where you read on your other device.
Get the full title to continue reading from where you left off, or restart the preview. | https://www.scribd.com/document/541165/US-Internal-Revenue-Service-f1120rei-1993 | CC-MAIN-2017-04 | refinedweb | 2,435 | 52.12 |
- Windows
- 'wm geometry . +0+0' will move the main toplevel so that it nestles into the top-left corner of the screen, with the left border and titlebar completely visible.
- MacOS X
- 'wm geometry . +0+0' will move the main toplevel so that it nestles into the top-left corner of the screen, with the left border completely visible. The titlebar is also within the screen, but it (and possibly a few pixel rows of contents) is completely obscured by the menu bar.
- X11
- 'wm geometry . +0+0' will move the main toplevel so that its contents are nestled into the top-left corner of the screen, but with the left border and titlebar completely offscreen and invisible.
proc decoration + menubar (if it exists) thickness set titleMenubarThickness [expr {$contentsTop - $decorationTop}] return [list $titleMenubarThickness $decorationThickness] }Is only useful on MacOS X/Windows (where it returns the thickness of the titlebar/menubar and the thickness of the left window border). On X11, it simply returns 0 0.Difference 3: geometry of withdraw windowsOn immediate creation:
toplevel .tt ; wm withdraw .tt ; wm geometry .ttwill return 1x1+0+0 on all platforms, but:
toplevel .tt ; wm withdraw .tt ; update; wm geometry .ttwill return 1x1+0+0 on X11, 200x200+198+261 (or something similar) on Windows, and 1x1+45+85 (or something similar) on OS X. Similarly, winfo height .tt will return (of course) 1 on x11 and 200 on Windows.A constant here is that winfo reqheight .tt will return 200 (or equivalent) on all platforms. So there is at least a workaround for this difference in behaviour....
[laterne] - 2015-06-19 08:09:45The regular expression in Difference 2 should be{^([0-9]+)x([0-9]+)([+-])(\-?[0-9]+)([+-])(\-?[0-9]+)$}to handle negative positions and orientations | http://wiki.tcl.tk/11502 | CC-MAIN-2018-05 | refinedweb | 294 | 59.9 |
Compiling and Linking
Two steps are required to build Go programs: compiling and linking. (Since we are assuming the use of the gc compiler, readers using
gccgo will need to follow the compile and link process described in golang.org/doc/gccgo_install.html. Similarly, readers using other compilers will need to compile and link as per their compiler's instructions.) Both compiling and linking are handled by the
go tool, which can not only build local programs and packages, but can also fetch, build, and install third-party programs and packages.
For the go tool to be able to build local programs and packages, there are three requirements. First, the Go bin directory ($GOROOT/bin or %GOROOT%\bin) must be in the path. Second, there must be a directory tree that has an src directory under which the source code for the local programs and packages resides. For example, the examples unpack to goeg/src/hello, goeg/src/bigdigits, and so on. Third, the directory above the src directory must be in the GOPATH environment variable. For example, to build the hello example using the go tool, we must do this:
$ export GOPATH=$HOME/goeg $ cd $GOPATH/src/hello $ go build
In both cases we assume that PATH includes $GOROOT/bin or %GOROOT%\bin. Once the go tool has built the program, we can run it. By default the executable is given the same name as the directory it is in (e.g., hello on Unix-like systems and hello.exe on Windows). Once built, we can run the program in the usual way.

$ ./hello
Hello World!
Note that we do not need to compile—or even explicitly link—any other packages (even though as we will see, hello.go uses three standard library packages). This is another reason why Go programs build so quickly.
If we have several Go programs, it would be convenient if all their executables could be in a single directory that we could add to our PATH. Fortunately, the go tool supports this as follows:

$ export GOPATH=$HOME/goeg
$ cd $GOPATH/src/hello
$ go install
The go install command does the same as go build only it puts the executable in a standard location ($GOPATH/bin or %GOPATH%\bin). This means that by adding a single path ($GOPATH/bin or %GOPATH%\bin) to our PATH, all Go programs that we install will conveniently be in the PATH.
In addition to the examples given here, we are likely to want to develop our own Go programs and packages in our own directory. This can easily be accommodated by setting the GOPATH environment variable to two (or more) colon-separated paths (semicolon-separated on Windows).
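As a sketch of that two-path setup (the $HOME/goeg and $HOME/mygo directories are made-up examples, not paths taken from the book):

```shell
# Book examples in one tree, our own code in another (assumed layout).
export GOPATH=$HOME/goeg:$HOME/mygo
# Add both bin directories so installed programs are on the PATH:
export PATH=$PATH:$HOME/goeg/bin:$HOME/mygo/bin
```

With this in place, go install run from either tree puts its executable under that tree's bin directory.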
Although Go uses the go tool as its standard build tool, it is perfectly possible to use make or some of the modern build tools, or to use alternative Go-specific build tools, or add-ons for popular IDEs.
Hello Who?
Now that we have seen how to build the hello program we will look at its source code. Here is the complete hello program (in file hello/hello.go):

// hello.go
package main

import (
    "fmt"
    "os"
    "strings"
)

func main() {
    who := "World!"
    if len(os.Args) > 1 { /* os.Args[0] is "hello" or "hello.exe" */
        who = strings.Join(os.Args[1:], " ")
    }
    fmt.Println("Hello", who)
}
Go uses C++-style comments: // for single-line comments that finish at the end of the line and /* ... */ for comments that can span multiple lines. It is conventional in Go to mostly use single-line comments, with spanning comments often used for commenting out chunks of code during development.
Every piece of Go code exists inside a package, and every Go program must have a main package with a main() function that serves as the program's entry point, that is, the function that is executed first. (In fact, Go packages may also have init() functions that are executed before main().) Notice that there is no conflict between the name of the package and the name of the function.
Go operates in terms of packages rather than files. This means that we can split a package across as many files as we like, and from Go's point of view, if they all have the same package declaration, they are all part of the same package and no different than if all their contents were in a single file. Naturally, we can also break our applications' functionality into as many local packages as we like, to keep everything neatly modularized.
The import statement imports three packages from the standard library. The fmt package provides functions for formatting text and for reading formatted text, the os package provides platform-independent operating-system variables and functions, and the strings package provides functions for manipulating strings.
Go's fundamental types support the usual operators (for example, + for numeric addition and + for string concatenation), and the Go standard library supplements these by providing packages of functions for working with the fundamental types, such as the strings package imported here. It is also possible to create our own custom types based on the fundamental types and to provide our own methods—that is, custom type-specific functions—for them.
The reader may have noticed that the program has no semicolons, that the imports are not comma-separated, and that the if statement's condition does not require parentheses. In Go, blocks, including function bodies and control structure bodies (so, for if statements and for loops), are delimited using braces. Indentation is used purely to improve human readability. Technically, Go statements are separated by semicolons, but these are inserted by the compiler, so we don't have to use them ourselves unless we want to put multiple statements on the same line. No semicolons and fewer commas and parentheses give Go programs a lighter look and require less typing.
Go functions and methods are defined using the func keyword. The main package's main() function always has the same signature—it takes no arguments and returns nothing. When main.main() finishes the program will terminate and return 0 to the operating system. Naturally, we can exit whenever we like and return our own choice of value.
The first statement in the main() function (using the := operator) is called a short variable declaration in Go terminology. Such a statement both declares and initializes a variable at the same time. Furthermore, we don't need to specify the variable's type because Go can deduce that from the initializing value. So in this case we have declared a variable called who of type string, and thanks to Go's strong typing we may only assign strings to who.
The os.Args variable is a slice of strings. Go makes use of arrays, slices, and other collection data types but for these examples it is sufficient to know that a slice's length can be determined using the built-in len() function and its elements can be accessed using the [] index operator using a subset of the Python syntax. In particular, slice[n] returns the slice's nth element (counting from zero), and slice[n:] returns another slice which has the elements from the nth element to the last element. In the collections chapter we will see the full generality of Go's syntax in this area. In the case of os.Args, the slice should always have at least one string (the program's name), at index position 0. (All Go indexing is zero-based.)
If the user has entered one or more command-line arguments the if condition is satisfied and we set the who string to contain all the arguments joined up as a single string. In this case, we use the assignment operator (=), since if we used the short variable declaration operator (:=) we would end up declaring and initializing a new who variable whose scope was limited to the if statement's block. The strings.Join() function takes a slice of strings and a separator (which could be empty, that is, ""), and returns a single string consisting of all the slice's strings with the separator between each one. Here we have joined them using a single space between each.
Finally, in the last statement, we print Hello, a space, the string held in the who variable, and a newline. The fmt package has many different print variants, some like fmt.Println() which will neatly print whatever they are given, and others like fmt.Printf() that use placeholders to provide very fine control over formatting.
07 December 2010 11:19 [Source: ICIS news]
DUBAI (ICIS)--
“Toluene [supply] will be long in Asia,” the trader said at the sidelines of the 5th Gulf Petrochemicals and Chemicals Association (GPCA) forum in
This oversupply condition would likely keep toluene prices depressed in 2011, the trader said.
China is the largest importer of toluene in Asia.
Notwithstanding the increase in domestic production, China may also see an influx of US material next year, the trader added.
Strong imports amid weak demand had kept toluene inventory levels in eastern China at 120,000–150,000 tonnes since March this year, market sources said.
#include <MetaData.h>
The MetaData class lies at the core of Alembic's notion of "Object and Property Identity". It is a refinement of the idea of Protocol (for Objects) and Interpretation (for Properties) in OpenGTO. It is, essentially, an UNORDERED, UNIQUE DICTIONARY of strings. It turns itself into a regular string for serialization and deserialization. This is not a virtual class, nor is it intended to be used as a base for derivation. It is explicitly declared and implemented as part of the AbcCoreAbstract library. It is composed (not inherited) from Alembic::Util::TokenMap. In order to not have duplicated (and possibly conflicting) policy implementation, we present this class here as a MOSTLY-WRITE-ONCE interface, with selective exception throwing behavior for failed writes.
Definition at line 59 of file MetaData.h.
const_iterator typedef this dereferences to a const value_type reference.
Definition at line 89 of file MetaData.h.
Const reference type This is what the iterators dereference to.
Definition at line 85 of file MetaData.h.
const_reverse_iterator typedef this dereferences to a const value_type instance.
Definition at line 93 of file MetaData.h.
Data type. Data is associated with a key, with each key being unique.
Definition at line 76 of file MetaData.h.
Key type. Keys are unique within each MetaData instance.
Definition at line 72 of file MetaData.h.
Our internals are handled by a TokenMap, which we expose through these typedefs.
Definition at line 68 of file MetaData.h.
Value-type This is what the MetaData class "contains", when viewed as a standard container.
Definition at line 81 of file MetaData.h.
Default constructor creates an empty dictionary. ...
Definition at line 101 of file MetaData.h.
Copy constructor copies another MetaData. ...
Definition at line 105 of file MetaData.h.
append appends the given MetaData to this class. Duplicates are overwritten.
Definition at line 211 of file MetaData.h.
append appends the given MetaData to this class. Duplicate keys are ignored, and the original value remains untouched
Definition at line 222 of file MetaData.h.
append appends the given MetaData to this class. Duplicate values will cause an exception to be thrown.
Definition at line 236 of file MetaData.h.
Returns a const_iterator corresponding to the beginning of the MetaData or the end of the MetaData if empty.
Definition at line 148 of file MetaData.h.
Deserialization will replace the contents of this class with the parsed contents of a string. It will just clear the contents first. It will throw an exception if the string is mal-formed.
Definition at line 123 of file MetaData.h.
Returns a const_iterator corresponding to the end of the MetaData.
Definition at line 152 of file MetaData.h.
get returns the value, or an empty string if it is not set. ...
Definition at line 192 of file MetaData.h.
getRequired returns the value, and throws an exception if it is not found.
Definition at line 199 of file MetaData.h.
The matches function returns true if each of the fields in the passed iMetaData are found in this instance and have the same values. it returns false otherwise. This is not the same as "equals", because this MetaData may contain fields that are not included in the passed iMetaData. This should be the default "matching" function.
Definition at line 256 of file MetaData.h.
the matchesExactly function returns true if we're exactly equal in every field. This is a rarely useful concept with MetaData. It is for this reason that we explicitly do not overload the == operator.
Definition at line 288 of file MetaData.h.
The matchesExisting function returns true if, for each of the fields in the passed iMetaData, we have either no entry, or the same entry.
Definition at line 271 of file MetaData.h.
Assignment operator copies the contents of another MetaData instance.
Definition at line 109 of file MetaData.h.
Returns a const_reverse_iterator corresponding to the beginning of the MetaData or the end of the MetaData if empty.
Definition at line 156 of file MetaData.h.
Returns an const_reverse_iterator corresponding to the end of the MetaData.
Definition at line 160 of file MetaData.h.
Serialization will convert the contents of this MetaData into a single string.
Definition at line 132 of file MetaData.h.
set lets you set a key/data pair. This will silently overwrite an existing value.
Definition at line 168 of file MetaData.h.
setUnique lets you set a key/data pair, but throws an exception if you attempt to change the value of an existing field. It is fine if you set the same value.
Definition at line 177 of file MetaData.h.
Definition at line 140 of file MetaData.h. | https://www.sidefx.com/docs/hdk/class_alembic_1_1_abc_core_abstract_1_1_a_l_e_m_b_i_c___v_e_r_s_i_o_n___n_s_1_1_meta_data.html | CC-MAIN-2022-27 | refinedweb | 781 | 61.12 |
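As a rough usage sketch based only on the members documented above (the header path, namespace, and key/value strings are assumptions for illustration, not taken from Alembic's own examples, and this will not build without the Alembic library):

```cpp
#include <string>
#include <Alembic/AbcCoreAbstract/MetaData.h>

int main() {
    Alembic::AbcCoreAbstract::MetaData md;
    md.set("name", "myMesh");           // set() silently overwrites an existing value
    md.setUnique("schema", "polyMesh"); // setUnique() throws if a different value exists

    std::string text = md.serialize();  // single-string form for storage
    Alembic::AbcCoreAbstract::MetaData copy;
    copy.deserialize(text);             // clears, then parses; throws if malformed

    // matches(): every field of the argument is present in md with the same
    // value (md may still hold extra fields and matches() remains true).
    return md.matches(copy) ? 0 : 1;
}
```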
:
Oh yeah, here's my code so far (don't yell at me for the goto, I only used it once. Feel free to toss a way to not use it my way, though.)
Code:
#include <iostream>
#include <iomanip>
using namespace std;
int main() {
double bill, paid, dollars, quarters, dimes, nickels, pennies; // note: don't compute the change here; bill and paid haven't been read yet
cout << "Welcome to my change-maker program! " << endl;
cout << "To begin, press ENTER. " << endl;
getchar();
restart:
cout << "\n\nEnter how much the bill is. " << endl;
cout << "Do not use a $ sign. Just enter the decimal number. " << endl;
cin >> bill;
cout << "Now enter how much cash you paid. " << endl;
cin >> paid;
if (paid < bill) {
cout << "Uh-oh! You need to pay more cash. You " << endl;
cout << "haven't paid the whole bill yet. " << endl;
cout << "You still owe " << setprecision(2) << fixed << (bill - paid) << ". " << endl;
goto restart;
}
if (bill <= paid) {
cout << "You should get " << (paid - bill) << " in change. " << endl;
cout << "The most efficient way to get your " << (paid - bill) << " would " << endl;
cout << "be to get: " << endl;
//need to fit in change. narrow down.
}
getchar();
return (0);
} | http://cboard.cprogramming.com/cplusplus-programming/135450-change-maker-problems-%25-printable-thread.html | CC-MAIN-2015-14 | refinedweb | 179 | 93.64 |
in reply to Noodling with natural sorting in perl6
I played around and it seems that creating the copies is where a lot of time is lost in the second version causeing it to be slow. So i moved the processing out of the map and into a function, the cached results out of it, and used that for the sort. Makes it appear about the same speed as the first sort, not positive the best way to benchmark in perl6 yet though.
my %cache;
sub pre_process($word) {
unless %cache.exists($word) {
%cache{$word} = $word.subst(/(\d+)/, -> $/ { sprintf( "%s%c%s", +0, $0.chars, $0) }, :g).lc;
}
return %cache{$word};
}
sub natural_cmp ($a, $b) {
return (pre_process($a) cmp pre_process($b));
}
A very nice modification, one that should have been obvious to me in retrospect. You're basically implementing an orcish maneuver.
I'm back to the UWP app after not too long of a break; and we really just need to get some things happening.
I guess I'm more working to plan out the flow of the project. This is now fairly similar to a project for work I'll be helping lay the foundations of.
I've got the idea behind the core network functionality. Honestly, for the next app; I'm going to be concerned if we use the exact same, or
Request object; or develop new - whatever. That it's a clean easy REST service is what matters.
We have that kinda clean rest implementation; with behind the scene black magic. We know what TYPE of objects we need and their structure.
Do we ... build those objects; without clear need? Would the be "real" objects? I'm not going to expose their internals until it's needed? So we build the layer above? And on and on and on... IIRC, I went through a similar phase for Android - I've got the network; I can fake the network responses... I need the functionality driving the code - In this case it's the UI. I need more UI. I don't like UI. ... FINE... I'll create an item.
Except I can't... not entirely. I need to update windows first. At the cheer studio; I refuse. OK; gonna work on the VBM layer (deal with it, I'm still calling it that) and the clean architecture.
I should probably put things in a ... in a list... which is mostly ui... GAH... computers.
time passes
I've worked on the UI a little just to figure out how to get my Template Control extending the TextBox to display. I finally got it.
I had a co-worker's help with that. He used Blend which I might pull up for more UI things, or maybe just animation stuff; don't know yet. He's more familiar with it, so will see when he suggests it (probably always).
The issue was that I created my Template Control, extended TextBox, applied my mix-in IText, and had this:

public sealed class QgTextBox : TextBox, IText
{
    public QgTextBox()
    {
        this.DefaultStyleKey = typeof(QgTextBox);
    }
}
Which wouldn't display. I might have included it in my earlier post, and anyone familiar with UWP/XAML/WHATEVER might be going; Yes, that won't show. OBVIOUSLY...
There is no DefaultStyleKey for QgTextBox.

If I change it to

this.DefaultStyleKey = typeof(TextBox);

This shows.
Why would I apply the TextBox style when it inherits from TextBox, so it should already HAVE the style applied?

What if I do

public sealed class QgTextBox : TextBox, IText {}

What if it works!

Then I'll be excited!

This ends up doing the same as the Java code; just a wrapper to enable the mix-in. No functionality should be implemented here.

It works. Now I can move onto implementing... the ... UI? for a story.
SHOW A TITLE!
We shall display a title!
LIES!
We shall display a count!
This is the simplest step to get something from the network layer to the UI. We're not going to be writing the high level test and then implementing every step; we're going to TDD this. Probably a lot like was done for Android; but I don't recall exactly how that went down. Also not sure how this will go down.
It's an exciting process. ... and I screw it up. Right off the bat.
I have a fake class that I'm using for my MainPageBridgeTests: a quick stub exposing TxtStoryCount as an IText, thrown together to see if IText worked. I'll leave it until it should go away.
I started a test for showing the item count... jumped right into updating FakeMainPage. The test wasn't driving the code. I've been writing this blog and working on pounding TDD into my head for five months at this point... Screw up the process a lot.
At least I'm catching myself sooner. Anyway... back to a TEST...
I've implemented a test (a couple actually, I still do the minimum step when I can) that sets the Count to a value!
A few days ago; Steve and I were doing a kata and I had us back out a step to then do the smallest possible (hard coded value) then do a new test to force the code... then delete the second test.
I do this to myself, Steve - not just to annoy you.
The test I wrote to test displaying a count:
[TestMethod]
public void ShouldDisplayCountOfItems()
{
    FakeMainPage fakeMainPage = new FakeMainPage();
    MainPageBridge mainPageBridge = new MainPageBridge(fakeMainPage);
    mainPageBridge.DisplayItems(new Items(new ItemId[] { null, null, null }));
    fakeMainPage.TxtStoryCount.Text.Should().Be("3");
}
and the code in the Bridge to implement this
public void DisplayItems(Items items)
{
    IText item = _mainPage.Count();
    item.Text = $"{items.Count()}";
}
It's dead simple. If you've followed the Android code at all; you'll see that this has all come out pretty damn dead simple. I'm hoping to keep that trend for the work UWP app. They tend to have some extra bits that aren't as simple; but TDD will set you free!
I've got the Bridge in place. Time to bring in a Mediator. I'm not doing this arbitrarily. It's not to use the VBM; but because I have the View. Its only job is to know what control maps to the lower level's interface, MainPageBridge.IMainPageView. That's all it does. The View understands what widgets map to what display control.
The Bridge understands what data a display control has. I could argue for having a smaller class for EACH display control -> data mapping. I think that's a little bit of overkill. Though it'd be doing ONE thing and ONE thing well.
I might play with that later just to see what it looks like. It'd be a simple class, so ... maybe? That's what these projects are for. More importantly, I now have a possible refactor for when the class appears to be doing too much.
Anyway; The Bridge is doing one thing. We need the mediator to do another thing.
Remember from reading my write up of VBM; the Mediator is a layer more so than the Bridge or the View. These form a triangle smallest to largest; View->Bridge->Mediator for the number of classes that should be involved. The View should have 1 to N supporting Bridge classes. EACH Bridge should have 1 to M supporting Mediator classes. The Mediator layer itself should have 1 to Z classes handling the various interactions with the rest of the system.
In the end; I need a more complex project; Why Hello Work...; to test this against more robustly. So I shall.
Back to the code; I now need the Mediator to reach out and do things.
Mediator
I have a very basic test/mediator starting
[TestClass]
public class MainPageMediatorTests
{
    [TestMethod]
    public void ShouldGetItems()
    {
        MainPageMediator mainPageMediator = new MainPageMediator();
        mainPageMediator.DisplayItems(new Items(new ItemId[] { null, null, null, null }));
    }
}

public class MainPageMediator
{
    public void DisplayItems(Items items)
    {
        throw new System.NotImplementedException();
    }
}
Which doesn't currently assert. I need to pass in the Bridge. I know the double constructor ain't great; but it's the best solution I have. I am in C#, and I could use Unity (I think). Until it's needed, no thanks.
I'm using a real FakeMainPageView. Use real as much as you can; ensure as much functionality as you can. Mock/Fake what you must.
I've got the test in place to feed the Item count into the display text, and my daughter's cheer practice is over. So I'll end the work for now. I'm going to keep the post going until I have it operating off of fake network data.
time passes
I'm looking for what I was doing... helps if I have the correct stuff pulled...
Finally writing the test to pull from a faked network test! And it's getting happy path'd!
[TestMethod]
public async Task ShouldLoadItemsFromNetwork()
{
    FakeResponseHandler fakeResponseHandler = new FakeResponseHandler();
    fakeResponseHandler.AddFakeResponse(
        new Uri($"{HostUrl}/topstories.json"),
        new HttpResponseMessage(HttpStatusCode.OK)
        {
            Content = new StringContent(@"[{""id"":123},{""id"":1234},{""id"":12345}]")
        });
    HackerNewsAccess hackerNewsAccess = new HackerNewsAccess(fakeResponseHandler);
    FakeMainPageView fakeMainPageView = new FakeMainPageView();
    MainPageBridge mainPageBridge = new MainPageBridge(fakeMainPageView);
    MainPageMediator mainPageMediator = new MainPageMediator(mainPageBridge, hackerNewsAccess);
    await mainPageMediator.LoadItems();
    fakeMainPageView.TxtStoryCount.Text.Should().Be("3");
}
And it passes! We've TDD'd our way into full stack communication.
The other test in this class can be deleted. It's testing the same thing; with less integration.
[TestMethod]
public void ShouldGetItems()
{
    FakeMainPageView fakeMainPageView = new FakeMainPageView();
    MainPageBridge mainPageBridge = new MainPageBridge(fakeMainPageView);
    MainPageMediator mainPageMediator = new MainPageMediator(mainPageBridge, new HackerNewsAccess());
    mainPageMediator.DisplayItems(new Items(new ItemId[] { null, null, null, null }));
    fakeMainPageView.TxtStoryCount.Text.Should().Be("4");
}
This test also exposes the DisplayItems method, which doesn't need to be exposed now.
We can delete the test; and refactor the encapsulation to keep it hidden!
Initial
public class MainPageMediator
{
    private readonly HackerNewsAccess _hackerNewsAccess;
    private readonly MainPageBridge _mainPageBridge;

    public MainPageMediator(MainPageBridge mainPageBridge, HackerNewsAccess hackerNewsAccess)
    {
        _mainPageBridge = mainPageBridge;
        _hackerNewsAccess = hackerNewsAccess;
    }

    private void DisplayItems(Items items) => _mainPageBridge.DisplayItems(items);

    public async Task LoadItems() => DisplayItems((await _hackerNewsAccess.TopStories()).Body());
}
I haven't added the empty constructor yet to implement Double Constructor as there is no test driving it. I had ctor typed out to bring it to life via snippet; but then... there's no test. It's not a thing yet. Not making it happen.
I'll be expressionbody-ing the hell outta this project as I touch files. Until then... Done with this post for now! | https://quinngil.com/2017/09/10/uwp-hackernews-network-items/ | CC-MAIN-2020-29 | refinedweb | 1,620 | 68.87 |
Warning
Warning will not be deleted by a cache after validation unless a full response is sent.
- <warn-agent>
The name or pseudonym of the server or software adding the
Warningheader (might be "-" when the agent is unknown).
- <warn-text>
An advisory text describing the error.
- <warn-date>
A date. This is optional. If more than one
Warningheader is sent, include a date that matches the
Dateheader.
Warning codes
The HTTP Warn Codes registry at iana.org defines the namespace for warning codes.
Examples
Warning: 110 anderson/1.3.37 "Response is stale" Date: Wed, 21 Oct 2015 07:28:00 GMT Warning: 112 - "cache down" "Wed, 21 Oct 2015 07:28:00 GMT"
Specifications
Browser compatibility
cPige records an audio stream, separating into individual
"Artist - Track.mp3" files. It can also record on an
hour-by-hour basis.
WWW:
NOTE: FreshPorts displays only required dependencies information. Optional dependencies are not covered.
To install the port: cd /usr/ports/audio/cpige/ && make install clean
To add the package: pkg install cpige
No options to configure
Number of commits found: 21

- looking up the host name addresses on 64 bits platforms
  (use in_addr_t instead of long and check it against INADDR_NONE)
- adjust MASTERSITES and WWW
Please note that the project is dead upstream.
PR: 165481
Submitted by: Howard Goldstein
- Get Rid MD5 support
OK, you can laught:
Really update the port to 1.5
Submitted by: pav@
- Update to 1.5
- GTK GUI will be available in a month
Adjust MASTER_SITES to an other server of mine
- adjust the path on my backup server
- switch to my @FreeBSd.org address
Add an extra MASTER_SITE
PR: 96070
Submitted by: Ion-Mihai IOnut Tetcu <itetcu@people.tecnik93.com> (maintainer)
Requested by: krismail
Update to 1.4-2
PR: 94484
Submitted by: Ion-Mihai "IOnut" Tetcu <itetcu@people.tecnik93.com>
(maintainer)
Pass maintainer-ship to submitter of last patch [1]
PR: 93613 [1]
Discusses on: irc
- Update to 1.4
- Point WWW: to english version of author page
PR: 93613
Submitted by: Ion-Mihai "IOnut" Tetcu <itetcu@people.tecnik93.com>
- Update to 1.3-1
re-roll of tarball
Only in cpige-1.3-new/: LICENCE
Only in cpige-1.3-new/: Makefile.windows
diff -ru cpige/cpige.c cpige-1.3-new/cpige.c
--- cpige/cpige.c Fri Nov 25 10:08:13 2005
+++ cpige-1.3-new/cpige.c Wed Nov 30 07:06:12 2005
@@ -21,7 +21,10 @@
#include <fcntl.h>
#include <dirent.h>
#include <sys/stat.h>
-#include <regex.h>
+
+#ifndef WIN32
+ #include <regex.h>
Upgrade to 1.3
- Add some SHA256 checksums
Use MAKE_ARGS
Update to 1.2. In an email from the author:
Possibility to use cPige as a daemon was added. cPige now parses URLs
directly, rather than requiring the user to specify host port and
mountpoint. Statistics and a logfile were added.
Correct the PLIST_FILES variable name (dunno what I'd been
thinking there), bump the PORTREVISION, and assign to ports@.
Add cpige, a shoutcast stream ripper.
cPige records an audio stream, separating into individual
"Artist - Track.mp3" files. It can also record on an
hour-by-hour basis.
WWW:
in reply to The difference between my and local
EPP also suggests you use 'local' when messing with variables in another module's namespace, but I can't think of a RL situation where that could be justified - why not just scope a local variable? Perhaps someone could enlighten me?
%hash = ( one => "a\tb", two => chr(7) );
{
use Data::Dumper;
local $Data::Dumper::Useqq = 1;
local $Data::Dumper::Terse = 1;
print Dumper \%hash;
}
This way you temporarily replace the value of settings, while perl resets them back to their original state afterwards.
As you have probably seen, we can use a pointer to any data type, or even to a function. If you think about this for just a moment it makes sense. Everything in your program is loaded into memory. A pointer is simply an indicator of an address in memory. Your variables are places in memory, other pointers are referencing a place in memory, and functions are loaded in memory. Because all this is at some location in memory, it makes sense that you should be able to have a pointer variable that references that location in memory.
Because a structure is a complex data type, you can also have a pointer to a structure variable. In fact, pointers to structures are a commonly used technique. Recall from Chapter 7, when we first discussed structures, that a structure variable allows you to pass several pieces of data, and allows the function to return several pieces of data. A pointer to a structure allows you to point to that space in memory, just as you would point to any other variable’s space in memory.
There is one slight change when dealing with pointers to structures. Normally you access an element of a structure by giving the structure name, a dot (.), then the name of the element. For pointers to structures you must use an arrow (->) rather than a dot. Let's examine an example that should clarify this.
Step 1: Write the following code into your favorite text editor.
#include <iostream>
using namespace std;

struct account {
    int accountnum;
    float balance;
    float interestrate;
};

int main() {
    account myaccount;
    account *ptraccount;

    //set the balance of the structure
    myaccount.balance = 1000;

    //initialize the pointer
    ptraccount = &myaccount;

    //change the pointers balance
    ptraccount->balance = 2000;

    //print out the structures balance
    cout << myaccount.balance << "\n";

    return 0;
}
Step 2: Compile the code.
Step 3: Run the program. You should see something similar to Figure 9.5.
Figure 9.5: Pointers to structures.
It is important to note, in this example, that when you change an element of the pointer to the structure, you are actually changing the element of the structure itself. This is because, like all points, a structure pointer is simply redirecting any commands or operations to the address in memory it points to. | https://flylib.com/books/en/2.331.1.77/1/ | CC-MAIN-2019-35 | refinedweb | 377 | 64.3 |
What we expect from a Web site has changed dramatically over the last few years. In the early days of the Web, just finding a site with useful information was a thrill. Today we expect Web sites to be highly dynamic with advanced search capabilities, personalization, online ordering, and accurate shipment tracking functions — all accessible through an easy-to-use, visually appealing user interface. Developing this type of site requires the cooperation of many people with different skills. Perhaps the largest challenge is keeping the request-processing code and HTML markup separate so they can be worked on independently. JavaServer Pages (JSP) is a popular technology that can be used to accomplish this.
Why JSP?
Java has proven itself as a great language and platform for server-side applications. Java servlets first appeared in 1997 and have been embraced by all major Web servers as the alternative to Common Gateway Interface (CGI) scripts. As described in last month’s column (available online at), servlets are applications created by extending certain Java classes. They are managed by a Web container that provides the runtime environment and access to other resources. The container runs in a permanent process, giving servlets a performance advantage over CGI scripts, which can require a new process to be created for each request.
But there’s a problem with servlets: besides the request-processing code, a servlet also includes statements to emit the HTML elements for the response. This makes it virtually impossible for a Web designer without programming experience to modify the design of the Web application, since even a minor design changes requires help from a programmer.
JavaServer Pages () were added to the Java toolbox in 1999 to help solve this problem. A JSP is a text file (with a .jsp suffix) that includes standard HTML elements along with JSP elements (that look similar to HTML) to control the dynamic portions of the page. These can be things such as search results, shopping cart contents, or a delivery tracking number. The JSP elements map to Java methods that are called when Web server processes the request for the JSP. By placing the application logic into a simple HTML-like element, anyone familiar with HTML can add dynamic behavior to a Web page without needing to know Java programming. Conversely, a programmer can develop the application in Java without needing to know how the output will be presented.
While it may sound like JSP replaces servlets, the truth is that JSP is typically used in combination with servlets. In fact, there’s an even closer connection between JSP and servlets.
When the Web container receives the first request for a JSP, it actually converts the JSP into a servlet. This servlet is then compiled and executed by the container, and sends its response to the browser. Subsequent requests for the same JSP invoke the already-compiled servlet. When a JSP is modified, the next request for that page causes the container to translate and compile the modified file.
Let’s take a closer look at what a JSP file might look like.
JSP Elements
As mentioned previously, a JSP is a text file with a .jsp file extension to tell the server what it is. The file contains static content plus the JSP elements. The static content is called template text and can be anything the client can handle (HTML, WML, or text). Listing One shows a sample JSP file.
There are three types of JSP elements that can be used in a page: directive elements, scripting elements, and action elements.
DIRECTIVE ELEMENTS
A directive element has the form <%@ directive attr= “value” … %> and describes the page itself. These are things that are the same no matter when, or by whom, the page is requested.
There are three directive elements, on lines 1-3. The page directive on line 1 has two attributes that define the type of content (MIME type) the page will contain and an error page to return if there are any runtime errors.
Lines 2 and 3 contain the taglib directive. This directive shows that we wish to make use of a custom tag library. We’ll go into more detail on custom tag libraries when we cover action elements below. For now, just notice that each use of taglib has a different value for its prefix attribute.
SCRIPTING ELEMENTS
Scripting elements were the original way to add dynamic behavior in JSP, by allowing you to place Java code in the JSP file. There are three types of scripting elements:
Line 9 contains a scripting expression of Java code that creates a new java.util.Date object. The container executes the expression, creating the object. It then adds a string representation of the object to the response. This places the current date in the response.
You should avoid using too many scripting elements because it takes us back to the problem we set out to solve: a mixing of code and markup elements. That’s why there’s an easier way: action elements.
ACTION ELEMENTS
Action elements are the preferred way to add dynamic behavior to a Web page, and have mostly superseded scripting elements. JSP supports two types of actions: standard actions and custom actions. Both types must have an XML-style namespace prefix to uniquely identify them.
Standard actions use the jsp prefix and are defined as part of the JSP specification (). Line 23 contains a standard action element, <jsp:include>. This action functions like the C and C++ #include directive, placing the contents of the page attribute directly into the the response. In this case, since the page is another JSP, that page is executed and its output placed into the response. This action is typically used for shared page fragments such as headers and footers.
Other standard actions can let you access properties in a JavaBean or even let another Web resource continue the processing of a request. (For more on JavaBeans, see “Bean Soup.”)
Bean Soup
JavaBeans is the term used for a Java class that represents a “component” in its broadest sense. They are often used in JSP and servlets to encapsulate information (called properties) about data, such as customers and products. Each instance of a JavaBeans class is called a bean, and every bean contains only properties.
A JavaBeans class has a no-argument constructor. To read and set a property, you use the getPropertyName() and the setPropertyName() methods.
A typical use for a bean would hold information about a GUI widget, such as a text box. It might have properties for the text, font, color, size, etc. Another application could read and modify the bean’s properties.
Beans are also often used to represent business objects, such as a customer bean with properties like name, address, and telephone number. You can think of a bean as a container of information that can be discovered at runtime and accessed by other classes in a generic way, such as by the <c:expr> action elements shown in Listing One.
Despite the similarity in names, JavaBeans and Enterprise JavaBeans (EJB) don’t have a lot in common. Although EJB’s are also components similar to JavaBeans, they must follow a large set of rules and can only be used within a special EJB container that’s part of the J2EE package. For more on Enterprise JavaBeans, see Sun’s J2EE Web site ().
Custom actions are Java classes that follow a specific interface. These classes can access the entire set of Java APIs which let you do nearly anything you can do in Java. They are packaged into custom tag libraries that also contain a mapping of an action element to a Java method and are accessed from a JSP via the uri attribute of the taglib directive element.
Each action element also contains a prefix attribute. In lines 2 and 3, we’ve loaded two tag libraries and assigned them the prefixes c and mylib. These custom prefixes are used with the action element to specify which library contains that specific custom action.
Listing One: Sample JSP Page: foo.jsp
1 <%@ page contentType=”text/html” errorPage=”/error.jsp” %>
2 <%@ taglib prefix=”c” uri=”” %>
3 <%@ taglib prefix=”mylib” uri=”” %>
4
5 <html>
6 <body>
7 <h1>Product List</h1>
8 Here’s a list of our products as of
9 <%= new java.util.Date() %>
10
11 <mylib:getProducts var=”productList” />
12
13 <table>
14 <c:forEach items=”$productList” var=”current”>
15 <tr>
16 <td><c:expr value=”$current.name” /></td>
17 <td><c:expr value=”$current.descr” /></td>
18 <td><c:expr value=”$current.price” /></td>
19 </tr>
20 </c:forEach>
21 </table>
22
23 <jsp:include page=”/footer.jsp” />
24 </body>
25 </html>
For example, the action corresponding to the <mylib: getProducts> action element on line 11 can be found in the library associated with the mylib prefix by the taglib directive on line 3.
The call on line 11 to <mylib:getProducts> specifies that the container should call the getProducts action in the tag library associated with mylib. This calls a Java method (probably, but not necessarily called getProducts()). We don’t know exactly what it does, but based on its name, we can assume that it retrieves product information from a database, and saves it in the variable productList, as specified by the var attribute of the getProducts element. Since a custom action is, as its name implies, custom-made for a specific application, it’s up to the Java programmers on the team to decide which custom actions to develop, what they should do, and what they return.
Custom Tag Libraries and the JSTL
Many projects have “reinvented the wheel” by writing custom actions to handle simple tasks. The Apache Jakarta Taglibs Project () has a number of custom tag libraries that handle various tasks. One of these is the JSP Standard Tag Library (JSTL,), which is working to set standards that define actions for database access, flow control, XML processing, external resource access, and internationalization, as well as a language for easy access to request parameters and other data. The JSTL is currently in its Early Access Release and is scheduled to be officially released this summer.
The taglib directive on line 2 loads the JSTL Early Access library, and uses “c” as its prefix. The <c:forEach> action on line 14 is a loop that evaluates its body (the elements from lines 15 through to the end of the <c:foreach> action on line 20) once for each element in the collection specified by the items attribute, storing the current element in the variable named by the var attribute. In this example, the collection contains JavaBeans representing products which were retrieved by the <mylib:getProducts> custom action on line 11. The three <c:expr> actions on lines 16, 17, and 18 put the values of the current bean’s properties into the response.
JSP and J2EE
JSP is one of the technologies that makes up the J2EE, or Java 2 Enterprise Edition. J2EE is a collection of Java technologies that are useful for the server side of the enterprise application equation.
For enterprise applications, the system components are often assigned to different “tiers,” that can run on the same or different servers to provide scalability. In J2EE 1.3, the four tiers are the Client Tier, the Web Tier, the Business Tier, and the Enterprise Information System (EIS) Tier. The Client Tier holds general-purpose clients (e.g., HTML or WML browsers) as well as regular GUI applications. The Web Tier is made up of JSP and servlets. The Business Tier is the domain of the Enterprise JavaBeans (EJB). The EIS Tier contains databases and legacy systems. J2EE also includes APIs that can be used to access databases (JDBC), process XML (JAXP, etc.), use a naming and directory service (JNDI), and access a messaging service (JMS), among other things.
Figure One shows a scenario involving JSP and all the tiers. A user fills out a form in a Web browser (Client Tier) and submits it. The request is received by a servlet (Web Tier), that validates the input (date and number formats, for instance). It then locates an EJB component (Business Tier) responsible for processing this type of request. The EJB accesses a database (EIS Tier) that returns a result (e.g. a list of items retrieved from the database). When the servlet gets the result, it passes it on to the JSP page, which merges the dynamic data with static markup (a navigation bar, a common header and footer, etc.) and sends back the complete response to the browser. The communication between the servlet and the JSP is based a mechanism defined by the servlet specification: the RequestDispatcher and request attributes. You can read more about the RequestDispatcher in the servlet specification ().
So, do you need a complete J2EE environment to use JSP? Not at all. For a simple application, such as a searchable employee directory, a JSP that accesses a database using a custom action fits the bill perfectly. More complex applications often use a servlet for the request processing and JSP for the user interface. This gives you the best of both worlds: a programmer can use the full power of Java and APIs for request processing in the servlet, and a Web designer can design the site using JSP to add dynamic content. As long as you use only JSP and servlets, all you need is a Web container and the Java 2 Standard Edition (J2SE) environment.
Deploying your JSP
To deploy a JSP on your system you need a Web server that supports the JSP specification. Apache Tomcat 4 is the reference implementation for the JSP 1.2 and the Servlet 2.3 specifications and works great as a JSP development server. For more information on setting up Tomcat, see “Hangin’ With Tomcat” in the October 2001 issue ().
Once Tomcat is installed, adding a JSP is as simple as placing it in a directory underneath $CATALINA_HOME/ Webapps (remember, Tomcat expects to find applications in the Webapps directory and treats it as though it were equivalent to the document root). For example, you could copy our JSP file foo.jsp to $CATALINA_HOME/Webapps/examples/ jsp/mytest/foo.jsp and then access it by browsing the URL. Note that you do not necessarily need to do anything with the web.xml file, unless you need the special functionality it provides (see last month’s column on servlets at for more on the web.xml file).
Tomcat 4 comes with JSP examples that you can play with, as well as JavaDocs for both the JSP and Servlet APIs. The JSP examples are located in a Web application (directory) named examples, organized according to the standard layout defined by the servlet specification. This layout was described in detail in last month’s column.
By now, you’re hopefully beginning to see how the different pieces of the Java puzzle fit together and you can start to experiment on your own. A useful next step would be to further learn more about taglibs, which are covered in the extensive Tomcat documentation and on the Jakarta site as well (). You should also check out the JSP Specification, Sun’s tutorial (), and the JSPInsider () to learn more about JSP. | http://www.linux-mag.com/id/1058/ | CC-MAIN-2016-44 | refinedweb | 2,553 | 62.27 |
Sounds as SAP.Connector.dll (and SAP.Connector.Rfc.dll for 2.0) is not present in GAC or is not copied to the Bin-folder of the project. In 1.x version the assemblies are NOT installed to GAC, thus the "Copy flag" must be turned on. In 2.0 the assemblies are installed to GAC, thus the flag is usually off.
I´ve had the same problem after deploying my application.
Just check, as Reiner Hille-Doering wrote, if you set the properties of SAP.Connector "Lokale Kopie" -> "True" (German VS2003). Only then the copy of SAP.connector.dll in your bin folder is used. Otherwise GAC is used.
Gerhard Rausch
[WebServiceBinding(Name="", Namespace="urn:sap-com:document:sap:rfc:functions")]
public class SAPProxy1 : SAPClient
{
// Constructors
--->public SAPProxy1(){}
public SAPProxy1(string ConnectionString) : base(ConnectionString){}
that's the point where the webapp crashes....
in my browserwindow I receive among other things theese informations where 0z1tvv_5 differs every time I run my project:
=== Pre-bind state information ===
LOG: Where-ref bind. Location = C:\WINNT\TEMP\0z1tvv_5.dll
LOG: Appbase =
LOG: Initial PrivatePath = bin
Calling assembly : (Unknown).
===
LOG: Policy not being applied to reference at this time (private, custom, partial, or location-based assembly bind).
LOG: Attempting download of new URL.
I just figured it out: The process did not have any write-permissions in c:\winnt\temp though it could not create the temporarely DLLs
Add comment | https://answers.sap.com/questions/818935/index.html | CC-MAIN-2019-18 | refinedweb | 237 | 60.11 |
Buy some stuff on Amazon by clicking a pushbutton connected to your WiFi101 enabled Arduino. It's like a Dash button, but cooler!
Motivation
Back in the Spring of 2015 Amazon released the Dash Button to help facilitate frictionless product purchases with just the click of a button! For example this dash button ensures my pups insatiable appetite for Greenies dog treats is always well served and my pantry never runs low! You register the button to your amazon account and when you're about to run out you press the button, a couple days later voilá treats at your doorstep! Cool.
Then for the developer community Amazon released the AWS IoT button!
It's possibilities are endless! Amazon describes the AWS IoT Button."
This is great if you're a web savvy developer comfortable with AWS or are looking to learn more about the IoT services Amazon is now supporting. Unfortunately this boxes out us hardware hackers! Where's my I/O? What if mildly "device-specific code" is sort of your thing? Sure you could hack open a Dash Button and dive headfirst into some "bare metal" dash embedded dev like the talented folks at Adafruit have documented for you. But if you're still building your chops on Arduino and aren't quite ready to dive into hardware abstraction layers and shift registers you're still out of luck. Until now!
Amazon has released the Dash Replenishment API for device manufacturers and developers!
The good news... now we can create hardware devices that can initiate frictionless purchases all on their own! This means our coffee makers can purchase filters for us before we run out, or our laundry machines can order detergent based off of usage statistics. It's now up to us to build these creative frictionless purchasing devices.
The bad news... communicating directly between the Arduino and the Dash API involves OAuth2.0 handshakes, building POST requests, parsing JSON responses and other non-trivial tasks to be handled in a single Arduino sketch.
So to lower the barrier to entry for Arduino DRS hackers I've abstracted these boring details out and into the AmazonDRS Arduino Library. This way you can continue to focus on creating unique purchasing actuators(buttons are boring) and spend less time concatenating strings and exchanging tokens! With this amazonDashButton example sketch you can start purchasing items on Amazon after editing only a handful of lines of code.
requestReplenishmentForSlot(slotId); //It's that easy
Let's walk through getting all set up and get you ordering all your favorite vices on Amazon automatically!
Getting Started
Hardware
I've decided to use the Arduino MKR1000 as the hardware platform for this project, for a few reasons...
- Similarly sized form factor to dash button.
- 100% Arduino IDE compatible.
- Utilizes the Atmel ATWINC1500 WiFi Module(TLS capable w/ onboard SHA-256 encryption).
- ARM SAMD21 Cortex-M0 (256KB of Flash : ) ).
- LiPo Battery JST plug for portability.
That being said this library should work well with any ATWINC1500 WiFi enabled Arduino that uses the WiFi 101 library. Other options could include an Arduino Zero or Due sporting the WiFi101 shield, an Adafruit MO WiFi Feather, or really any Arduino with sufficient space, and Atmel's ATWINC1500 WiFi module.
Here's what you'll need...
The battery is of course optional and the resistor can be of any value really. We're just using it to pull down one of the push button pins to ground. The small push button will have 4 pins that are organized into two pairs. You'll want to bridge the gap in the breadboard by placing the button so that the longer gap between pins is utilized to span the space between the two sides of the breadboard. Then wire up the breadboard like so...
Any digital I/O pin will do for the pushbutton. Just remember to set
const int dashButton = 14; //DIO number of the pushbutton pin
to whichever pin you end up choosing. I've used jumper cables and a smaller breadboard so my final configuration looks like this...
If you feel like making this a bit more portable you can grab a prototype board, and a header plug to create a sort of push button shield for the MKR1000.
Software
The initial setup procedure and configuration does involve a good amount of steps but once you are set up and configured the development process becomes that much easier. I've written a full fledged getting started guide in the readme/wiki pages located at the AmazonDRS GitHub repo which I'll paraphrase in this section. If you haven't set up "Login with Amazon", AWS SNS, your dash device, or stepped through the authCodeGrant example sketch head over to the repo wiki and follow along. By the time you come back you should have...
- Imported the AmazonDRS library into your Arduino IDE, as well as the Arduino Wifi101 and ArduinoJson libraries.
- Created your LWA Security Profile, created a Dash Replenishment Device, and your Amazon Web Services Simple Notification Service configuration.
- Completed the LWA Authorization Code Grant process and exchanged your auth_code for your refresh token by running the authCodeGrant example sketch.
- Updated the AmazonTokens.h header file in your AmazonDRS library 'src' directory with values for...client id, client secret, refresh token, and redirect uri.
If you've made it this far you're ready to test your new Arduino dash button! Let's take a look at the sketch in detail.
amazonDashButton
#include "AmazonDRS.h" AmazonDRS DRS = AmazonDRS(); //WiFi creds ----------------------------------------------------------------- char ssid[] = ""; // your network SSID (name) char pass[] = ""; // your network password //---------------------------------------------------------------------------- #define slotNumber 1 //dash buttons typically only serve one product/slot const int dashButton = 14; //DIO number of the pushbutton pin static long buttonHigh = 0; //millis of last button push for switch debounce static String slotStatus = "";//boolean if slot is available for replenishment static String slotId = ""; //unique slot id void setup() { Serial.begin(115200); while (!Serial) { ; // wait for serial port to connect. Needed for native USB port only } pinMode(dashButton, INPUT); //Start up DRS DRS.begin(ssid,pass); //initialize slots DRS.retrieveSubscriptionInfo(); //check slot statuses slotStatus = DRS.getSlotStatus(slotNumber); slotId = DRS.getSlotId(slotNumber); } void loop() { //Check for button push on the arduino dash button //if the slot status is true proceed to request replenishment for the associated slot if (buttonPushed()) { //Check if slot is available, if so replenish if(slotStatus == "true") //if the product in slot are available { //we have a match! replenish the products associated with that slot! DRS.requestReplenishmentForSlot(slotId); } else { Serial.print("Sorry, slot "); Serial.print(slotId); Serial.println(" is not available at this time"); } } } bool buttonPushed(void) { int buttonState = digitalRead(dashButton); if(buttonState && ((millis() - buttonHigh) > 5000)) { buttonHigh = millis(); Serial.println("Button pressed!!"); return true; } else { return false; } }
The sketch starts out by including our libraries header file and instantiating our DRS object which we'll use to access the DRS API endpoints exposed by the library. The first lines you'll need to edit are the WiFi SSID and password.
//WiFi creds ----------------------------------------------------------------- char ssid[] = "yourSSIDhere"; // your network SSID (name) char pass[] = "yourPasswordHere"; // your network password //----------------------------------------------------------------------------
Fill in these values so we can initialize DRS and connect our button to WiFi. Next we'll want to take note of some constants and global variables.
#define slotNumber 1 //dash buttons typically only serve one product/slot
SlotNumber refers to the number representing the slot in your Dash Replenishment Device that you created earlier on. You may have created multiple slots for the device, currently this sketch is only configured to purchase one item from one slot. (Just like a dash button).
const int dashButton = 14; //DIO number of the pushbutton pin static long buttonHigh = 0; //millis of last button push for switch debounce
These variables are used to handle details around the button push. Be sure to set dashButton = to whichever digital I/O pin you've decided to connect to your push button.
ButtonHigh is a sort of flag thats sole purpose is to prevent you from overloading the sketch/API with a burst of subsequent purchase requests. Switches can send some mixed signals when switching from pressed to released and vice versa. This just prevents those fluctuations from being processed as additional requests to purchase. You'll notice once the button press is registered it takes about 5 seconds for the switch to respond again. That magic happens here in the buttonPushed() function.
if(buttonState && ((millis() - buttonHigh) > 5000))
Check out this great article on switch debounce if for some reason this topic really gets your juices flowing. The author does a great job of explaining this topic in detail and suggests a much more elegant software debounce solution that would come in handy for applications that require more frequent button pushes.
static String slotStatus = "";//boolean if slot is available for replenishment static String slotId = ""; //unique slot id
SlotStatus and SlotId will store the information contained in the response of...
DRS.retrieveSubscriptionInfo(); //check slot statuses
This method is responsible for carrying out the /subscriptionInfo API endpoint. It'll return to us and store the slotId and slotStatus for the slotNumber we requested.
slotStatus = DRS.getSlotStatus(slotNumber); slotId = DRS.getSlotId(slotNumber);
If for whatever reason that product isn't available our sketch will now be able to let us know. We also now possess the slotId which we'll need to pass to...
DRS.requestReplenishmentForSlot(slotId);
in order to place your order. If all goes well you'll receive an OrderPlacedNotification and an e-mail alerting your to the purchase. Any subsequent requests for replenishment will return an OrderInProgress response.
Ok, so burn this sketch on over to your Arduino and open up a serial terminal! Don't forget to set the baud rate to 115200. Give it a moment to connect to the network and update the status of your devices slots. Then go ahed... push the button... do it!
Don't worry about actually purchasing something and having to cancel the order. Back when you created the "Login with Amazon" consent request we tagged the device as a test device. So we'll still receive purchase notifications as if the product was bought, but that's it. If you get a shipment notification you might want to back track!(and expect a surprise gift in a ~2days)
Yay!
You've done it! You've just initiated a purchase of a product on Amazon by pressing a pushbutton connected to an Arduino! This is great! But if you're like me this will start to get your mind racing on how else you can creatively "Push" the button and order things on Amazon.
Need some more inspiration? There are a bunch of great ideas bouncing around the Amazon DRS Developer Challenge. Also stay tuned for another project write up I'm working on which incorporates NFC! You can take a sneak peak by checking out the amazonDashNfc example sketch.
If you have any questions feel free to drop a line in the comments and I'll do my best to help out!
Enjoy! | https://www.hackster.io/bcarbs/amazon-dash-button-for-arduino-937dd1?ref=challenge&ref_id=78&offset=2 | CC-MAIN-2019-43 | refinedweb | 1,841 | 64.1 |
According to the C Library Manual, the prototype for function s17acc looks like this:
#include <nag.h>
#include <nags.h>

double s17acc(double x, NagError *fail);

The function takes two arguments. The first one is of type double and is the argument x of the Bessel function Y0(x). The second argument to s17acc is the error-handling argument fail, a pointer to a NagError structure. See the NAG C Library Manual for more information about the NagError type; we need not be overly concerned with it here except to know that it allows us to get feedback regarding the correct operation of s17acc.
To keep things as simple as possible for this first example, we are not going to try to pass the contents of the NagError structure back to Java. Thus, in our Java program, we declare the function like this:
// Declaration of the Native (C) function
private native double s17acc(double x);

i.e. a method with a double argument, returning double. The native keyword tells the Java compiler that the method will be implemented outside Java.
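It is worth knowing what happens if a native method is called before any library providing it has been loaded. The following self-contained sketch (the class and method names here are invented for illustration, not part of the Bessel example) shows that the JVM throws an UnsatisfiedLinkError lazily, at the first call to the unresolved method:

```java
public class MissingNative {
    // Declared native, but no library providing the implementation is loaded.
    native double bogusNative(double x);   // hypothetical method name

    static boolean callFails() {
        try {
            new MissingNative().bogusNative(1.0);
            return false;
        } catch (UnsatisfiedLinkError e) {
            // Thrown lazily, at the first call to the unresolved method.
            return true;
        }
    }

    public static void main(String[] args) {
        System.out.println("native call failed: " + callFails());
    }
}
```

This is why the library must be loaded before the first native call, which is the purpose of the static initializer described next.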
Under UNIX, the call

System.loadLibrary("nagCJavaInterface");

will search for a library named "libnagCJavaInterface.so", whereas under Microsoft Windows it will search for a library named "nagCJavaInterface.dll".
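You can inspect the platform-specific file name that loadLibrary will look for by using System.mapLibraryName, a standard java.lang.System method. This small sketch prints the decorated name for whatever platform it runs on:

```java
public class LibName {
    public static void main(String[] args) {
        // mapLibraryName applies the platform's decoration rules:
        // "libnagCJavaInterface.so" on Linux,
        // "nagCJavaInterface.dll" on Windows.
        String decorated = System.mapLibraryName("nagCJavaInterface");
        System.out.println(decorated);
    }
}
```

Running this on your build machine is a quick way to confirm exactly what the shareable library you create later must be called.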
public class Bessel {

    // Declaration of the Native (C) function
    private native double s17acc(double x);

    static {
        // The runtime system executes a class's static
        // initializer when it loads the class.
        System.loadLibrary("nagCJavaInterface");
    }

    // The main program
    public static void main(String[] args) {
        double x, y;
        int i;

        /* Check that we've been given an argument */
        if (args.length != 1) {
            System.out.println("Usage: java Bessel x");
            System.out.println("  Computes Y0 Bessel function of argument x");
            System.exit(1);
        }

        // Create an object of class Bessel
        Bessel bess = new Bessel();

        /* Convert the command line argument to a double */
        x = new Double(args[0]).doubleValue();

        System.out.println();
        System.out.println("Calls of NAG Y0 Bessel function routine s17acc");

        for (i = 0; i < 10; i++) {
            /* Call method s17acc of object bess */
            y = bess.s17acc(x);
            System.out.println("Y0(" + x + ") is " + y);
            /* Increase x and repeat */
            x = x + 0.25;
        }
    }
}
The main program simply gets a value of x from the command line, and calls the native method using that argument and nine other arguments derived from it.
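The argument handling and the loop that derives the ten x values can be exercised on their own, without the native library. In this sketch, Double.parseDouble is used as the equivalent of the new Double(args[0]).doubleValue() idiom in the listing above (the class name ArgDemo is invented for illustration):

```java
public class ArgDemo {
    public static void main(String[] args) {
        // Same conversion as in Bessel.main, via the equivalent parseDouble.
        String arg = (args.length == 1) ? args[0] : "1.0";
        double x = Double.parseDouble(arg);

        // The ten arguments that would be passed to s17acc:
        // x, x + 0.25, ..., x + 2.25 (0.25 is exactly representable,
        // so the increments introduce no rounding error).
        for (int i = 0; i < 10; i++) {
            System.out.println("call " + i + " with x = " + x);
            x = x + 0.25;
        }
    }
}
```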
Now that we have written our Java program, which includes the native declaration of function s17acc, we can compile it with the following command:
% javac Bessel.java
If all goes well, the compiler should produce a file named Bessel.class.
Next, generate a C header file declaring the native method by running the javah tool on the compiled class:

% javah -jni Bessel
After this, you should have a file named Bessel.h which looks like this:
/* DO NOT EDIT THIS FILE - it is machine generated */
#include <jni.h>
/* Header for class Bessel */

#ifndef _Included_Bessel
#define _Included_Bessel
#ifdef __cplusplus
extern "C" {
#endif
/*
 * Class:     Bessel
 * Method:    s17acc
 * Signature: (D)D
 */
JNIEXPORT jdouble JNICALL Java_Bessel_s17acc
  (JNIEnv *, jobject, jdouble);

#ifdef __cplusplus
}
#endif
#endif
Points to note about this header file:
JNIEXPORT jdouble JNICALL Java_Bessel_s17acc (JNIEnv *, jobject, jdouble);
The function is named Java_Bessel_s17acc, showing the Java class in which it is declared.
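The generated name follows the JNI short-name rule: "Java_", then the fully qualified class name with dots replaced by underscores, another underscore, then the method name (underscores occurring inside the names themselves would additionally be escaped as "_1", which this sketch ignores). A small illustration of the rule, using a hypothetical helper:

```java
public class JniName {
    // Sketch of the JNI short-name rule (ignores the "_1" escaping that
    // applies when class or method names themselves contain underscores).
    static String jniName(String className, String methodName) {
        return "Java_" + className.replace('.', '_') + "_" + methodName;
    }

    public static void main(String[] args) {
        System.out.println(jniName("Bessel", "s17acc"));
        // -> Java_Bessel_s17acc
        System.out.println(jniName("com.example.Bessel", "s17acc"));
        // -> Java_com_example_Bessel_s17acc
    }
}
```

This is why moving the Bessel class into a package would change the required C function name, and the header must then be regenerated.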
JNIEXPORT and JNICALL are defined via <jni.h>. They are used to alter calling conventions on some systems and need not concern us here.
Now we write the C implementation file, BesselImp.c:

#include <jni.h>     /* Java Native Interface headers */
#include "Bessel.h"  /* Auto-generated header created by javah -jni */

#include <nag.h>     /* NAG C Library headers */
#include <nags.h>

/* Our C definition of the function s17acc declared in Bessel.java */
JNIEXPORT jdouble JNICALL
Java_Bessel_s17acc(JNIEnv *env, jobject obj, jdouble x)
{
    double y = 0.0;
    static NagError fail;

    /* Tell the routine we want no output messages on failure */
    fail.print = Nag_FALSE;

    /* Call the Y0(x) Bessel function s17acc */
    y = s17acc(x, &fail);

    if (fail.code != 0)
        printf("Error: s17acc returned fail code %d for argument %g\n",
               fail.code, x);

    return y;
}

Points to note:
First compile the file BesselImp.c:
% gcc -c -fPIC -I/opt/jdk1.6.0_11/include -I/opt/jdk1.6.0_11/include/linux \ -I/opt/NAG/cll6a09dhl/include BesselImp.c
Note the -I switches telling the C compiler where to look for header files. The first directory mentioned, /opt/jdk1.6.0_11/include, must locate the jni.h header file. The second directory, /opt/jdk1.6.0_11/include/linux, is machine dependent and is needed by jni.h to find type definitions. At least the linux element of this name will alter depending on your machine type. The third and fourth include directories should point to the location of the NAG C Library header files installed on your system.
When BesselImp.c has successfully compiled, turn it into a shareable object with the command
% ld -G -z defs BesselImp.o -o libnagCJavaInterface.so \ /opt/NAG/cll6a09dhl/lib/libnagc_nag.so -lm -lc -lpthread
The -G flag means create a shareable object. The -z defs flag means fail to link unless all symbols are resolved at link time. This flag is not strictly necessary, but it can avoid egregious "failed to load library" Java run-time messages (egregious because the messages may refer to libnagCJavaInterface.so when the problem really is due to something else needed by that library). The -o flag names the shareable library as libnagCJavaInterface.so, the name needed by the LoadLibrary() call in our Java code. Finally, the -lm and -lc flags ensure that we link with required system mathematical and C run-time libraries.
Note that on other UNIX machines it may be necessary to add further libraries at link time, depending on the operating system and the version of the NAG Library being used.
We compile and build the DLL in one step:.dllAs under UNIX, the three -I switches tell the C compiler where to look for header files. The first directory mentioned, c:\jdk1.6.0_11\include, must locate the jni.h header file. The second directory, c:\jdk1.6.0_11\include\win32, is the Windows version of the machine-dependent directory needed by jni.h to find type definitions. The third include directory, "c:\Program Files\NAG\CL09\clw3209dal\include", should point to the location of the NAG C Library header files on your system. "c:\Program Files\NAG\CL09\clw3209dal\lib\CLW3209DA_nag.lib" is the location of the NAG C Library installed on your system. The /Gz compiler option (use the __stdcall calling convention) is IMPORTANT. Without it, the code may compile and link, and even start running, but eventually it may cause an access violation. The -LD flag means "build a DLL". The -Fe flag names the output file as nagCJavaInterface.dll.
% java Bessel 1.0The expected output looks like this:
Calls of NAG Y0 Bessel function routine s17acc Y0(1.0) is 0.08825696421567678 Y0(1.25) is 0.25821685159454094 Y0(1.5) is 0.38244892379775886 Y0(1.75) is 0.4654926286469062 Y0(2.0) is 0.510375672649745 Y0(2.25) is 0.5200647624572783 Y0(2.5) is 0.4980703596152319 Y0(2.75) is 0.44865872156913167 Y0(3.0) is 0.37685001001279034 Y0(3.25) is 0.288286902673087
Tip: If you get a Java error message saying that the interface library nagCJavaInterface cannot be found, or that the NAG C Library cannot be found, you may need to set an environment variable to tell the system where to look. The environment variable name is operating-system dependent.
% setenv LD_LIBRARY_PATH .:/opt/NAG/cll6a09dhl/libwill ensure that both the current directory (.) and directory /opt/NAG/cll6a09dhl get searched.
% javac Bessel.java
% javah -jni Bessel
% gcc -c -fPIC -I/opt/jdk1.6.0_11/include -I/opt/jdk1.6.0_11/include/linux \ -I/opt/NAG/cll6a09dhl/include BesselImp.c % ld -G -z defs Bessel.where Bessel 1.0 | http://www.nag.co.uk/doc/techrep/html/Tr2_09/Example1/TRExample1.html | CC-MAIN-2013-20 | refinedweb | 1,266 | 59.4 |
Java HTTPS client FAQ: Can you share some source code for a Java HTTPS client application?
Sure, here’s the source code for an example Java HTTPS client program I just used to download the contents of an HTTPS (SSL) URL. I actually found some of this in a newsgroup a while ago, but I can’t find the source today to give them credit, so my apologies for that.
I just used this program to troubleshoot a problem with Java and HTTPS URLs, including all that nice Java SSL keystore and cacerts stuff you may run into when working with Java, HTTPS/SSL, and hitting a URL.
I’ve found through experience that this Java program should work if you are hitting an HTTPS URL that has a valid SSL certificate from someone like Verisign or Thawte, but will not work with other SSL certificates unless you go down the Java keystore road.
Example Java HTTPS client program
Here’s the source code for my simple Java HTTPS client program:
package foo; import java.net.URL; import java.io.*; import javax.net.ssl.HttpsURLConnection; public class JavaHttpsExample { public static void main(String[] args) throws Exception { String httpsURL = ""; URL myUrl = new URL(httpsURL); HttpsURLConnection conn = (HttpsURLConnection)myUrl.openConnection(); InputStream is = conn.getInputStream(); InputStreamReader isr = new InputStreamReader(is); BufferedReader br = new BufferedReader(isr); String inputLine; while ((inputLine = br.readLine()) != null) { System.out.println(inputLine); } br.close(); } }
Just change the URL shown there to the HTTPS URL you want to access, and hopefully everything will work well for you.
Authentication failure?
I try and do this with my URL being google mail:
and I get:
Exception in thread "main" java.io.IOException: Authentication failure
at sun.net.
at sun.net.
at sun.net.
at com.sun.net.ssl.internal.
at HTTPSExample.main(HTTPSExample.java:43)
Java HTTPS problem
Hmm, I'm still on the road traveling, but I just hit that URL from my laptop here in the hotel and it worked fine. The "Authentication failure" error message makes me wonder if you're going through some sort of proxy, like WebSense?
Awesome!
You're my hero! This works perfectly!
Thanks
Nice One
Was using com.sun.net.ssl.HttpsURLConnection which is Deprecated and causing a casting issue.
Your solution worked a treat.
Thanks
Works like a charm!!
Thank you, man! I was struggling with some other examples I've found, this is just simple and effective for what I need. Keep walking!
Great help !
You save the world.
Thank you !
This example is useful. | http://alvinalexander.com/index.php/comment/1910 | CC-MAIN-2019-47 | refinedweb | 423 | 64.51 |
Introduction
Continuous Integration/Delivery has gained widespread acceptance in the minds of developers, and has become an important aspect of the quick release cycles in the software industry. However, adopting continuous integration is not going to bring any benefits, if we don’t make sure that our build system can run in a CI environment with multiple stages (i.e. pipeline).
A highly granular build process requires the split of unit tests according to their speed, execution environment, and general stability. In this tutorial, we will learn how to split our JUnit tests into different categories, so that rather than executing them all at once, a different subset is active for any given build stage. This is an essential requirement in large enterprise projects with test suites that take a lot of time to finish.
We will cover:
- The combination of a custom naming scheme along with surefire exclusions,
- The adoption of the failsafe plugin for integration tests, and
- The usage of Categories which were added in newer JUnit versions.
We will also learn how all of these methods can be combined, for the ultimate testing pipeline.
Prerequisites
It is assumed that we already have a Java project with JUnit tests, that we can build locally but wish to automate on a build server. We will need:
- A sample Java project with JUnit tests,
- A valid
pom.xmlfile that builds the project,
- Maven installed (the command
mvnshould be available in your command line), and
- Internet access to download Maven dependencies.
It is also assumed that we already know our way around basic Maven builds. If not, then feel free to consult its official documentation first.
Using Default Maven Phases in a Build Pipeline
When setting up a build server for a Maven project, we should directly map the so-called Maven phases to build stages. The default Maven lifecycle has several phases. It may seem natural to us to map the Maven phases to build stages, in a continuous integration pipeline.
Here are some descriptions of Maven phases that are fit for a CI environment:
- compile – compile the source code of the project,
- test – test the compiled source code using a suitable unit testing framework, and
- deploy – done in the build environment, copies the final package to the remote repository.
The names of Maven phases can be deceptive. We may assume that these present a one-to-one mapping to build stages, and attempt to create a simple pipeline like this:
This pipeline might work for really small projects, but it is not enough for a production ready environment.
The Need for Fast Feedback
The problem with the basic pipeline, presented in the previous section, is the fact that all unit tests are executed in a single step. Unfortunately, on a large-scale project, this would be a major mistake because not all unit test have the same weight, stability, or speed.
On a large-scale project, with many developers, we want the basic feedback cycle to be really quick. By feedback cycle we define the moment when a developer commits a change until the moment the basic build runs, and unit tests are executed.
At any given case, we want this cycle to last under 5 minutes. Developers should know right away if their code breaks something critical on the project. Having a longer feedback cycle is the root of a lot of shortcomings in the build process.
However, there are some projects (e.g. bank software or hospital software), where the sheer amount of unit tests makes this 5 minute cycle impossible.
To achieve the 5 minute feedback cycle, we need to split the tests and define priorities in the way they run. Different projects might have different priorities, but in most cases we can see that some general categories quickly emerge:
- JUnit tests that focus on the capabilities of a single Java class,
- JUnit tests that need mocks to run, but are still confined in one or two Java modules,
- JUnit tests that require a database, a web server, or an external system in order to run,
- JUnit tests that read or write big files from the filesystem, and
- End-to-end tests that hit REST endpoints or even HTML pages.
Our goal is to select a subset of all these tests, and split them from the other build steps, after making sure that they are running under 5 minutes.
Here is the conversion of a pipeline, where the main feedback time has been reduced from 30 minutes to 2 minutes, by splitting tests into two categories.
That was the theoretical part, and the rest of the article will show us different ways we can split our JUnit tests.
Splitting JUnit Tests According to Execution Speed
A first step in splitting our tests is to examine their execution speed. Plain unit tests, that depend only on Java code, usually run very fast (i.e. in milliseconds), while tests that need a filesystem or a database may run slower (i.e. in seconds).
We’re going to split:
- Fast tests that run in milliseconds as a first build step, and
- Slow tests that need several minutes as a second build step.
Developers will be more confident about their commit if the first step has finished successfully, instead of waiting for all the unit tests to be finished.
Using Surefire Exclusions to Split Fast Tests from Slow Tests
A very basic way to split unit tests, is by using the build system itself. Maven supports different profiles on the build that can be customized to run different tests.
The Surefire Maven plugin, that is responsible for running JUnit tests, can be configured to exclude or include specific tests, according to a custom naming scheme.
For the purposes of this tutorial, we will assume that all fast unit tests have the word “Fast” somewhere in their name. We need to rename all our unit tests with this naming scheme.
Here is an example:
public class FastUnitTest { @Test public void aQuickTest(){ [..code that finishes in milliseconds...] } @Test public void anotherQuickTest(){ [..code that finishes in milliseconds...] } }
In a similar manner, slow unit tests contain “Slow” in their name.
public class SlowUnitTest { @Test public void anotherLengthyUnitTest() { [...code that is very slow..] } }
We will instruct Maven to run these tests with different profiles.
Even though we could create two profiles, one for the slow tests and one of the fast tests, it is far easier to leave the fast tests running on the default profile, and run the slow tests only if they are explicitly required.
To instruct Maven to run only the fast tests by default, we will modify the
pom.xml as following:
<build> <plugins> <plugin> <groupId>org.apache.maven.plugins</groupId> <artifactId>maven-surefire-plugin</artifactId> <version>2.19.1</version> <configuration> <includes> <include>**/*Fast*</include> </includes> </configuration> </plugin> [....other plugins here...] </plugins> </build>
The important line here is the
include directive. This accepts Ant-style directory syntax. The double asterisks make the pattern recursive across all test directories. This means that we can have a deep hierarchy of “fast” tests in several folders. As long as they have the word “Fast” somewhere in their name, they will be executed.
The result of this
pom.xml is that if we now run
mvn test on our command line, only fast tests will be executed. The rest will be ignored. This means that the first build step should now take only seconds to run, or 2-5 minutes in the worst case.
For the slow unit tests, we will create a Maven profile that replicates the surefire configuration, but with a different naming pattern. Here is the respective segment from
pom.xml
<profiles> <profile> <id>slow-tests</id> <build> <plugins> <plugin> <groupId>org.apache.maven.plugins</groupId> <artifactId>maven-surefire-plugin</artifactId> <version>2.19.1</version> <configuration> <includes> <include>**/*Slow*</include> </includes> </configuration> </plugin> </plugins> </build> </profile> </profiles>
If you notice the
include directive in the configuration part, you will see that it now looks only at the tests that have the word Slow somewhere in their name.
We will name this custom profile “slow-tests”. The final result is that if we now run
mvn test -Pslow-tests, only the slow tests will run, and the fast ones will be ignored.
We are now ready to split our pipeline in the following steps:
mvn compile(as before),
mvn test(for fast tests only),
mvn test -Pslow-tests(for slow tests only).
This is a good first step to split unit tests. In the next sections, we will refine this concept even further.
Splitting JUnit Tests According to Execution Environment
Another way to split tests is to observe their running environment. Two very obvious categories are
- Plain unit tests, that need only the Java source code, and
- Integration tests, that require a running instance of the application or a subset of it.
The first category is what most people would think of as “unit” tests, meaning tests that focus on one or two Java classes.
The second category comprises of tests that need an application server, database, special filesystem or even an external system to run. End-to-end tests, functional tests, integration tests and even performance tests fall into this category.
The general rule is that plain unit tests, which depend only on the Java source code, are usually faster and simpler to run, and running them on their own is a quick enhancement to the speed of the pipeline.
Integration tests might require some setup on their own (e.g. the launch of the application server), before they can actually run. Having them at a later stage in the pipeline is a very common technique to speed-up the main build.
Using the Failsafe Plugin to Split Plain Unit Tests from Integration Tests
We could use Maven profiles, as shown in the previous section, to further subdivide our tests. For integration tests, however, we do not need to do this, as Maven has a separate plugin that is specifically made to handle them.
The Maven failsafe plugin activates the extra Maven phases that come after the package phase. These are:
pre-integration-test,
integration-test,
post-integration-test, and
verify.
Using the failsafe plugin has two advantages compared to basic Maven profiles:
- We gain two phases to setup and tear-down the testing environment, the pre-integration-test and post-integration-test ones.
- Maven will only fail the build in the verify stage (i.e. if a unit test has failed), instead of stopping at the integration-test phase, which would leave a running environment in an unknown state.
To use the failsafe plugin we need to add it on our
pom.xml file as below:
<build> <plugins> <plugin> <groupId>org.apache.maven.plugins</groupId> <artifactId>maven-failsafe-plugin</artifactId> <version>2.19.1</version> <executions> <execution> <id>integration-test</id> <goals> <goal>integration-test</goal> </goals> </execution> <execution> <id>verify</id> <goals> <goal>verify</goal> </goals> </execution> </executions> </plugin>
The Maven failsafe plugin splits JUnit tests with a different naming scheme as well. By default, it will treat all JUnit files with a name that starts or ends with IT as integration tests.
A normal JUnit test, like below, will not be executed by the failsafe plugin as it does follow the IT naming convention.
public class PlainUnitTest { @Test public void simpleTest(){ [..code that checks a Java class...] } @Test public void anotherSimpleTest(){ [..code that checks a Java class...] } }
The following JUnit test will be executed by surefire as its name ends in IT.
public class DbRelatedIT { @Test public void aTestWithDBaccess() { [...code runs inside the application server..] } }
The final result is that with the failsafe plugin enabled, we can run the following:
mvn test(will run only the basic unit tests, and will stop the build if any of them fails),
mvn integration-test(will run integration tests, and will not stop the build if any of them fails), and
mvn verify(will stop the build if an integration test fails).
The main advantage of the failsafe plugin, is that it allows us to hook on the pre and post integration test phases, to set-up or tear down the execution environment. Here is an example:
The basic unit tests run, then another stage sets up the application server (e.g. it could launch Jetty or deploy the war to Weblogic), after that the integration tests run, the application server is stopped, and the result of the integration tests is examined.
Splitting JUnit Tests Into Logical Subsets
The two techniques shown so far, surefire exclusions and the failsafe plugin, are great for splitting JUnit tests when we’re working on a legacy project, that has an old version of JUnit.
Since version 4.8, JUnit comes with Categories, which is the modern way of splitting unit tests. If we have the option to upgrade JUnit to a new version, we should use this feature for large-scale projects.
Unlike the solutions shown before, JUnit categories allow us to create a deep hierarchy of unit test types. Each test can belong to multiple categories at once.
Using JUnit Categories to Create a Hierarchy of Unit Tests
JUnit categories don’t depend on any specific naming scheme. Instead, we can use the standard Java annotations to categorize our tests.
Firstly, we have to create marker interfaces, one for each category. Here are some examples:
public interface Fast { /* category marker */ } public interface Slow { /* category marker */ } public interface Integration { /* category marker */ } public interface Smoke { /* category marker */ } public interface QA { /* category marker */ } public interface DbRelated { /* category marker */ }
Once we have these categories, we can annotate any unit test with any combination of these categories. Here are some examples:
@Category(Fast.class) public class FastUnitTest { @Test public void oneAndOne(){ [...code redacted for brevity...] } } @Category({Slow.class,Smoke.class}) public class SlowUnitTest { @Test public void anotherLengthyUnitTest() throws InterruptedException{ [...code redacted for brevity...] } }
We can now select which categories we want to run on the command line by passing the
groups parameter to the Maven executable. Examples:
mvn test -Dgroups="com.codepipes.Fast"
mvn test -Dgroups="com.codepipes.Smoke,com.codepipes.Fast"
Alternatively, we can set up the category directly in the
pom.xml file
<plugin> <groupId>org.apache.maven.plugins</groupId> <artifactId>maven-surefire-plugin</artifactId> <version>2.19.1</version> <configuration> <groups>com.codepipes.Fast</groups> </configuration> </plugin>
The XML
groups element is supported by both the surefire plugin, and the failsafe plugin. This allows us to gradually move to JUnit categories, even if we’ve already used the failsafe plugin for integration tests.
With Semaphore CI the different
mvn commands can be mapped directly to build commands in a thread giving you insight on the run time of each stage.
Combining all of the Techniques
Even though JUnit categories are very flexible, there is nothing stopping us from mixing all of the previously mentioned techniques together. It is possible to use Maven profiles along with the failsafe plugin, with custom category annotations on the same code base.
Scaling the Pipeline as the Project Grows
A large-scale enterprise project will often require a lot of JUnit categories. A valid scenario is to start with surefire exclusions at the inception of the code, adopt the failsafe plugin once the test/staging environment is active, and finally employ JUnit categories as the project nears completion status.
Here is an example of multiple unit test splits in a big build pipeline:
The build steps are:
- Compilation of code (takes 2 minutes),
- Basic unit tests (2 minutes),
- Integration tests with a mocked application server (10 minutes),
- Deployment to a real application server (2 minutes),
- REST endpoint tests (30 minutes),
- Deployment to a QA server (2 minutes),
- End-to-end browser testing (30 minutes),
- Deployment to performance server (2 minutes),
- Parallel running of performance tests (1 hour),
- Deploy to production server (2 minutes), and
- Run smoke tests (1 minute).
In this example, developers can notice critical problems after 4 minutes ,if the basic unit tests fail, or possibly after 14 minutes, if the integration tests fail, even though the total running time of all the tests is more than 2 hours.
Conclusion
Big enterprise projects require the setup of pipelines with different categories of unit tests. This allows developers to quickly find out about failed builds, without waiting for the full test suite to run, which may require hours.
If we’re using a recent version of JUnit 4, then JUnit categories is the best technique for splitting the running of unit tests along different build stages.
If our project still uses old JUnit version, we can use surefire exclusions or the failsafe plugin, and for the maximum flexibility on unit test categories, we can combine all of these techniques at once. | https://semaphoreci.com/community/tutorials/how-to-split-junit-tests-in-a-continuous-integration-environment | CC-MAIN-2019-47 | refinedweb | 2,793 | 52.7 |
After having explored the basics of firebase and react, I thought I'd use them all together in this tutorial. In this three part series, I am going to create another todo app. I'm going to use react, the basics of which I covered here where I made a simpler version of the same app. I'm also going to use react routing, which I also covered in this post.
Since I don't want this tutorial to be very long, I'm going to add firebase to this project in part two. In that tutorial, we'll move the data from our react component state to the firebase database. Then in part three we'll add authentication where users can add their own private todo items.
Create the static markup
First we'll quickly create the basic design of the app. Everything I'll do here I have already covered else where. Let's start by installing the package we need for routing in react.
yarn add react-router-dom
The
App component is going to be the main component. It will hold the state and the logic of the application. However, let's start by creating the basic structure. If you want to start in codesandbox that means start editing in
index.js. If you create a react application through the terminal, you start in
src/App.
import React, {Component} from 'react' import { BrowserRouter, Route, Link } from 'react-router-dom'; import './App.css' class App extends Component { state = { items: { 1123: { item: 'item one', completed: false }, 2564321: { item: 'item two', completed: true } } } render() { return ( <BrowserRouter> <div className="wrap"> <h2>A simple todo app</h2> <ul className="menu"> <li><Link to={'/'}>To do</Link></li> <li><Link to={'/completed'}>Completed</Link></li> </ul> <Route exact { lis } </ul> ) } } /> <Route exact { lis } </ul> ) } } /> </div> </BrowserRouter> ); } } export default App;
When loading the app in your browser, you'll be able to navigate between the homepage and
/completed and see the difference.
For an explenation on how the above code works, read my previous tutorial on the basics of React Router
Using child components
Let's create a child component which will take care of the duplicate code. Create a file at
components/ItemsComponent.js and add the following code.
import React from 'react' const ItemsComponent=({items, done})=> { let lis = [] let mark = done === false ? '\u2713' : 'x'; for(let i in items){ if(items[i].completed === done){ lis.push(<li key={i}>{items[i].item} <span >{mark}</span></li>) } } return(<ul className="items"> {lis} </ul> ) } export default ItemsComponent;
That is a stateless functional component, as you can see, it doesn't need a class (a shout out to @omensah for nudging me on this direction). It's perfect for cases like these, where the logic doesn't require to make use of functionality that we'd otherwise inherit from
Component class. Cory House has perfectly compared the two styles in this post
Let's modify the
App component to make use of
ItemsComponent which will also clarify the deconstructed arguments in line 2.
import ItemsComponent from './components/ItemsComponent'; class App extends Component { .. return ( <BrowserRouter> <div className="wrap"> ... <Route exact path="/" render={props => <ItemsComponent items={this.state.items} done={false}/> }/> <Route exact path="/completed" render={props => <ItemsComponent items={this.state.items} done={true}/> }/> </div> </BrowserRouter> ); } } export default App;
We render the
ItemsComponent component using
render rather than using the
component attribute, which I covered when writing about react routers because we needed to pass it the items a boolian to signal which items to display. With that the use of ES6 deconstruction is self explanatory:
const ItemsComponent=({items, done})=> { ... }
The above could have otherwise be writen as
const ItemsComponent=(props)=> { ... }
Which we would have had to then reach in the
props object to retrieve
items or
done.
Adding actions
The first two actions that we'll work on are the ability to mark an item as complete, and also completely delete any completed item.
As I said the
App component is going to be the main component. It holds our main state. So let's write the methods that modifies the state.
class App extends Component { state = { items: { 1123: { item: 'item one', completed: false }, 2564321: { item: 'item two', completed: true } } } completeItem=(id)=>{ let items = { ...this.state.items, [id]: {...this.state.items[id], completed: true } } this.setState({ items }) } deleteItem = (id) => { let {[id]: deleted, ...items} = this.state.items; this.setState({ items }) } ...
completeItem method takes the items from the current state, then we select the item with the relevant
id, and finally change its
completed property to
true.
Deleting the relevant object is slightly different. I'm currently trying to learn more about the spread operator and that's why I added it above. I found the snippet ... guess where? ... at stackoverflow
completeItem and
deleteItem methods need to be passed to the
ItemsComponent
render() { return ( ... <Route exact path="/" render={props => <ItemsComponent items={this.state.items} done={false} action={this.completeItem} /> }/> <Route exact path="/completed" render={props => <ItemsComponent items={this.state.items} done={true} action={this.deleteItem} /> }/> ... )
Finally we just strap
action to an
onClick event over at
components/itemsComponent.js
const ItemsComponent=({items, done, action})=> { let lis = [] let mark = done === false ? '\u2713' : 'x'; for(let i in items){ if(items[i].completed === done){ lis.push(<li key={i}>{items[i].item} <span onClick={()=> action(i)}>{mark}</span></li>) } } return(<ul className="items"> {lis} </ul> ) }
Note, the only thing that's changed is the deconstruction of the
action method in the first line. Then I added it to the span.
i is the id of each object within the
items object.
Adding items
A todo Application is no good if users can't add items. At the moment, the item's are hard coded, but that was to help us get to this point.
The way this will work is that I want users to be able to add new items only when thay are viewing the uncompleted items, in other words, only when they are at the root path and not the
/completed path. Let's add the input box inside the
components/ItemsComponent.js file:
const ItemsComponent=({items, done, action})=> { ... return ( <div> {done ? (<ul className="items"> {lis} </ul>) : ( <div> <form> <input type="text" /> </form> <ul className="items"> {lis} </ul> </div> )} </div> ); }
Remember,
done is a boolian, if
true it means the items are marked as completed, hence we do not want to see the form, else, we do.
React requires the outer div to wrap the entire output, and it also requires the
form and
ul to be wrapped with an element.
Finally, just as with the delete and complete operations, we'll add the form's logic at
App components and link it via props with the form. Let's create the logic in
App.js
class App extends Component { ... addItem=(e)=> { e.preventDefault(); let items = { ...this.state.items, [new Date().valueOf()]: { item: this.todoItem.value, completed: false } } this.setState({ items }); } render() { return ( ... <Route exact path="/" render={props => <ItemsComponent ... addItem={this.addItem} inputRef={el => this.todoItem = el} /> }/> ... ); } }
addItem will execute on form submit. Then it simply adds an item to the state.
new Date().valueOf() is a basic way of creating a unique id.
this.todoItem.value is created from the
inputRef attribute that we created in
ItemsComponent. You can read more about Refs (as they are called) in the documentation
Now let's use
addItem and
inputRef in the form over at
ItemsComponent.js.
const ItemsComponent=({items, done, action, addItem, inputRef})=> { ... return ( ... <form onSubmit={addItem}> <input ref={inputRef} </form> <ul className="items"> {lis} </ul> ... ); }
We attach the
input node as a reference to
inputRef (which is passed through props).
Conclusion
So far we have a basic react application where we can add items, mark them as complete then delete any that are completed. We also made use of routing to differentiate between the two.
The completed project can be found at github. I'll have a branch for each tutorial.
The next tutorial is going to connect the react state with the Firebase database.
Discussion (0) | https://dev.to/aurelkurtula/creating-an-app-with-react-and-firebase---part-one-814 | CC-MAIN-2021-21 | refinedweb | 1,341 | 57.98 |
Understand the structure of an echo bot
APPLIES TO: SDK v4
The Bot Framework templates and samples are written for ASP.NET (C#), restify (JavaScript), and aiohttp (Python). However, the web service features are not part of the Bot Framework SDK, but part of the web framework you choose to use.
All bot applications share some common features.
You can create an echo bot from the templates, as described in the quickstarts (for C#, JavaScript, or Python), or you can copy an echo bot project from the Microsoft/BotBuilder-Samples repository.
The C# and JavaScript templates have built-in support for streaming connections. This article doesn't cover streaming features; for information about streaming connections, see how to connect a bot to Direct Line Speech.
Prerequisites
- Knowledge of bot basics.
- A copy of the echo bot sample in C#, JavaScript, or Python.
Bot templates
A bot is a web application, and templates are provided for each language.
The Bot Framework includes both VSIX and dotnet templates.
The templates generate an ASP.NET MVC Core web app. If you look at the ASP.NET fundamentals, you'll see similar code in files such as Program.cs and Startup.cs. These files are required for all web apps and are not bot specific.
Note
The VSIX package includes both .NET Core 2.1 and .NET Core 3.1 versions of the C# templates. When creating new bots in Visual Studio 2019, you should use the .NET Core 3.1 templates. The current bot samples use .NET Core 3.1 templates. You can find the samples that use .NET Core 2.1 templates in the 4.7-archive branch of the BotBuilder-Samples repository. For information about deploying .NET Core 3.1 bots to Azure, see how to deploy your bot to Azure.
The appsettings.json file specifies the configuration information for your bot, such as its.
The EchoBot.csproj file specifies dependencies and their associated versions for your bot. This is all set up by the template and your system. Additional dependencies can be installed using NuGet package manager or the
dotnet add package command.
Resource provisioning
The bot as a web app needs to create a web service, bot adapter, and bot object.
Many bots would also create the storage layer and memory management objects for the bot, but the echo bot does not require state. Other bots would also create any objects external to the bot object or adapter that either need to consume.
In ASP.NET, you register objects and object creation methods in the Startup.cs file.
The
ConfigureServices method loads the connected services, as well as their keys from appsettings.json or Azure Key Vault (if there are any), connects state, and so on. Here, the adapter and bot are defined to be available through dependency injection.
Then, the
Configure method finishes the configuration of your app.
ConfigureServices and
Configure are called by the runtime when the app starts.
Messaging endpoint
The template implements a web service with a messaging endpoint. The service extracts the authentication header and request payload and forwards them to the adapter.
The C# and JavaScript SDKs support streaming connections. While the echo bot does not use any of the streaming features, the adapter in the template is designed to support them.
Each incoming request represents the start of a new turn.
Controllers\BotController.cs
//, HttpGet] public async Task PostAsync() { // Delegate the processing of the HTTP POST to the adapter. // The adapter will invoke the bot. await Adapter.ProcessAsync(Request, Response, Bot); } }
The bot adapter
The adapter receives activities from the messaging endpoint, forwards them to the bot's turn handler, and catches any errors or exceptions the bot's logic doesn't catch.
The adapter allows you to add your own on turn error handler.
Startup.cs
The adapter to use is defined in the
ConfigureServices method.
// Create the Bot Framework Adapter with error handling enabled. services.AddSingleton<IBotFrameworkHttpAdapter, AdapterWithErrorHandler>();
AdapterWithErrorHandler.cs
public class AdapterWithErrorHandler : CloudAdapter { public AdapterWithErrorHandler(IConfiguration configuration, IHttpClientFactory httpClientFactory, ILogger<IBotFrameworkHttpAdapter> logger) : base(configuration, httpClientFactory, logger) { await turnContext.SendActivityAsync("The bot encountered an error or bug."); await turnContext.SendActivityAsync("To continue to run this bot, please fix the bot source code."); // Send a trace activity, which will be displayed in the Bot Framework Emulator await turnContext.TraceActivityAsync("OnTurnError Trace", exception.Message, "", "TurnError"); }; } }
The bot logic
The echo bot uses an activity handler and implements handlers for the activity types it will recognize and react to, in this case, the conversation update and message activities.
- A conversation update activity includes information on who has joined or left the conversation. For non-group conversations, both the bot and the user join the conversation when it starts. For group conversations, a conversation update is generated whenever someone joins or leaves the conversation, whether that's the bot or a user.
- A message activity represents a message the user sends to the bot.
The echo bot welcomes a user when they join the conversation and echoes back any messages they send to the bot.
Startup.cs
The bot to use is defined in the
ConfigureServices method.
// Create the bot as a transient. In this case the ASP Controller is expecting an IBot. services.AddTransient<IBot, EchoBot>();
Bots\EchoBot.cs
public class EchoBot : ActivityHandler {); } } } }
Next steps
- Learn how to send and receive text messages
- Learn how to send welcome messages to users | https://docs.microsoft.com/en-us/azure/bot-service/bot-builder-create-a-bot-project?view=azure-bot-service-4.0 | CC-MAIN-2021-21 | refinedweb | 897 | 59.9 |
How to capture/store screen shot
We all are familiar with taking screen shot manully by pressing “Print Screen” key. There are two alternative for using “Print Screen” key
1) Alt+PrtScn -Image of active screen
2) PrtScn - Image of Desktop
Here I am trying to simulating above two method using C#.
/// <summary>
/// capture/store the screen shot
/// </summary>
/// <param name="currentScreen"> true-Active screen </param>
/// <returns>image</returns>
public static Image GetScreenShot(bool currentScreen)
{
if (currentScreen)
// Simulate Alt+PrtScn keypress.
SendKeys.Send("{%}({PRTSC})");
else
// Simulate PrtScn keypress.
SendKeys.Send("{PRTSC}");
// Get the image from the clipboard.
return (Image)Clipboard.GetImage();
}
In the above example, GetScreenShot() function gets the screen shot and returns an Image object from the Clipboard.
pl tell me wat are the namespaces required | http://www.dotnetspider.com/resources/19731-How-capture-store-screen-shot.aspx | CC-MAIN-2018-05 | refinedweb | 127 | 55.24 |
Closed
Core 2 Duo E6300 vs. Pentium D 945
DennyCraneBL
I know that the Core 2 Duo will generally trump the D 945 in every way, but as I'm on a budget which I can just stretch to get the E6300 despite the 945, which will offer more bang for my money??
23 answers Last reply
More about core e6300 pentium
-
-
- Will a cheaper motherboard impact me more on Core 2 Duo than Pentium D?
The one I have in mind is this:
Does that offerd good overclocking potention for Core 2 Duo?
- If you can afford it, get the core2. You might not need the performance now, but it's nice to have it around if you ever do.
However, you'll probably do fine with the Pentium D. If you're on a tight budget, and want to save a few $, there's nothing wrong with this option.
What is your budget for the upgrade, out of curiosity, and what parts are you planning to replace (or is it just the CPU?), and what are you planning to reuse? Just want to make sure we know what kind of advice to give.
edit Just saw your post - I don't know a lot about that dual-vsta board. From what I've heard, though, it's probably not a great overclocker, but is basically the only option if you want to use your DDR RAM instead of buying DDR2.
-
-
-
- Quote:I'm replacing everything except drives ... I might use my current DDR memory and maybe put the money towards a better processor.
My budget is around £280, I've spent £60 on case/cooling.
- OK, thats good to know.
Can anybody reccomend a Motherboard? I am trying to muster up some more money, as I hear the C2D has good overclocking potential and I'd like a motherboard which can overclock - unless it comes at an incredible cost.
The trouble I am having is the need for ATA connectors, most seem to have only 1 which would support 2 devices ... but I have 2 disk drives and 2 optical drives on IDE.
How is the Asus P5VD2-MX?
- I am currently running an e6300 on a P5VD2-MX and it's solid. Not a good overclocker though, or rather, not an overclocker at all.
Nice cheap MB though and it has the "normal" number of IDE connectors.
Only flipside is that the VIA Sata ports are SATA 1 and the JMicron SATA controller, which is SATA 2, have one internal and one external. So using the JMicron SATA for a RAID setup is only doable by routing a SATA cable back into the case. The JMicron is hotpluggable though, which is good. Have a SATA caddy and the ability to insert SATA drives without booting is nice.
I am looking to upgrade to a board that will OC, but no mATX for C2D currently have good OC abilities. I am holding out for January/February where the nVidia based boards should start arriving and hopefully that will mean mATX boards with better OC ability.
-
-
-
- Gigabyte S3 is probably THE best OC'ing mobo at the cheapest price. Excellent reviews everywhere, and ALOT of people are hitting 3.0 ghz on stock cooling on the E6300. With your aftermarket HSF you would be able to overclock the snot out of a 6300 on an S3 board, assuming decent RAM.
Do you need a 6300 for the uses you're proposing? No. A Sempron would do fine. But when you look at the performance return (ie-investment return) on some extra $$, the 6300 is unmatched.
Will a 6300 perform better? Hell yes, unquestionably. If you OC it, there will be a massive performance increase.
You may not require that extra peformance now, but you might in the future. It will open doors to do other things that a Sempron can only dream about. So it's not about today, any CPU will work today. It's about a year or 2 from now, whether it will handle future things. And the C2D is the best bet to prevent a quick trip to the "Obsolete Aisle" at your house.
-
- I'd advise against getting two single-core systems or getting a Sempron. Remember, a Dual-Core CPU is recommended for HD-DVD/Blu-Ray playback! I've got an Asus P5B Deluxe and it works great. In fact, most P5B derivatives are very good - the P5B, P5B Deluxe/WiFi-AP, and the P5B-E. The P5B and P5B-E might suit your needs better, as they are both cheap and overclock quite nicely.
-
- An Asus P5B Deluxe/WiFi-AP iP965, S775 is £128 - $250, without tax its around $200.
Most motherboards are about the same here, without tax, I mean we do earn more money than the average American so I can understand higher costs, and we get healthcare so I can understand higher taxes, but that motherboard seemed over priced still.
The Asus P5V-VM DH is floating my boat right now, the Digital Home features look very good, and thats costing around $100 with tax - I just wonder how it will overclock, the P5B costs a bit more than the P5V but it would be worth it to crank out some more power from the processor.
-
- Since you're on a very tight budget and is actually considering a Pentium D over a 6300, I have a few questions (rhetorical) and a different suggestion.
1. Do you already have DDR memory and/or an AGP video card (on the computer that is as old as a C64)?
If yes - definitely consider the Asrock 775Dual-VSTA. It can use your old memory and vcard, as well as let you slowing upgrade to PCI-E and DDR2. Works with pentium D and Core 2 Duo.
It does not overclock well, but I don't know why you'd even want to consider overclocking. It's not going to help you surf the net faster nor will it make any difference if your video games "run on a Commodore 64".
A stock 6300 should have zero problems watching a movie while running a virus scanner in the back - no need for an overclock there.
But here's a suggestion to consider - there are many cheap (very cheap, like a quarter of the cost of the ASUS boards - at least in Canada) standalone DVD players that support DivX/Xvid. Use this player for all your movie watching. Use your old computer to keep on DL'ing the movies, and burn them to DVD or re-recordable DVD.
This pretty much solves all your given requirements. I could never recommend someone upgrading their computer if all they do is surf the internet with it.
Seriously - that's what I'd do - just get the standalone player. If you're budget is really that tight, and your requirements really that low, if you really still feel the need to spend extra money for some reason, I highly recommend you wait until March to do it. RD600 motherboard should be out by then as well as more selection for 680i - and hopefully P965 and i975x prices will drop even more by then. There's also the "low-end" Core 2 Duo chips (E3400? ). They're supposed to be cheaper than the 6300, but rumored to overclock quite well, making them the best bang for the buck. They'll be released in 2007 I'm told.
March is typically one of best times to buy hardware (pricewise). Before Xmas is not.
Keep saving until March, and I think you'll be able to buy a much better system for longterm use for the same budget you have right now.
-
-
- But when spending $150.....for another meagre 22%, you can get a CPU that will destroy the 945. Seriously, what is another $25 on the total system cost? And mobo prices have come down, so the total cost to go with C2D is negligible when compared against the total cost of the system. Maybe 10-15% difference, tops? The performance payoff now, and in the future, will be far greater. So that, is what we call an excellent return on your investment
The 6300 simply beats any other chip out there when you consider the price-performance. It draws little power, overclocks like a cheetah on steroids, and all for a negligible price increase. If it were stocks, I'll sell everything I own to scoop them up because the investment and return is unmatched, regardless if he wants to just surf the web.
By buying a clearly inferior chip and technology for approx 85% of the price of a C2D system, that money is a waste because he's stuck with it. Might as well not spend anything. That extra 15%, however, will make a world of difference as to what he can do now and in the future on his computer. Chaining himself to a boat anchor now, when he can get a speedboat for about a $100 extra.....now THAT'S a helluva deal that, quite simply, cannot be matched, bar none.
Related Resources
Ask a new question
CPUs Core Movies Product
Related Resources
- Pentium M vs. Core 2 Duo
- INTEL Core 2 Duo E6300 vs INTEL PENTIUM D
- Dual core and pentium D
- Pentium Dual Core OR Core 2 Duo
- Pentium Dual Core or Core 2 Duo?
- Upgrad my processor from pentium 4 ht 3 2GHz to core 2 duo e6300 1 86GHz
- Pentium d 930 v intel core 2 duo e6300
- Changing Dual Core to Single Core?
- Pentium Dual core OC'd and Core 2 Duo gaming
- Pentium 4 641+ 3.2GHz Dual-Core versus Core 2 Duo E4500
- Pentium dual core E5800 vs core 2 duo E7500
- Core 2 duo 2.66 ghz or Pentium Dual Core G640
- Intel Pentium D (dual core) 2 x 2 8GHZ CPU vs core 2 duo 1 8 ghz
- Better between Core i3( Gen 1) , Pentium Dual Core (Gen 2), Amd A4-A10
- Intel Pentium Dual Core E6300 Hot?!!? | http://www.tomshardware.com/forum/210572-28-core-e6300-pentium | CC-MAIN-2018-09 | refinedweb | 1,684 | 79.8 |
What would make a survey submission invalid? The only likely error case for our QuestionVoteForm is if no answer is chosen. What happens, then, if we attempt to submit a survey with missing answers? If we try it, we see that the result is not ideal:
There are at least two problems here. First, the placement of the error messages, above the survey questions, is confusing. It is hard to know what the first error message on the page is referring to, and the second error looks like it is associated with the first question. It would be better to move the error messages closer to where the selection is actually made, such as between the question and answer choice list.
Second, the text of the error message is not very good for this particular form. Technically the list of answer choices is a single form field, but to a general user the word field in reference to a list of choices sounds odd. We will correct both of these errors next.
Coding custom error message and placement
Changing the error message is easy, since Django provides a hook for this. To override the value of the error message issued when a required field is not supplied, we can specify the message we would like as the value for the required key in an error_messages dictionary we pass as an argument in the field declaration. Thus, this new definition for the answer field in QuestionVoteForm will change the error message to Please select an answer below:
class QuestionVoteForm(forms.Form):
answer = forms.ModelChoiceField(widget=forms.RadioSelect,
queryset=None,
empty_label=None,
error_messages={'required':
'Please select an answer below:'})
Changing the placement of the error message requires changing the template. Instead of using the as_p convenience method, we will try displaying the label for the answer field, errors for the answer field, and then the answer field itself, which displays the choices. The {% for %} block that displays the survey forms in the survey/active_survey.html template then becomes:
{% for qform in qforms %}
{{ qform.answer.label }}
{{ qform.answer.errors }}
{{ qform.answer }}
{% endfor %}
How does that work? Better than before. If we try submitting invalid forms now, we see:
While the error message itself is improved, and the placement is better, the exact form of the display is not ideal. By default, the errors are shown as an HTML unordered list. We could use CSS styling to remove the bullet that is appearing (as we will eventually do for the list of choices), but Django also provides an easy way to implement custom error display, so we could try that instead.
To override the error message display, we can specify an alternate error_class attribute for QuestionVoteForm, and in that class, implement a __unicode__ method that returns the error messages with our desired formatting. An initial implementation of this change to QuestionVoteForm and the new class might be:
class QuestionVoteForm(forms.Form):
answer = forms.ModelChoiceField(widget=forms.RadioSelect,
queryset=None,
empty_label=None,
error_messages={'required':
'Please select an answer below:'})
def __init__(self, question, *args, **kwargs):
super(QuestionVoteForm, self).__init__(*args, **kwargs)
self.fields['answer'].queryset = question.answer_set.all()
self.fields['answer'].label = question.question
self.error_class = PlainErrorList
from django.forms.util import ErrorList
class PlainErrorList(ErrorList):
def __unicode__(self):
return u'%s' % ' '.join([e for e in sefl])
The only change to QuestionVoteForm is the addition of setting its error_class attribute to PlainErrorList in its __init__ method. The PlainErrorList class is based on the django.form.util.ErrorList class and simply overrides the __unicode__ method to return the errors as a string with no special HTML formatting. The implementation here makes use of the fact that the base ErrorList class inherits from list, so iterating over the instance itself returns the individual errors in turn. These are then joined together with spaces in between, and the whole string is returned.
Note that we're only expecting there to ever be one error here, but just in case we are wrong in that assumption, it is safest to code for multiple errors existing. Although our assumption may never be wrong in this case, it's possible we might decide to re-use this custom error class in other situations where the single possible error expectation doesn't hold. If we code to our assumption and simply return the first error in the list, this may result in confusing error displays in some situations where there are multiple errors, since we will have prevented reporting all but the first error. If and when we get to that point, we may also find that formatting a list of errors with just spaces intervening is not a good presentation, but we can deal with that later. First, we'd like to simply verify that our customization of the error list display is used.
Debug page: Another TemplateSyntaxError
What happens if we try submitting an invalid survey now that we have our custom error class specified? An attempt to submit an invalid survey now returns:
Oops, we have made another error. The exception value displayed on the second line makes it pretty clear that we've mistyped self as sefl, and since the code changes we just made only affected five lines in total, we don't have far to look in order to find the typo. But let's take a closer look at this page, since it looks a little different than the other TemplateSyntaxError we encountered.
What is different about this page compared to the other TemplateSyntaxError? Actually, there is nothing structurally different; it contains all the same sections with the same contents. The notable difference is that the exception value is not a single line, but is rather a multi-line message containing an Original Traceback. What is that? If we take a look at the traceback section of the debug page, we see it is rather long, repetitive, and uninformative. The end portion, which is usually the most interesting part of a traceback, is:
Every line of code cited in that traceback is Django code, not our application code. Yet, we can be pretty sure the problem here was not caused by the Django template processing code, but rather by the change we just made to QuestionVoteForm. What's going on?
What has happened here is that an exception was raised during the rendering of a template. Exceptions during rendering are caught and turned into TemplateSyntaxErrors. The bulk of the stack trace for the exception will likely not be interesting or helpful in terms of solving the problem. What will be more informative is the stack trace from the original exception, before it was caught and turned into a TemplateSyntaxError. This stack trace is made available as the Original Traceback portion of the exception value for the TemplateSyntaxError which is ultimately raised.
A nice aspect of this behavior is that the significant part of what is likely a very long traceback is highlighted at the top of the debug page. An unfortunate aspect is that the significant part of the traceback is no longer available in the traceback section itself, thus the special features of the traceback section of the debug page are not available for it. It is not possible to expand the context around the lines identified in the original traceback, nor to see the local variables at each level of the original traceback. These limitations will not cause any difficulty in solving this particular problem, but can be annoying for more obscure errors.
Note that Python 2.6 introduced a change to the base Exception class that causes the Original Traceback information mentioned here to be omitted in the display of the TemplateSyntaxError exception value. Thus, if you are using Python 2.6 and Django 1.1.1, you will not see the Original Traceback included on the debug page. This will likely be corrected in newer versions of Django, since losing the information in the Original Traceback makes it quite hard to debug the error. The fix for this problem may also address some of the annoyances previously noted, related to TemplateSyntaxErrors wrapping other exceptions.
Fixing the second TemplateSyntaxError
Fixing this second TemplateSyntaxError is straightforward: simply correct the sefl typo on the line noted in the original traceback. When we do that and again try to submit an invalid survey, we see in response:
That is not a debug page, so that is good. Furthermore, the error messages are no longer appearing as HTML unordered lists, which was our goal for this change, so that is good. Their exact placement may not quite be exactly what we want, and we may want to add some CSS styling so that they stand out more prominently, but for now they will do.
Summary
In this article, on encountering the debug page, we learned about all of the different sections of the debug page and what information is included in each. For the debug page encountered, we used the information presented to locate and correct the coding error.
If you have read this article you may be interested to view : | https://www.packtpub.com/books/content/handling-invalid-survey-submissions-django | CC-MAIN-2016-50 | refinedweb | 1,516 | 51.78 |
TL;DR – The Python map function is for applying a specified function to every item in an iterable (a list, a tuple, etc.) and showing the results.
Contents
The basic syntax of the Python map() function
The syntax for writing Python
map() function looks like this:
map(function, iterable, ...)
Let's breakdown each element:
- Function:
map()will apply this function to every item.
- Iterable: the objects that you will map.
Note: iterable is an object in a list or another sequence that can be taken for iteration. Iteration refers to the process of taking every object and applying a function to it. Basically, this is what the Python
map()function does.
Using map in Python with different sequences
List
A list in Python is a collection of objects that are put inside brackets ([ ]). Lists are more flexible than sets since they can have duplicate objects.
To show the
map result in a list, insert this line at the end of the code.
print(list(result))
Here is a Python
map() example with a list to complete mathematical calculations.
def mul(n): return n * n numbers = [4, 5, 2, 9] result = map(mul, numbers) print(list(result))
The output will look like this:
[16, 25, 4, 81]
Alternatively, you can use lambda to get the same result if you don't want to define your function first. For example:
numbers = (4, 5, 2, 9) result = map(lambda x: x * x, numbers) print(list(result))
Note: lambda is a keyword for building anonymous functions. They help you avoid repetitive code when you need to use the same function multiple times in your program. Additionally, lambda is frequently used for small functions that you do not want to define separately.
Theory is great, but we recommend digging deeper!
Set
A set is an unordered sequence of items that you have to put inside braces ({ }). In a set, you can't have duplicate items.
Here is how you print the results of
map when using sets:
print(set(result))
Let's say we want to change
'g', 'b', 'e', 'b', 'g' to uppercase, and eliminate duplicate letters from the sequence. The following code performs this action:
def change_upper_case(s): return str(s).upper() chars = {'g', 'b', 'e', 'b', 'g'} result = map(change_upper_case, chars) print(set(result))
This is how you can do the same with lambda:
chars = ['g', 'b', 'e', 'b', 'g'] result = list(map(lambda s: str(s).upper(), chars)) print(set(result))
These are the remaining items in the result. However, since a set is unordered, the sequence will change every time you run the program.
{'E', 'G', 'B',}
Tuple
A tuple is a collection of immutable objects. Unlike a list, it can't be changed, and you have to surround it with parentheses [( )].
To get a tuple result, change your print function to look like this.
print(tuple(result))
We'll use the same uppercase argument with a tuple. This is how the code would look like:
def change_upper_case(s): return str(s).upper() char = (5, 'n', 'ghk') result = map(change_upper_case, char) print(tuple(result))
With the lambda expression, the code looks like this:
char = (5, 'n', 'ghk') result = tuple(map(lambda s: str(s).upper(), char)) print(tuple(result))
The output of both examples is as follows:
('5', 'N', 'GHK')
Using multiple map arguments
What if you need to use a
map() function with more than one argument? Have a look at the following example:
def addition(x, y): return x + y numbers1 = [5, 6, 2, 8] numbers2 = [7, 1, 4, 9] result = map(addition, numbers1, numbers2) print(list(result))
Here is the code in the lambda expression:
numbers1 = [5, 6, 2, 8] numbers2 = [7, 1, 4, 9] result = map(lambda x, y: x + y, numbers1, numbers2) print(list(result))
Both will return the following result:
[12, 7, 6, 17]
You can replace
print(list) with
print(set) or
print(tuple) to change the output.
Python map: useful tips
- To write the Python
mapfunction faster, we suggest that you use lambda expressions. However, if you want to reuse the
deffunction, it is important to use non-lambda expressions.
- Displaying results in a set can help you delete multiple duplicate items from a large database. | https://www.bitdegree.org/learn/python-map | CC-MAIN-2020-16 | refinedweb | 704 | 61.16 |
From: Gary Powell (Gary.Powell_at_[hidden])
Date: 2000-05-30 14:08:40
Looks neat.
I request that the use of
#include <boost/...>
be changed to
#include "boost/..."
I was recently informed that <> was reserved for standard libraries only.
Since boost isn't quite there yet, lets use ""
Also for insert_unique and insert_equal that take a range, would it be worth
it to call m_impl.reserve(m_impl.size() + distance(first,last) ); ? I know
it puts another restriction on the implementation, (requires a "reserve(N)"
fn) but the added efficiency of allocating the additional space in one call
could be worth it.
-Gary-
gary.powell_at_[hidden]
Boost list run by bdawes at acm.org, gregod at cs.rpi.edu, cpdaniel at pacbell.net, john at johnmaddock.co.uk | https://lists.boost.org/Archives/boost/2000/05/3205.php | CC-MAIN-2019-13 | refinedweb | 127 | 61.73 |
- NAME
- SYNOPSIS
- DESCRIPTION
- PLATFORMS
- USAGE
- NOTES ON THE WINDOWS AND LINUX/UNIX PRINT SPOOLERS
- BUGS
- AUTHORS
- TODO
- Changelog
NAME
Printer.pm - a low-level, platform independent printing interface (curently Linux and MS Win32. other UNIXES should also work.)
This version includes working support for Windows 95 and some changes to make it work with windows 2000 and XP.
SYNOPSIS);
Special options for print_command under Windows
$prn->print_command('MSWin32' => {'type' => 'command', 'command' => MS_ie});
Under Windows, the print_command method accepts the options MS_ie, MS_word and MS_excel to print data using Internet Explorer, Word and Excel.
DESCRIPTION.
PLATFORMS.
USAGE
Open a printer handle
.
Define a printer command to use
pipe - the specified print command accepts data on a pipe.
file - the specified print command works on a file. The Printer module replaces $spoolfile with a temporary filename which contains the data to be printed
command
This specifies the command to be used.
Select the default printer
).
List available printers
%printers = list_printers().
This returns a hash of arrays listing all available printers. The hash keys are:
%hash{name} - printer names
%hash{port} - printer ports
$printer->print($data); $printer->print(@pling);
Print a scalar value or an array onto the print server through a pipe (like Linux)
List queued jobs
).
NOTES ON THE WINDOWS AND LINUX/UNIX PRINT SPOOLERS
.
BUGS
list_jobs needs writing for win32
AUTHORS
Stephen Patterson (steve@patter.mine.nu)
David W Phillips (ss0300@dfa.state.ny.us)
TODO
Make list_jobs work on windows.
Port to MacOS. Any volunteers?
Changelog
0.98
use_default adjusted for Windows XP to pick the first available printer.
Added windows subroutines for MS IE, Word and Excel.
Basic windows printing (ASCII text) migrated from a collection of crufty code which was depending on backwards compatibility features removed in windows 2000/XP to use Edgars Binans Win32::Printer module. You can call $printer->print_orig() to use the pre 0.98 printing routines should you need to.
0.97a, 0.97b, 0.97c, 0.97d
Sequential fixes to work with 'use strict' and '-w'
list_printers and use_default updated to look at the right parts of the registry on windows 2000.
Printing an array actually works now.
0.97
Bug which produced: Can't modify constant item in scalar assignment at line 224 fixed.
Unix and Win32 specific code split from the general routines.
0.96
Some bugs which generated warnings when using -w fixed thanks to a patch from David Wheeler
0.95c
Author's email address changed
0.95b
Bug when using print_command with the command option on linux fixed.
0.95a
sundry bug fixes
0.95
added support for user defined print commands.
0.94c
removed unwanted dependency on Win32::AdminMisc
added support of user-defined print command
0.94b
added documentation of the array input capabilities of the print() method
windows installation fixed (for a while)
0.94a
glaring typos fixed to pass a syntax check (perl -c)
0.94
uses the first instance of the lp* commands from the user's path
more typos fixed
list_jobs almost entirely rewritten for linux like systems.
0.93b
Checked and modified for dec_osf, solaris and HP/UX thanks to data from David Phillips.
Several quoting errors fixed.
0.93a
list_jobs returns an array of hashes
list_printers exported into main namespace so it can be called without an object handle (which it doesn't need anyway).
0.93
Printing on windows 95 now uses a unique spoolfile which will not overwrite an existing file.
Documentation spruced up to look like a normal linux manpage.
0.92
Carp based error tracking introduced.
0.91
Use the linux routines for all UNIXES.
0.9
Initial release version | https://metacpan.org/pod/Printer | CC-MAIN-2016-30 | refinedweb | 605 | 58.69 |
Hello All,
I am trying to use Fiji on a Windows Amazon Web Services (AWS) instance. I need to use AWS because the analysis I am trying to do goes beyond what my computer can handle, and I am using a Windows instance because it automatically has a GUI so I can see what Fiji is doing.
I set up the instance using the instructions provided here () and was able to download and run Fiji without an issue. However, when I try to run a Python script in Fiji I see the error:
console: Failed to install ‘’: java.nio.charset.UnsupportedCharsetException: cp0.
I see this problem has been reported before (Script editor error when working with python scripts), but when I attempt to implement the proposed solution (adding -Dpython.console.encoding=UTF-8 to the command line when launching) Fiji simply fails to launch (the icon pops up very briefly and then nothing happens).
Further when I attempt to run:
from ij import IJ
imp = IJ.getImage()
print imp
from the tutorial here (), I get the previous error and a new one:
Traceback (most recent call last):
  File "New_.py", line 3, in <module>
    at ij.IJ.abort(IJ.java:2266)
    at ij.IJ.getImage(IJ.java:1603)
java.lang.RuntimeException: java.lang.RuntimeException: Macro canceled
I am not sure what is causing either of these errors or if they are related, but if anyone can offer me advice on addressing them, or tell me how they got Fiji working in an AWS Windows instance, I would greatly appreciate it. If I can provide anymore information to be helpful please let me know. I am new to both Fiji and AWS so sorry if I missed something.
Thanks,
Sandy | https://forum.image.sc/t/issues-scripting-in-aws-windows/25179 | CC-MAIN-2020-45 | refinedweb | 288 | 59.23 |
I'm programming the A.I. for a 2D soldier that has to shoot another soldier. I use ray casting to calculate the bullet trajectory. For example in diagram A the shooter is at (10,13) and the target is at (5,1) so the bullet will be traveling at -0.4167 units per tick on the X axis and -1.000 units per tick on the Y axis. The algorithm I use to get those number is as follows:
// get the distance on each axis
tx = abs(pta.x - ptb.x)
ty = abs(pta.y - ptb.y)

// make the longer axis 1.0 and the shorter axis a smaller number
if (tx == ty)
    tx = ty = 1.0;
elseif (tx > ty)
    ty = ty / tx;   // divide by the longer axis before overwriting it
    tx = 1.0;
else
    tx = tx / ty;
    ty = 1.0;
endif

// invert any axis that is negative
if (pta.x > ptb.x) tx = -tx;
if (pta.y > ptb.y) ty = -ty;
This is fine, but now I want to skew the soldier's accuracy a little so that each bullet is fired at a slightly different angle, as shown in diagram B. I have two questions:
1) I probably need to change the angle by a few hundredths or so every shot to simulate inaccuracy, but my technique is based on a grid instead of angles. What would be the easiest way to modify the firing angle in this way?
2) How can I retrieve a list of the pink tiles in diagram B? I could run the algorithm on every possible angle but that could be slow and my program needs to be as fast as possible. I need the list because, as I stated earlier, this is an A.I. and the guy needs to know if there is risk of friendly fire.
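For concreteness, here's the rough shape of what I'm imagining, sketched in Python rather than my engine's language (the function names and the 0.03-radian error bound are placeholders I made up):

```python
import math
import random

def jittered_direction(ax, ay, bx, by, max_error_rad=0.03):
    """Direction from (ax, ay) to (bx, by), skewed by a small random angle."""
    angle = math.atan2(by - ay, bx - ax)
    angle += random.uniform(-max_error_rad, max_error_rad)
    return math.cos(angle), math.sin(angle)

def tiles_on_ray(ax, ay, dx, dy, max_steps=100):
    """List the grid tiles a ray from tile (ax, ay) passes through (simple DDA)."""
    # normalize so the longer axis advances one tile per tick, as in my code above
    scale = max(abs(dx), abs(dy)) or 1.0
    sx, sy = dx / scale, dy / scale
    tiles = []
    x, y = ax + 0.5, ay + 0.5  # start from the centre of the shooter's tile
    for _ in range(max_steps):
        tile = (math.floor(x), math.floor(y))
        if not tiles or tiles[-1] != tile:
            tiles.append(tile)
        x += sx
        y += sy
    return tiles
```

Running tiles_on_ray once for each extreme angle (-max_error_rad and +max_error_rad) and taking the union of the results would approximate the pink-tile set without sampling every possible angle in between.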
Thanks in advance. By the way, in case you haven't noticed already, I suck at math. Just saying.
Edited by Nübček Pænus, 16 April 2014 - 09:59 PM. | http://www.gamedev.net/topic/655619-need-help-calculating-imprecise-bullet-trajectories-in-a-2d-plane/ | CC-MAIN-2016-50 | refinedweb | 328 | 83.96 |
Web Services – Java client
January 15, 2012 2 Comments
Robert Mac asked some time ago for a Java client for my old post here: Web Services in Ruby, Python and Java. So here it is (sorry for the delay). It is the simplest possible solution: no jars or IDE needed, just a plain Java 6 JDK.
First we have to generate proxy classes for our Web Service (you need to pass WSDL location, as URL or path to file):
wsimport
wsimport is in the /bin folder of your JDK.
Now let’s use them:
public class WSClient {
    public static void main(String[] args) {
        Music music = new Music();
        String[] artists = music.listArtists();
        for (String artist : artists) {
            System.out.println(artist);
            Song[] songs = music.listSongs(artist);
            for (Song song : songs) {
                System.out.format("\t%s : %s : %d%s\n",
                    new Object[]{song.getFileName(), song.getArtist(), song.getSize(), "MB"});
            }
        }
    }
}
Now compile it and execute with classes generated by wsimport in classpath.
This is all. Simple, isn’t it?
Very interesting. I will surely try to implement something of a nature similar to this in my next project. Great post mate!
Pingback: JavaPins | https://jdevel.wordpress.com/2012/01/15/web-services-java-client/ | CC-MAIN-2017-43 | refinedweb | 183 | 68.87 |
C Programming/string.h/strcspn
strcspn is a function from the C standard library (header file string.h).
It searches a string for a certain set of characters.
The strcspn() function calculates the length of the initial segment of string 1 which does not contain any character from string 2.
Return Value
This function returns the index of the first character in string 1 that matches any character of string 2; equivalently, the length of the initial segment of string 1 that contains no characters from string 2.
Syntax
#include <string.h>

size_t strcspn( const char *str1, const char *str2 );
Example
#include <stdio.h>
#include <string.h>

int main(void)
{
    char s[20] = "wikireader007", t[11] = "0123456789";
    printf("The first decimal digit of s is at position: %d.\n",
           (int) strcspn(s, t));
    return 0;
}
Output:
The first decimal digit of s is at position: 10.
Details
Description
Take the following Entity:
public class TemporalEntity {
@Id
private Integer id;
@Temporal(TemporalType.TIMESTAMP)
private java.util.Date testDate;
.....
Take this row in the DB (Timestamp is used in the DB):
ID TESTDATE
1 2010-01-01 12:00:00.687701
Using a Date, I can not directly get the fractional seconds (i.e. .687701). However, I can use a formatter as follows and the milliseconds will be printed:
TemporalEntity t = em.find(TemporalEntity.class, 1);
SimpleDateFormat sdf = new SimpleDateFormat("yyyy-MM-dd HH:mm:ss.SSS");
System.out.println("sdf.format(t.getTestDate()) = " + sdf.format(t.getTestDate()).toString());
E.g:
sdf.format(t.getTestDate()) = 2010-01-01 12:00:00.688
Notice that the milliseconds are rounded. OpenJPA rounds the milliseconds by default. This rounding was not desirable to some users. For example, take the insert of this data:
INSERT INTO TemporalEntity (Id, testDate) VALUES(1, '9999-12-31 23:59:59.9999')
The Date inserted into the DB is .1 milliseconds from the next day of the next year. When a query is performed on this row, OpenJPA rounds the Date to the nearest millisecond which means the Date seen by the user represents the next day and year. A while back we added a system property ('roundTimeToMillisec', via
OPENJPA-2159) to OpenJPA to allow the milliseconds to be stripped off and thus avoid the rounding. So we avoid rounding, but in doing so the milliseconds are completely removed when the property is set. For example, when roundTimeToMillisec=false, the above date of '9999-12-31 23:59:59.9999' will not be rounded up. HOWEVER, the .999 will be stripped off. So, take our String format example above, the output would be:
sdf.format(t.getTestDate()) = 2010-01-01 12:00:00.000
A user may find it desirable to avoid the rounding, but may not like the fact that .688 is removed.
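To see the two behaviors side by side in isolation, the rounding versus truncation of the fractional .687701 can be reproduced with plain Java arithmetic (this snippet is illustrative only; it is not OpenJPA code):

```java
public class MillisRounding {
    public static void main(String[] args) {
        // the .687701 fractional seconds from the example row, as nanoseconds
        long nanosOfSecond = 687701000L;

        // what stripping (roundTimeToMillisec=false style) produces
        long truncated = nanosOfSecond / 1000000L;            // 687

        // what the default rounding produces
        long rounded = Math.round(nanosOfSecond / 1000000.0); // 688

        System.out.println("truncated = " + truncated);
        System.out.println("rounded = " + rounded);
    }
}
```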
This JIRA will be used to allow a user to avoid rounding of milliseconds, but will allow the milliseconds to be retained.
Thanks,
Heath Thomann
Activity
Commit 1564121 from Jody Grassel in branch 'openjpa/branches/2.3.x'
OPENJPA-2453: Add support to retain milliseconds of 'un-rounded' Date field.
Commit 1548277 from Jody Grassel in branch 'openjpa/branches/2.2.x'
OPENJPA-2453: Add support to retain milliseconds of 'un-rounded' Date field
Commit 1548261 from Jody Grassel in branch 'openjpa/branches/2.2.1.x'
OPENJPA-2453: Add support to retain milliseconds of 'un-rounded' Date field
Commit 1548248 from Jody Grassel in branch 'openjpa/branches/2.1.x'
OPENJPA-2453: Add support to retain milliseconds of 'un-rounded' Date field
Commit 1566872 from Heath Thomann in branch 'openjpa/trunk'
OPENJPA-2453: Add support to retain milliseconds of 'un-rounded' Date field. | https://issues.apache.org/jira/browse/OPENJPA-2453 | CC-MAIN-2017-43 | refinedweb | 468 | 58.58 |
On Fri, Apr 6, 2018 at 8:22 PM, Peter Geoghegan <p...@bowt.ie> wrote:
> On Fri, Apr 6, 2018 at 10:20 AM, Teodor Sigaev <teo...@sigaev.ru> wrote:
> > As far I can see, there is no any on-disk representation differece for
> > *existing* indexes. So, pg_upgrade is not need here and there isn't any new
> > code to support "on-fly" modification. Am I right?
>
> Yes.
>
> I'm going to look at this again today, and will post something within
> 12 hours. Please hold off on committing until then.

Thank you. Thinking about that again, I found that we should relax our
requirements for "minus infinity" items, because pg_upgraded indexes don't
have any special bits set for those items. What do you think about applying
the following patch on top of v14?

diff --git a/src/backend/access/nbtree/nbtsearch.c b/src/backend/access/nbtree/nbtsearch.c
index 44605fb5a4..53dc47ff82 100644
--- a/src/backend/access/nbtree/nbtsearch.c
+++ b/src/backend/access/nbtree/nbtsearch.c
@@ -2000,8 +2000,12 @@ _bt_check_natts(Relation index, Page page, OffsetNumber offnum)
 	}
 	else if (!P_ISLEAF(opaque) && offnum == P_FIRSTDATAKEY(opaque))
 	{
-		/* Leftmost tuples on non-leaf pages have no attributes */
-		return (BTreeTupGetNAtts(itup, index) == 0);
+		/*
+		 * Leftmost tuples on non-leaf pages have no attributes, or haven't
+		 * INDEX_ALT_TID_MASK set in pg_upgraded indexes.
+		 */
+		return (BTreeTupGetNAtts(itup, index) == 0 ||
+				((itup->t_info & INDEX_ALT_TID_MASK) == 0));
 	}
 	else
 	{

------
Alexander Korotkov
Postgres Professional: The Russian Postgres Company
OTPSkipper
I think just keeping the iPad active will be enough. I don't really need it to run in the background. Thanks!
OTPSkipper
Got it. There is setter code in the library and I was overriding it. Here is the fixed code:
```python
from scene import Scene, SpriteNode, run, EffectNode

class pt(SpriteNode):
    def __init__(self, **kargs):
        SpriteNode.__init__(self, 'plf:HudX', **kargs)
        self.scale = 0.5

class plot(SpriteNode):
    def __init__(self, **kargs):
        SpriteNode.__init__(self, **kargs)
        self.anchor_point = (0, 0)
        self.color = '#8989ff'

class test(Scene):
    def setup(self):
        clip = EffectNode(parent=self)
        clip.crop_rect = (0, 0, self.size.x / 2, self.size.y / 2)
        pane = plot(parent=clip)
        pane.size = (self.size.x / 2, self.size.y / 2)
        clip.position = (50, 50)
        for x in (50, 100, 150, 200, 250, 300, 350, 400):
            p = pt(parent=pane)
            p.position = (x, x)

if __name__ == '__main__':
    tst = test()
    run(tst)
```
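The pitfall generalizes beyond the scene module: a class-level attribute in a subclass shadows an inherited property, so the library's setter logic never runs. A minimal reproduction (plain Python, no Pythonista APIs involved):

```python
class Node:
    def __init__(self):
        self._scale = 1.0

    @property
    def scale(self):
        return self._scale

    @scale.setter
    def scale(self, value):
        # imagine library bookkeeping happening here
        self._scale = value

class Bad(Node):
    scale = 0.5  # class attribute replaces the inherited property entirely

class Good(Node):
    def __init__(self):
        Node.__init__(self)
        self.scale = 0.5  # goes through the inherited setter

print(Bad()._scale)   # 1.0 -- the setter never ran
print(Good()._scale)  # 0.5
```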
OTPSkipper
Thanks. That is working for me.
OTPSkipper
It was something stupid. Spelled the reference to the file handle wrong.
Thanks!!
OTPSkipper
Here is my test case:
from scene import Scene, SpriteNode, run, Point

class pt(SpriteNode):
    def __init__(self, **kargs):
        SpriteNode.__init__(self, 'plf:HudX', **kargs)
    scale_x = 0.5
    scale_y = 0.5

class plot(SpriteNode):
    anchor_point = Point(0, 0)
    color = '#8989ff'

class test(Scene):
    def setup(self):
        pane = plot(parent=self)
        pane.size = (self.size.x / 2, self.size.y / 2)
        pane.position = (50, 50)
        for x in (50, 100, 150, 200, 250, 300):
            p = pt(parent=pane)
            p.position = (x, x)

if __name__ == '__main__':
    tst = test()
    run(tst)
Questions:
- Why doesn't the color work on my plot class?
- Why isn't the plot 50 px from the lower left corner?
- Why are the x gifs not half size?
- Why doesn't the plot Sprite clip its children?
OTPSkipper
Simple enough. Logfile_out is set to "log.txt"
if self.logfile_out is not None:
    self._of = open(self.logfile_out, mode='w', encoding='utf-8')
It has to be something stupid.
OTPSkipper
Hmmm. Tried that and it didn't seem to work. No file showed up.
OTPSkipper
How can I save a log file to my pythonista file system?
OTPSkipper
I am writing an app that receives data over a tcp socket and displays the data graphically with some user interaction too.
I can't just call "run" for scene support because I need to poll the socket too.
Will threads work? | https://forum.omz-software.com/user/otpskipper | CC-MAIN-2021-39 | refinedweb | 403 | 80.38 |
Great stuff! The names are much better now, as is the substance.
I need to study SAX2 more carefully to understand how the ContentHandler
interfaces work, but meanwhile, some comments on the rest:
You've put getAssociatedStylesheet() as a method on Processor. I can't see
what the "given document" is.
OutputProperties: on the methods for handling the list of CDATA elements, I
can't see how namespace prefixes are supposed to be handled. Again we could
say that the names are expressed as uri^local-name.
Transformation: setURIResolver() here can only specify the resolver for
document(), it's too late for xsl:import and xsl:include. Need a separate
setURIResolver() on Processor (presumably) to define the one used for the
stylesheet, this could also act as a default for the Transformation.
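For what it's worth, that split can be sketched with the names that eventually shipped in JAXP/TrAX (modern names, not this draft's, so treat it purely as an illustration): the factory-level resolver serves xsl:import/xsl:include while the stylesheet is compiled, and the per-transformation resolver serves document():

```java
import java.io.StringReader;
import java.io.StringWriter;
import javax.xml.transform.Transformer;
import javax.xml.transform.TransformerFactory;
import javax.xml.transform.stream.StreamResult;
import javax.xml.transform.stream.StreamSource;

public class ResolverSplit {
    public static void main(String[] args) throws Exception {
        TransformerFactory factory = TransformerFactory.newInstance();
        // Used while compiling stylesheets: xsl:import / xsl:include.
        factory.setURIResolver((href, base) -> null); // null = default resolution

        // Identity transform here; a stylesheet Source could be passed instead.
        Transformer t = factory.newTransformer();
        // Used at transform time: the document() function.
        t.setURIResolver((href, base) -> null);

        StringWriter out = new StringWriter();
        t.transform(new StreamSource(new StringReader("<a>hi</a>")),
                    new StreamResult(out));
        System.out.println(out);
    }
}
```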
I'm not sure whether it's right that Transformation.setOutputProperties()
should override xsl:output, or whether it should just provide defaults for
unspecified properties. The conformance rules don't constrain us, it's more
a matter of principle. Shame that xsl:output properties aren't AVT's. My
instinct would be to say it's equivalent to importing another xsl:output
statement with lower import precedence than those in the stylesheet proper.
I hate passing such decisions to the user, but perhaps there's a case here
for both options, for the properties to have either highest precedence or
lowest precedence.
I'm still confused about why setEncoding() is on both Result and
OutputProperties, and how they relate.
If you use setContentHandler() on Result, are the OutputProperties ignored,
or can they be passed to the ContentHandler? I think it's important that a
user-written output method should have access to the OutputProperties, just
as the system-supplied methods do.
Typos:
in Result, some of the javadoc comments are wrong.
in Result, there is an instance variable filename which is unused.
> Rather, it should be a Transformation API.
> Therefore I am proposing renaming it to be the "Simple API for
> Transformations" (SAT for now)
Funny that, I was going to propose TAX, Transformation API for XML.
Mike | http://mail-archives.apache.org/mod_mbox/cocoon-dev/200002.mbox/%3C93CB64052F94D211BC5D0010A800133101FDEA1B@wwmess3.bra01.icl.co.uk%3E | CC-MAIN-2015-40 | refinedweb | 350 | 63.39 |
Instance Constructors (C# Programming Guide)
Instance constructors are used to create and initialize any instance member variables when you use the new expression to create an object of a class. To initialize a static class, or static variables in a non-static class, you must define a static constructor. For more information, see Static Constructors (C# Programming Guide).
The following example shows an instance constructor:
This instance constructor is called whenever an object based on the CoOrds class is created. A constructor like this one, which takes no arguments, is called a default constructor. However, it is often useful to provide additional constructors. For example, we can add a constructor to the CoOrds class that allows us to specify the initial values for the data members:
This allows CoOrds objects to be created with default or specific initial values, like this:
If a class does not have a constructor, a default constructor is automatically generated and default values are used to initialize the object fields. For example, an int is initialized to 0. For more information on default values, see Default Values Table (C# Reference). Therefore, because the CoOrds class default constructor initializes all data members to zero, it can be removed altogether without changing how the class works. A complete example using multiple constructors is provided in Example 1 later in this topic, and an example of an automatically generated constructor is provided in Example 2.
Instance constructors can also be used to call the instance constructors of base classes. The class constructor can invoke the constructor of the base class through the initializer, as follows:
In this example, the Circle class passes values representing radius and height to the constructor provided by Shape from which Circle is derived. A complete example using Shape and Circle appears in this topic as Example 3.
The following example demonstrates a class with two class constructors, one without arguments and one with two arguments.
class CoOrds
{
    public int x, y;

    // Default constructor:
    public CoOrds()
    {
        x = 0;
        y = 0;
    }

    // A constructor with two arguments:
    public CoOrds(int x, int y)
    {
        this.x = x;
        this.y = y;
    }

    // Override the ToString method:
    public override string ToString()
    {
        return (String.Format("({0},{1})", x, y));
    }
}

class MainClass
{
    static void Main()
    {
        CoOrds p1 = new CoOrds();
        CoOrds p2 = new CoOrds(5, 3);

        // Display the results using the overriden ToString method:
        Console.WriteLine("CoOrds #1 at {0}", p1);
        Console.WriteLine("CoOrds #2 at {0}", p2);
        Console.ReadKey();
    }
}
/* Output:
    CoOrds #1 at (0,0)
    CoOrds #2 at (5,3)
*/
In this example, the class Person does not have any constructors, in which case, a default constructor is automatically provided and the fields are initialized to their default values.
public class Person
{
    public int age;
    public string name;
}

class TestPerson
{
    static void Main()
    {
        Person person = new Person();
        Console.WriteLine("Name: {0}, Age: {1}", person.name, person.age);

        // Keep the console window open in debug mode.
        Console.WriteLine("Press any key to exit.");
        Console.ReadKey();
    }
}
// Output:  Name: , Age: 0
Notice that the default value of age is 0 and the default value of name is null. For more information on default values, see Default Values Table (C# Reference).
The following example demonstrates using the base class initializer. The Circle class is derived from the general class Shape, and the Cylinder class is derived from the Circle class. The constructor on each derived class is using its base class initializer.
abstract class Shape
{
    public const double pi = Math.PI;
    protected double x, y;

    public Shape(double x, double y)
    {
        this.x = x;
        this.y = y;
    }

    public abstract double Area();
}

class Circle : Shape
{
    public Circle(double radius)
        : base(radius, 0)
    {
    }

    public override double Area()
    {
        return pi * x * x;
    }
}

class Cylinder : Circle
{
    public Cylinder(double radius, double height)
        : base(radius)
    {
        y = height;
    }

    public override double Area()
    {
        return (2 * base.Area()) + (2 * pi * x * y);
    }
}

class TestShapes
{
    static void Main()
    {
        double radius = 2.5;
        double height = 3.0;

        Circle ring = new Circle(radius);
        Cylinder tube = new Cylinder(radius, height);

        Console.WriteLine("Area of the circle = {0:F2}", ring.Area());
        Console.WriteLine("Area of the cylinder = {0:F2}", tube.Area());

        // Keep the console window open in debug mode.
        Console.WriteLine("Press any key to exit.");
        Console.ReadKey();
    }
}
/* Output:
    Area of the circle = 19.63
    Area of the cylinder = 86.39
*/
For more examples on invoking the base class constructors, see virtual (C# Reference), override (C# Reference), and base (C# Reference). | https://msdn.microsoft.com/en-US/library/k6sa6h87(v=vs.110).aspx | CC-MAIN-2018-17 | refinedweb | 732 | 56.05 |
From: Martin Schürch (mschuerch_at_[hidden])
Date: 2000-03-07 09:27:42
Thanks for the feedback.
When I read the other mails about the interval library, I think it would be
hard to combine its actual main idea with the following ones. But anyway,
here they are:
Jens Maurer wrote:
> Martin Schürch wrote:
> > What I found in both areas often very useful are comparison operators
> > only with a real. e.g. [a,b] < 3.14.
>
> This is implemented. Real numbers automatically convert to intervals,
> so [3,4] < 5 says "true" and [3,4] < 3.5 says "false". Just to show
> the usefulness of <boost/operators.hpp> and save a few clock cycles, I
> also implemented these comparisons directly, so no conversion is
> necessary.
But are they exactly the same?
e.g. [1,2] < 2.0 is false with the direct implementation:
template<class RealType, class Traits>
inline bool operator<(const interval<RealType, Traits>& x, double y)
{
return x.upper() < y;
}
but is true when written like [1,2] < interval(2), because the constructor
introduces some tolerance.
I think that when intervals are to be used in other areas, this loss of
exact control is one of the main problems.
>
>
> > Concept
> > 1: [1,100] < [20,60] <-> comparison is "set" - like
>
> Use the provided "interval::contains" member function for that.
ok
>
>
> > 2: [0,1] <[3,7] <-> Like your interval "<"
>
> I intended my interval class to be a drop-in replacement for "double"
> and give sensible results most of the time. That means, my interval "<"
> is defined so that all numbers of the first interval must be lower than
> the second.
>
> Please speak up what else you need. I'm most interested in people
> actually trying to *use* this.
1) scale relative to a given value
intervall scale( const intervall& rng, double mirrorPt, double factor )
{
    return intervall(factor*(rng.lower()-mirrorPt), factor*(rng.upper()-mirrorPt));
}
2) special case for scale-point == median()
intervall symmetric_scale( const intervall& rng, double factor )
{ return scale(rng, centre(rng), factor); }
3) bool do_overlap( I1, I2);
4) intervall combine(const intervall& r1, const intervall& r2);
precondition : do_overlap(r1,r2)
5) intervall intersect(const intervall& r1, const intervall& r2);
precondition : do_overlap(r1,r2)
6) Not directly related:
What I found a very big help in my work, is to have a concept of a uniform
intervall intersection.
I have attached (not boost compatible) the code.
e.g.
intervall I(0, 1.0);
UnifIntervallSplit uis = UnifIntervallSplit::create_co(I, 150);
// represents a sequence (N=150) of uniform distance values, where uis[0] = 0
// (because of the "c") and uis[size()] would be 1 (not index size()-1,
// because of the "o")
for (int i = 0; i < uis.size(); ++i) {
    // do something
    cout << uis[i];
}
_co means closed open. There are also _cc, _oc, _oo
The main idea is to settle once and for all whether the last (or first)
point lies on the interval border or not.
The concept could also be expanded to support input_iterators
>
> > more specific functionality. Especially comparison is one of the often
> > disturbing points. (e.g. in your code : !(I1 < I2) && !(I2
> > < I1) does not imply (I1== I2) can lead to problems)
>
> At least it's documented :-) Anyway, is there a useful definition
> for interval "<" that does not have the above deficiency? The
> "set_less_eq" also has the problem, for example with [1,3] <? [2,4].
Yes, it also has the same problems, but probably the user does not expect as
much from a function called "set_less_eq" as from "<".
>
> Of course we could define a "<" which compares the widths of the
> intervals, but this would probably only come in handy for map<>
> and set<>. And for these, we can specialized std::less.
I think different things are needed in different situations. You are of
course right that other behavior can be implemented with additional
functions. But the greatest advantage of the operators is their easy
notation, and that's also a great trap.
So why not separate the different concepts by different namespaces?
e.g.
{
using namespace boost::interval_error_arithmetic;
...
// your behaviour : your <, *=, ...
}
{
using namespace boost::interval_centre_based;
intervall I(0.1,0.5);
I *= 2.0; // [0.1, 0.5] -> [-0.1, 0.7]
I += 7; // shift of the range without introducing any tolerance
}
Martin
Boost list run by bdawes at acm.org, gregod at cs.rpi.edu, cpdaniel at pacbell.net, john at johnmaddock.co.uk | https://lists.boost.org/Archives/boost/2000/03/2412.php | CC-MAIN-2020-16 | refinedweb | 725 | 56.66 |
86 Reader Comments
Yea because obviously there are 2 people, see that guy in the background?
Disclosure: Left there already myself (cancelled my subscription > a month ago) and I don't think there's any "new" content that would be able to draw me back in.
-Erickson
If you knew this was going to happen, even going so far as to label it a classic MMO problem, then why in the hell were you not prepared with transfers or mergers after 4 months?
Were you so certain that this game would grow instead of decline that you had no plan B?
And as for his lies about subscriptions: you don't give your customers a free month unless something has seriously jumped the shark.
The trick is to release significant/quality content at a fast enough frequency that people stay subscribed in anticipation of the new stuff. That's why Blizzard has been releasing content in "tiers" at fairly regular intervals between expansions. It keeps players better engaged (and subscribed).
Last edited by chordoflife on Tue Apr 24, 2012 11:01 am
Not necessarily. I'm playing less now because I'm splitting my time between SWTOR and Star Trek Online. I might only be on every other night (cutting my server time in half) but I'm not quitting any time soon. I'm still working on the 4 Republic class stories, after which I'll still have the 4 Empire stories to play through.
Last edited by DaveSimmons on Tue Apr 24, 2012 10:56 am
I don't find this article surprising, or how they responded. Did anyone really expect them to own up and admit things aren't going as well as they planned. Making any statement verifying player/sub decline would just add more fuel to the fire. One only has to look at how WAR was handled to be able to predict the future.
Last edited by Incendium on Tue Apr 24, 2012 10:59 am.
Same for me. I hit 50, PVP was most stale and boring PVP ive ever partaken in. And raids were pretty damn boring.
Now. Wheres that Guild Wars 2 coverage?? Beta weekend this weekend!!
People log in less to established games than new ones, that's just how it works. It's a rare person who can keep up 40-60 hour a week playtimes for months on end. The only issue is that server populations were "set" based on the new game "OMG I have to play!" crush at the beginning, and now that there is a more sustainable amount of concurrent users, everyone is too spread out. Transfers, and probably shutting down some of the least populated servers will fix that. They are doing the Australian transfers this week, so more general transfers are not too far away.
They know how many people have unsubscribed, regardless of how much game time they have left.
Also, for an MMO, they sure seem to make it difficult to actually get a group of people into an FP. So and so is on another quest stage, so and so isn't eligible for this quest, umpteen different NPC to talk to before you can start an FP...by the time that is all sorted, quests reset/dropped, half the group has gone to do something else and the other half is annoyed enough that they are considering it as well..
Last edited by Xavin on Tue Apr 24, 2012 11:05 am.
Anarchy Online, if anyone recall that game (it is still running last time i checked), also have such a system. You can play with some dials and the system spits out some missions that you can pick from. Each play out on a random map accessed via a location that may require some travel from where your currently located.
For the majority of the FPs, you just need to talk to the guy at the door, the breadcrumb and framing quests are optional. There are a few exceptions for the first run through specific FPs, but those are well known at this point.
The only FP that I can think of that requires a breadcrumb is the Boarding Party/Foundry sequence and the start of that is by the place where you have to go to travel to the other ship to get to the FP entrance itself and the guy to talk to is just 50' from the entrance. If there's a conflict because someone else left in the middle of a different instance, just resetting works. It sounds like you got tangled up with people who didn't have a clue how any of that stuff works and fumbled around for a while. The worst case scenario is that it takes about a minute to sort out..
The dailies are my biggest complaint. My first character was a healer spec Sage and the dailies were far from painless. From encounters that shredded Qyzen to large encounters, the only way I could do them efficiently, or in some cases at all, was to group with other players. It sucked so bad that I haven't played my Sage in almost two months because of it. I know they adjusted some of them in 1.2 but I haven't been back to check... I'm having too much fun on my Assassin, Marauder, and Operative.
They know who's subscribed and who's not. However there is some amount of inertia when it comes to unsubbing from a game and I suspect that many people don't actually unsubscribe until they realise that they don't play the game for more than a month at a time. Some people I know just keep themselves subscribed even if they don't really play - they log in every couple of weeks to do auctions and read mail or something but they don't really think about how they need to actually unsubscribe.
Also, SW:TOR had tons. and tons, and tons of servers at launch. The distribution was such that there were some really really heavy servers, a lot of medium servers, and a handful of light servers. I think it's annoying that somehow server mergers have been seen as some kind of kiss of death on MMORPGs because they would often solve tons of problems.
Really? The non-heroic daily mobs are pretty much tissue paper on the Imperial side.
> Really? The non-heroic daily mobs are pretty much tissue paper on the Imperial side.
One of the ones on Belsavis required you to kill some stuff outside, some lone silvers/tough mobs, like within 100yds of where you got the quest, that had a point-blank AOE that would kill Qyzen in two shots (about 8 seconds). It was bad enough that I couldn't do anything but try to keep Qyzen alive while he tried to kill the target. If I screwed up and got agro, we both died because I couldn't keep up with healing the both of us. Usually, Qyzen couldn't kill it fast enough before I ran out of Force. If something happened and I agro'd two of them, it was a run away or die situation. Even me and my friend with his Shadow had a hard time with them. We'd end up with about 20% health each at the end of fighting one of them.
I think we had to kill like 6 of them or something. It's been so long that I can't remember. It's probably THE daily that broke me on that character. I ran the dailies about a half dozen times and basically said "no more dailies". I haven't done any of those on the Imperial side. The only PVE dailies I run are the daily "do a hard mode FP" ones..
Yeup. That plus a pretty interesting system for providing player-written content, including a mechanism for discoveries, reviews, and a "tip jar", make me think it's an MMO that might possibly not be doomed.
And even if people haven't quit yet, they're not logging in. That's the step before they decide its time to quit. Considering a major patch just hit and people still aren't playing? That's an indicator of a downward trend.
These days i wonder why any MMO i set up with a fixed number of servers, rather than using something like Amazon EC2 to spin up or down servers as the login numbers climb and fall.
As for the whole stock split thing, each time i hear about it being used for anything like a indicator of actual business performance i get shivers.
Wait until 1.2 novelty wears off and april subs end...
No discounts for multi month subscriptions also was a bad move. WoW drops to $12 if you pay 6 months at a time, but there was virtually no benefit to paying for more time and got no discounts. EA's greed shines through.
Regardless: first, people leave. Then, people cancel.
You must login or create an account to comment. | http://arstechnica.com/gaming/2012/04/bioware-old-republic-subscribers-not-dropping-despite-lighter-server-loads/?comments=1&post=22790714 | CC-MAIN-2014-52 | refinedweb | 1,528 | 79.5 |
Most top-level lists on netbeans.org (like nbdev, nbusers, etc) are
"discuss" lists, in the list server terminology. Discuss lists are lists to
which you must be subscribed to post messages to. Any messages coming from an
email address that is not subscribed are bounced to moderators for evaluation.
This is a very good way of stopping spam from getting through to the list and
hence all subscribers. It's not worth the effort for a spammer to actually
subscribe to a list before spamming it. The spams are bounced to the moderators
where they can be stopped and junked before being sent on to the list.
However this kind of moderation can also delay legitimate posts from real
NetBeans users. All of the following scenarios will result in a valid post being
bounced to the moderators for approval :
As described in the FAQ, moderation will delay these valid posts.
The more moderators there are, the less delay there will be, and of course,
the less work for other moderators.
If you'd like to help moderate, here's how.
Who Should Apply ?
Moderation is a serious responsibility. Some of the netbeans.org lists are
very large, with thousands of subscribers. Only people that are known on the
lists and long term subscribers can be considered as moderators. This is to
prevent for eg an unknown spammer signing up, spamming all lists with some junk
or a virus, and then using her moderator position to approve all those spams.
If you think you qualify, and you'd like to help moderate, please
get in touch
.
Setting up your mail client
A shared Moderator account where all moderation messages are sent has
been set up for moderators use. By configuring your mail client to access this
mailbox, you can see the queue of messages waiting for moderation. If you
approve or reject those msgs in the queue, and delete them from the mailbox, the
next moderator to log in will only see any new messages that have not yet been
processed.
Following are instructions for setting up Mozilla and Thunderbird mail clients
to access this shared mailbox. If you are behind a firewall, also see
the following section.
Thunderbird 1.0.* / Mozilla 1.x / Netscape 7.x
Type of Acct : Email
Enter your real name and email as normal
Select IMAP
Server name is pop.netbeans.info
Username : moderator
Account Name : whatever you like, eg NetBeans Moderator
Click OK. You should now see the account listed on the left.
When I delete a msg : Move it to Trash
Check Expunge Inbox on Exit
DO NOT CHECK Empty Trash on Exit
Check Show only subscribed folders
UNCheck Server supports folders than contain sub-folders and msgs
Leave Personal / Public / Other Namespaces fields blank
UNCheck Allow server to override these namespaces
When sending msgs : Using the "Sent" on Moderator option will not work, you should check the
"Other" radio button, and via the combo-box choose Moderator -> Inbox -> Sent
Click OK to close the Account panel. You should see your new NetBeans
Moderator account listed under your normal account and folder list on the left.
You're done. See OK, What Next below to start moderating.
If you are behind a corporate firewall it will not be possible to connect
directly to netbeans.info mail server. In this case you may be able to set up a
port forwarding SSH tunnel to the mail server. You will need a login
account on some SSH server, somewhere, it doesn't matter where.
Setting up email client
Steps are almost the same as described in the previous section Setting up your mail client, except :
Setting up the SSH tunnel
ssh -L 4500:pop.netbeans.info:143
user
@
host
You're done. As long as this tunnel connection is open, you will be able to
connect to the netbeans.info mail server. See OK, What Next below
to start moderating.
OK, What Next ?
Once your client is configured, and you can see the contents
of the moderator mailbox, you can start moderating.
%%% Start comment
%%% End comment
Anything you write between these lines will be sent to the
original poster, so you could explain why you are disapproving
the msg and suggest an alternative list for them to post to.
Here's an example I use to reject unsubscription requests, feel free to
copy-paste it as a template :
%%% Start comment
Hello,
I'm a moderator for the netbeans.org mailing lists. I've just rejected the
"unsubscribe" message you posted recently, as such messages should not go
the lists themselves.
There are unsubscribe links on the main lists page :
If you're having trouble, please see the the list FAQ, and the unsubscribe
guide :
If you're still having trouble please get back to me with details.
Thanks, and
Best Regards,
--
%%% End comment
That's it. Thank you for helping! Remember, if in doubt, just leave the
message along, someone else can evaluate it. Any problems or questions
please contact
webmaster@netbeans.org
Spam
It wont take long before you realise the huge volume of spam that moderating
stops getting to the lists. That's good for the lists, but of course more messages
for moderators to review.
More content to come. | https://netbeans.org/community/lists/moderate.html | CC-MAIN-2015-32 | refinedweb | 877 | 63.8 |
Hi]
If you would like to receive an email when updates are made to this post, please register here
RSS
Very cool, and one of the clearest explanations I've seen of this technique.
I noticed that you declare and populate a local variable "stride" but don't seem to actually do anything with it...is that an omission that actually makes a difference or did it just turn out not to be needed?
Nice catch Kevin. The stride value is not needed there. However, it is good to use if you are manipulating smaller portions of the image - or you care about the order the pixels are loaded in. Stride returns the size in bytes of a row of pixels. It also tells you if the pixels are loaded top down or bottom up (positive stride is top down, and negative stride is bottom up IIRC).
are wpf namespaces available on the compact framework, i haven't had time to explore?
in wpf, bitmapsource takes raw data as input and any filters can be applied without having to use unsafe methods. however we can also expose the underlying wic calls via unsafe methods if needed. using the byte array for processing is very fast and rendering in wpf is also high performance as well.
Bruce, WPF isn't available on the Windows CE or Windows Mobile platform(s) and hence the Compact Framework doesn't support it.
Silverlight support for Windows Mobile however is aiming for public preview later this year.
Marshal.Copy is very expensive, why don't you store the pointer in an instance variable instead of copying all of the data.
More posts like this would be much appreciated. Any tricks and improvements that developers don't even realize exist - that's just the kind of thing I need to see. I'll keep subscribing to this RSS feed, just keep coming with the optimized code!
Great article! However, it seems your doing some double check locking on your LockPixels and UnlockPixels methods. Would it not be better to either run synchronized ([MethodImpl(MethodImplOptions.Synchronized)]) or lock(this){...}?
Very informing. thanx
Please post more often here.
I am subscribed for some time now but I was expecting this to be a more active blog!
What's wrong with you people???
:) just kidding
I bet you had a lot of fun working with Assembly :)
Very good article, thanks.
I wonder how this code perdorms compared to the C++ variant. I guess if I could use BitBlt, etc. directly (and the Invert-Operation with it) it would be much faster - Is this image data internally a DDB oder a DIB - if it is the former one could use BitBlts directly?
Hi,
I tried this code out (VS 2005, WinMoPro6 device project)
I have a very simple setup: A PictureBox (docked full in the form) as the image container, and the PictureBox.Image = new Bitmap(screen width,screen height).
It takes about a second to invert.
(both on the emulator, and the actual device)
Any ideas why so slow?
thanks
This class has a few bugs in it. (Mainly using width instead of stride).
Because it got me started I am posting my fixed version. This will work with bitmaps that have a stride that is not exactly = 3 * the width.
public class FastBitmap
{
private Bitmap image;
private BitmapData bitmapData;
private int height;
private int width;
private byte[] rgbValues;
bool locked = false;
public int Height
{
get { return this.height; }
}
public int Width
get { return this.width; }
public FastBitmap(int x, int y)
width = x;
height = y;
image = new Bitmap(x, y);
public FastBitmap(Bitmap bitmap, bool createCopy)
width = bitmap.Width;
height = bitmap.Height;
if (createCopy == true)
image = new Bitmap(bitmap);
else
image = bitmap;
public byte[] GetAllPixels()
return rgbValues;
public void SetAllPixels(byte[] pixels)
rgbValues = pixels;
public Color GetPixel(int x, int y)
int blue = rgbValues[(y * bitmapData.Stride + (x * 3))];
int green = rgbValues[(y * bitmapData.Stride + (x * 3)) + 1];
int red = rgbValues[(y * bitmapData.Stride + (x * 3)) + 2];
return Color.FromArgb(red, green, blue);
public void SetPixel(int x, int y, Color cIn)
rgbValues[(y * bitmapData.Stride + (x * 3))] = cIn.B;
rgbValues[(y * bitmapData.Stride + (x * 3)) + 1] = cIn.G;
rgbValues[(y * bitmapData.Stride + (x * 3)) + 2] = cIn.R;
public static implicit operator Image(FastBitmap bmp)
return bmp.image;
public static implicit operator Bitmap(FastBitmap bmp) numBytes = bitmapData.Stride * image.Height;
rgbValues = new byte[numBytes];
Marshal.Copy(ptr, rgbValues, 0, numBytes);
public void UnlockPixels()
if (!locked)
locked = false;
Marshal.Copy(rgbValues, 0, bitmapData.Scan0, image.Width * image.Height * 3);
image.UnlockBits(bitmapData);
}
Maybe I'm misunderstanding something here, but both versions posted seem to be broken.
Instead of the inversion algorithm, I tried simply creating a new bitmap and setting all pixels to white.
private void clearImage() {
bmp.LockPixels();
byte[] pixels = bmp.GetAllPixels();
for (int i = 0; i < pixels.Length; i++) {
pixels[i] = (byte)(255);
bmp.SetAllPixels(pixels);
bmp.UnlockPixels();
canvas.Image = bmp;
If I set the width of the bitmap to something that isn't a multiple of 4, there is a single almost-full scan line of black pixels at the bottom of the image.
Any fixes?
You are right Klay. I left in one reference to using Width where it should not have been (in the UnlockPixels() method). This caused it to not copy all of the changed pixels back into to image.
Here is the correct version of the class.
Marshal.Copy(rgbValues, 0, bitmapData.Scan0, bitmapData.Stride * image.Height);
Hey people...
I was using both RoshanK and Stephen classes and you both missed one thing.
specially RoshanK, for example:
public void SetPixel(int x, int y, Color cIn)
rgbValues[(y * image.Width + x) * 3] = cIn.B;
rgbValues[(y * image.Width + x) * 3 + 1] = cIn.G;
rgbValues[(y * image.Width + x) * 3 + 2] = cIn.R;
You make three references to image.Width. This is a very very slow process. If you want some optimization you could use:
int w = image.Width;
rgbValues[(y * w + x) * 3] = cIn.B;
rgbValues[(y * w + x) * 3 + 1] = cIn.G;
rgbValues[(y * w + x) * 3 + 2] = cIn.R;
which is much faster, try it! :)
but... you already have a global variable for Width. so you could use instead:
int w = width;
which is even faster.
In Stephen's class he uses bitmapData.Stride which is also slow.. so i create a global variable for that and I define is when we lock the pixels, and use it just like I explained before.
I added some other classes too, in order to access and set the pixels faster, so below is my version of this class.
These are some common optimization stuff and they work very effectively although they aren't very intuitive. Hope it was useful!
using System;
using System.Runtime.InteropServices;
using System.Drawing;
using System.Drawing.Imaging;
private int stride;
int s = stride;
int blue = rgbValues[(y * s + (x * 3))];
int green = rgbValues[(y * s + (x * 3)) + 1];
int red = rgbValues[(y * s + (x * 3)) + 2];
public byte[] GetPixelValues(int x, int y)
int w = width;
byte[] c = new byte[3];
c[2] = rgbValues[(y * s + (x * 3))];
c[1] = rgbValues[(y * s + (x * 3)) + 1];
c[0] = rgbValues[(y * s + (x * 3)) + 2];
return c;
rgbValues[(y * s + (x * 3))] = cIn.B;
rgbValues[(y * s + (x * 3)) + 1] = cIn.G;
rgbValues[(y * s + (x * 3)) + 2] = cIn.R;
public void SetPixel(int x, int y, byte r, byte g, byte b)
rgbValues[(y * s + (x * 3))] = b;
rgbValues[(y * s + (x * 3)) + 1] = g;
rgbValues[(y * s + (x * 3)) + 2] = r;
public void LockBitmap()
LockBitmap(new Rectangle(0, 0, width, height));
private void LockBitmap(Rectangle area)
if (locked) return;
stride = bitmapData.Stride;
int numBytes = bitmapData.Stride * height;
public void UnlockBitmap()
if (!locked) return;
Marshal.Copy(rgbValues, 0, bitmapData.Scan0, bitmapData.Stride * image.Height);
Another thing. I found some months ago a class that was somewhat like this one, which I also changed to become faster and more useful
I've added the implicit operator too (nice touch, RoshanK ;) )
I made this class completely compatible with the previous updated version of RoshanK's class, so you can just change the class type from FastBitmap to UnsafeBitmap and allow unsafe code in the project build parameters, and test it in your program. I think its a bit faster, it doesn't use Marshal, and it uses pointers.
using System.Collections.Generic;
using System.Text;
namespace Utility
public unsafe class UnsafeBitmap
Bitmap bitmap;
int width;
public int Width, Height;
BitmapData bitmapData = null;
Byte* pBase = null;
public UnsafeBitmap(Bitmap bitmap)
{
this.bitmap = new Bitmap(bitmap);
Width = bitmap.Width;
Height = bitmap.Height;
}
public UnsafeBitmap(int width, int height)
this.bitmap = new Bitmap(width, height, PixelFormat.Format24bppRgb);
public void Dispose()
bitmap.Dispose();
public Bitmap Bitmap
get
{
return (bitmap);
}
public struct PixelData
public byte blue;
public byte green;
public byte red;
public static bool operator ==(PixelData a, PixelData b)
// If both are null, or both are same instance, return true.
if (System.Object.ReferenceEquals(a, b))
{
return true;
}
// If one is null, but not both, return false.
if (((object)a == null) || ((object)b == null))
return false;
// Return true if the fields match:
return a.red == b.red && a.green == b.green && a.blue == b.blue;
public static bool operator !=(PixelData a, PixelData b)
return !(a == b);
public override bool Equals(object obj)
return base.Equals(obj);
public override int GetHashCode()
return base.GetHashCode();
private Point PixelSize
Size st = bitmap.Size;
RectangleF bounds = new RectangleF(0, 0, st.Width, st.Height);
return new Point((int)bounds.Width, (int)bounds.Height);
public void LockBitmap()
RectangleF boundsF = new RectangleF(0, 0, bitmap.Width, bitmap.Height);
Rectangle bounds = new Rectangle(0, 0, bitmap.Width, bitmap.Height);
width = (int)boundsF.Width * sizeof(PixelData);
if (width % 4 != 0)
width = 4 * (width / 4 + 1);
bitmapData = bitmap.LockBits(bounds, ImageLockMode.ReadWrite, PixelFormat.Format24bppRgb);
pBase = (Byte*)bitmapData.Scan0.ToPointer();
public PixelData GetPixel(int x, int y)
PixelData returnValue = *PixelAt(x, y);
return returnValue;
public byte[] GetPixelValues(int x, int y)
byte[] c = new byte[3];
PixelData* returnValue = PixelAt(x, y);
c[0] = returnValue->red;
c[1] = returnValue->green;
c[2] = returnValue->blue;
return c;
public void SetPixel(int x, int y, PixelData color)
PixelData* pixel = PixelAt(x, y);
*pixel = color;
public void SetPixel(int x, int y, byte r, byte g, byte b)
pixel->blue = b;
pixel->green = g;
pixel->red = r;
public void UnlockBitmap()
bitmap.UnlockBits(bitmapData);
bitmapData = null;
pBase = null;
public PixelData* PixelAt(int x, int y)
return (PixelData*)(pBase + y * width + x * sizeof(PixelData));
public static implicit operator Image(UnsafeBitmap bmp)
return bmp.bitmap;
public static implicit operator Bitmap(UnsafeBitmap bmp)
guys, i need to draw a random polygon that is semi-transparent. Windows Mobile does not support Color.FromArgb(). is there any other simple workaround to this??
hm, I did this by writing my own rasterizer in unmanaged c++ :)
like Tom said do yo have to take care of everything by yourself. Seme transparent means that the stuff below the polygon can be seen through it, so you can not fill the polygon with a constant color. For this you need to implement your own fill algorithm which evaluates the color of every pixel of the polygon.
You could surely do this with the FastBitmap class and draw a polygon onto the FastImage - still you need to implement the filling and drawing by yourself (as long no one posts something to this) :) - but I think it can be done. If you go this way the polygon can only "see" things that are also drawn on the FastImage - it can not see through.
Hum... weird...
I'm using either version of the class, & in LockPixels (LockBitmap in Paulo Ricca's version), after the initialisation of the bitmapData, for a 49x49 picture the bitmapData.Stride is said to be 148, but it should be 49*49=147, so I have errors in my image. :-\
Any idea why?
Compact framework 2, VS08...
Answer to myself (& to whom it might help) : the stride (number of bytes occupied by a row) HAS TO be a multiple of 4 bytes (don't ask me why), see the MSDN :
"The stride is the width of a single row of pixels (a scan line), rounded up to a four-byte boundary."
That's why sometimes after the first row there is a data shift in some pictures manually created. :)
Hi guys, just a question why i get a ArgumentException when i excute this code :
Bitmap enfin = (Bitmap)image.Clone();
bitmapData = enfin.LockBits(rec, ImageLockMode.ReadOnly,PixelFormat.Format24bppRgb);
Thank you
the only thing that I can see to improve the calculation is to change
...
if ((width & 3) != 0)
width = (width ^ 3) + 4;
since binary operators should be faster than division
I believe Le Sage wonder why groups of 4 bytes and if I remember correctly it
first byte => shades of gray
second byte => shades of red
third byte => shades of green
forth byte => shades of blue
with 0 thru 255 possible shades for each
Great blog! I was able to understand and I got the code to work! Can't beat that.
Keep up the great work.
From an old C programmer.
What Paulo Ricco is very much true
if RoshanK had already declared the global variable then why cant be use this.
This is really faster and good piece of code.
int w = width;
rgbValues[(y * w + x) * 3] = cIn.B;
rgbValues[(y * w + x) * 3 + 1] = cIn.G;
rgbValues[(y * w + x) * 3 + 2] = cIn.R;
Thanks
Yogusmilu
@cybervedaa
I'd look at P/Invoking the AlphaBlend function. See here:
Unfortunatly P/Invoke is slow. You are going to take a big hit for using it.
hi all,
1st of thanks for the lesson and the comments. one questio, how can I merge text to the bitmap? (for intace the date)
Hii all
I thingk about. I have 2 function
public void LockBit()
// Lock the bitmap's bits.
Rectangle rect = new Rectangle(0, 0, input_image_width, input_image_height);
bmdata = input_image.LockBits(rect, ImageLockMode.ReadWrite, PixelFormat.Format24bppRgb); //bmdata = bmp.LockBits(rect, ImageLockMode.ReadWrite, PixelFormat.Format24bppRgb);
int RowsByte = bmdata.Stride;//so byte tren 1 hang
int TotalSize = RowsByte * bmdata.Height;
rgbValues = new byte[TotalSize];
// Get the address of the first line.
IntPtr ptr = bmdata.Scan0;
System.Runtime.InteropServices.Marshal.Copy(ptr, rgbValues, 0, TotalSize);
// MessageBox.Show(RowsByte.ToString());
//MessageBox.Show(i.ToString());
public void UnLockBit()
int TotalSize = bmdata.Stride * bmdata.Height;
//byte[] rgbValues = new byte[TotalSize];
System.Runtime.InteropServices.Marshal.Copy(rgbValues, 0, ptr, TotalSize);
input_image.UnlockBits(bmdata);
and after i using
LockBit();
for (int i = 0; i < rgbValues.Length-2; i++)
//rgbValues[i] = (byte)(255 - rgbValues[i]);
rgbValues[i] = (byte)(rgbValues[i]/3+rgbValues[i+1]/3+rgbValues[i+2]/3);
UnLockBit();
pictureBox1.Refresh();
this oke and verry fast | http://blogs.msdn.com/windowsmobile/archive/2008/04/15/faster-c.aspx | crawl-002 | refinedweb | 2,461 | 59.19 |
Eclipse Community Forums - RDF feed Eclipse Community Forums Content Assist <![CDATA[Hi, I am finding it hard to get the content assist working for the following dsl. To be specific, while defining the type for an attribute (Attribute rule), content assist shows the defined Entities and not the Datatypes (string, decimal,...). For some reason, the methods complete_XXDataType() in the Proposal provider are not getting invoked. If I change the Attribute rule to [0], the complete methods get invoked. Not sure what is going wrong. I checked the forum but I couldn't find anything related to my problem. I also had a look at couple of dsl's [1], but it didn't help. Any thoughts/pointers ? NSDeclaration: // namespace and import entity=Entity ; Entity: 'entity' name=ID '{' attributes+=Attribute* '}' ; Attribute: name=ID ':' type=Type ; Type: datatype=DataType | entityRef=EntityReference ; EntityReference: entity=[Entity] ; DataType: StringDataType | DecimalDataType | BooleanDataType | NumberDataType ; StringDataType: 'string' ; DecimalDataType: 'decimal' ; BooleanDataType: 'boolean' ; NumberDataType: 'number' ; ------------------------------------------ [0] Attribute: name=ID ':' (datatype=DataType | entityRef=EntityReference) ; ------------------------------------------- [1] Thanks in advance.]]> Neeraj Bhusare 2013-03-26T04:40:30-00:00 Re: Content Assist <![CDATA[For some reason, the methods complete_XXDataType() in the Proposal provider are not getting invoked. It may be that as your XXDataTypes are Parser rules they are consumed before the later stages of grammar rules, content assist and proposal are run. As literals I would expect they would get proposed automatically. ]]> Ian McDevitt 2013-03-26T20:34:28-00:00 Re: Content Assist <![CDATA[Actually I meant Data Type Rules (no assignments or actions) rather than Parser rules. 
Also you may think you are in the middle of entering a DataType but the parser/proposer may not know if it's that or an EntityReference so may only be able to call complete_Attribute with a reference of 'type'. I don't know why [0] makes a difference, I'm just suggesting some pointers above, but I have found that simpler grammars work better than many levels of abstraction. ]]> Ian McDevitt 2013-03-27T12:59:34-00:00 | http://www.eclipse.org/forums/feed.php?mode=m&th=463257&basic=1 | CC-MAIN-2015-18 | refinedweb | 336 | 54.83 |
CONNECT(2) BSD Programmer's Manual CONNECT(2)
connect - initiate a connection on a socket
#include <sys/types.h> #include <sys/socket.h> int connect(int s, const struct sockaddr *name, socklen_t namelen);
The parameter s is a socket. If it is of type SOCK_DGRAM, this call specifies the peer with which the socket is to be associated; this ad- dress. establishing a connection. [EINVAL] A TCP connection with a local broadcast, the all-ones or a multicast address as the peer was attempted. [ECONNREFUSED] The attempt to connect was forcefully rejected. [EHOSTUNREACH] The destination address specified an unreachable host. [EINTR] A connect was interrupted before it succeeded by the delivery of a signal. the socket for writing, and also use getsockopt(2) with SO_ERROR to check for error conditions. [EALREADY] The socket is non-blocking and a previous connection at- tempt has not yet been completed. [EPERM] If TCP MD5SIG is being used and we don't get a properly au- thenticated session up, for example if the peer is not con- figured for TCP MD5SIG or the keys don't match.} characters, or an entire path name exceeded {PATH_MAX} characters. [ENOENT] The named socket does not exist. [EACCES] Search permission is denied for a component of the path prefix. [EACCES] Write access to the named socket is denied. [ELOOP] Too many symbolic links were encountered in translating the pathname. 15,. | http://mirbsd.mirsolutions.de/htman/i386/man2/connect.htm | CC-MAIN-2014-10 | refinedweb | 232 | 58.38 |
Formats in Perl
Formats are the writing templates used in Perl to output the reports. Perl has a mechanism which helps in generating simple reports and charts. Instead of executing, Formats are declared, so they may occur at any point in the program. Formats have their own namespace apart from the other types in Perl i.e. function named “tron” is not same as format named “tron”. However, name of the filehandle in the program is the default name for the format associated with that filehandle.
Defining a Format
Syntax to define a Perl Format:
format FormatName = fieldline value_1, value_2, value_3 fieldline value_1, value_2, value_3 .
– Name of the format is denoted by the FormatName.
– Fieldline is a particular way used to format the data. Fieldline can also hold text or fieldholders.
– Value lines denotes/describes the values which will be entered into the fieldlines.
– Format is ended by a single period (.)
– Fieldholders have the space for the data which will be entered later.
Syntax for Fieldholders:
@<<<<<<< left-justified (with 7 field spaces by counting '@' and '<') @||||||| center-justified @###.#### numeric fieldholder @* multiline fieldholder
Using a Format
Write keyword is used to call on the format declaration.
Write FormatName;
Format name is the name of an open file handle and the write statement sends the output to the same file handle. In order to send the data to STDOUT, format name needs to be associated with the STDOUT file handle.
Note: Use the
select() function for making sure that STDOUT is the selected file handle.
select (STDOUT);
In order to associate format name with STDOUT by setting the new format name with STDOUT, use the variables like $~ or $Format_Name
$~ = "Format_Name";
Note: For writing report in any other file handle apart from the STDOUT, use the
select() function to select that file handle.
Example:
Input: Using STDOUT.
Output :
Input: Using other file handle(Printing the output into a text file.)
Output :
File where data is printed:
Report Header and Footer
Header is something that appears at the top of each page. Instead of defining a template, just define a header and assign it to $^ or $FORMAT_NAME_TOP.
Footer has a fixed size. It can be achieved by checking for variable $-. You can even print the footer by yourself if necessary using the syntax given below,
format FORMAT_NAME_BOTTOM End of Page $%
Example:
Input: Using STDOUT
Output:
Input: Getting output into a text file.
Output:
File where data is printed:
Pagination
Pagination comes into the picture when you have a long report which will not fit in a single page. Use of variables like $% or $FORMAT_PAGE_NUMBER along with the header in the format helps in defining the page number to more than one page. Default number of lines in a page is 60 but it can be set manually too by using the variables $= or $FORMAT_LINES_PER_PAGE.
Example:
Output:
Recommended Posts:
- JavaScript | Date Formats
- Perl Tutorial - Learn Perl With Examples
- Perl | Basic Syntax of a Perl Program
- Perl vs C/C++
- Perl | tr Operator
- Perl | int() function
- Perl | CGI Programming
- Perl | each() Function
- Perl Hash
- Use of print() and say() in Perl
- Perl | sin() Function
- Perl | cos() Function
- Perl | Classes in OOP
- Perl | abs() function
- Perl | given-when. | https://www.geeksforgeeks.org/formats-in-perl/ | CC-MAIN-2020-16 | refinedweb | 530 | 62.27 |
Build an iOS Chat App with Pusher
We thought we’d create a walkthrough of how to easily build an iOS chat app with Pusher. Together we will build a group chat application, leading you through how to use Pusher to send and show realtime messages in your UI.
The app we’re building
The app we’re building is a simple chat application that uses Pusher to send and receive messages.
It has two screens – the “Login” screen where we enter our Twitter username, and the “Chat” screen where we do the messaging.
Setting up our project with XCode
If you haven’t yet, create a new application in Xcode. By default the wizard offers to create a Single View Application for iOS, and that’s perfectly fine. Once you’ve done that, you’ll need to prepare the dependencies for the app. The two dependencies you need are PusherSwift, for interacting with Pusher, and AlamofireImage, for loading images over the network.
The easiest way to install dependencies is by using CocoaPods. If you don’t have CocoaPods installed, you can install it via RubyGems.
gem install cocoapods
Then configure CocoaPods in your application. First, initialise the project by running this command in the top-level directory of your Xcode project:
pod init
This will create a file called
Podfile. Open it, and make sure to add the following lines specifying your app’s dependencies:
# Uncomment the next line to define a global platform for your project
# platform :ios, '9.0'

target 'Pusher Chat Sample iOS' do
  # Comment the next line if you're not using Swift and don't want to use dynamic frameworks
  use_frameworks!

  # Pods for Pusher Chat Sample iOS
  pod 'PusherSwift'
  pod 'AlamofireImage' # needed for loading images over the network
end
And then run
pod install to download and install both dependencies.
pod install
CocoaPods will ask you to close Xcode if it’s currently running, and to open the newly generated .xcworkspace file. Do this now, and Xcode will open with your project configured.
Creating the Login View
For our login feature we’ll just create a simple page with a field to enter a twitter handle and a login button.
First rename our scene to “Login Scene”, and then drag the two elements onto it.
Also rename the
ViewController to
LoginViewController.
Control-drag each element into the LoginViewController class to create the IBOutlet (for the text field) and the IBAction for the button.
Name the IBOutlet twitterHandle and the IBAction loginButtonClicked.
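Once those connections are made, the controller skeleton should look roughly like this (a sketch only; the outlet wiring itself lives in the storyboard, and the action body is filled in next):

```swift
import UIKit

class LoginViewController: UIViewController {

    // Connected to the text field in the storyboard
    @IBOutlet var twitterHandle: UITextField!

    // Connected to the login button's "Touch Up Inside" event
    @IBAction func loginButtonClicked(_ sender: Any) {
        // Login logic goes here
    }
}
```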
In your LoginViewController.swift, add the following logic to the loginButtonClicked function:
@IBAction func loginButtonClicked(_ sender: Any) {
    if (twitterHandle.hasText) {
        let messagesViewController = self.storyboard?.instantiateViewController(withIdentifier: "chatViewController") as! ChatViewController
        messagesViewController.twitterHandle = twitterHandle.text!
        self.present(messagesViewController, animated: true)
    } else {
        print("No text in textfield")
    }
}
This will grab the current text from the twitterHandle field, pass it to the ChatViewController, and transition to the Chat screen.
Chat View
But the
ChatViewController doesn’t exist yet! Create a new ViewController in the Storyboard and the corresponding
ChatViewController.swift class.
Add to it a TableView, a Text Field, and a Button as in the example.
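Before wiring anything up, a minimal ChatViewController skeleton might look like this. The twitterHandle property is the one the login screen sets; the pusher property is an assumption based on the pusher! calls later on, stored so the connection survives for the lifetime of the screen:

```swift
import UIKit
import PusherSwift

class ChatViewController: UIViewController {

    // Set by LoginViewController before this screen is presented
    var twitterHandle = ""

    // Stored as a property so the connection isn't deallocated
    var pusher: Pusher?

    override func viewDidLoad() {
        super.viewDidLoad()
        // Pusher setup goes here
    }
}
```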
Listening to messages
We will listen to new messages in realtime by subscribing to the
chatroom channel and listening to events tagged
new_message.
Pusher channels can support unlimited number of message types, but in our case we are only interested the single one.
In
viewDidLoad create your Pusher instance – and copy your setup details from the Pusher Dashboard. It shoud look like this:
pusher = Pusher( key: "abcdefghijklmnopqrstuvwxyz" )
Then subscribe to the
chatroom channel, and bind to the
new_message events, printing their messages to the console. Lastly, connect to Pusher.
let channel = pusher!.subscribe("chatroom") let _ = channel.bind(eventName: "new_message", callback: { (data: Any?) -> Void in if let data = data as? [String: AnyObject] { let text = data["text"] as! String let author = data["name"] as! String print(author + ": " + text) } }) pusher!.connect()
Now that we’ve subscribed and listening to the events, we can send some events to test it out. The easiest way to do this is by using Pusher’s Debug Console – in your app’s Dashboard. Have the application running – Simulator is fine.
Click Show Event Creator button, and change the name of Channel to
chatroom, and change the Event to
new_message – what we’re listening to in the app.
Now change the Data field to something like:
{ "name": "John", "text": "Hello, World!" }
And click Send event. In the XCode’s console, you should see the message printed out:
John: Hello, World!
Presenting messages in a table
Now, let’s show the messages as they arrive in the UITableView.
We will create a Prototype cell in the UITableView in the Storyboard, and specify a class for it.
Create a
MessageCell.swift class and make it extend
UITableViewCell. This will represent a single chat message as a row in our table. Drag the outlets for
authorAvatar,
authorName, and
messageText into the class. This
import Foundation import UIKit class MessageCell: UITableViewCell { @IBOutlet var authorAvatar: UIImageView! @IBOutlet var authorName: UILabel! @IBOutlet var messageText: UILabel! }
Now create a
Message.swift which will hold a struct representing a single Message object. It just needs to hold two strings, for the author and message.
import Foundation struct Message { let author: String let message: String init(author: String, message: String) { self.author = author self.message = message } }
Back in the
ChatViewController.swift, make the class implement the protocols
UITableViewDataSource and
UITableViewDelegate alongside
UIViewController:
class ChatViewController: UIViewController, UITableViewDataSource, UITableViewDelegate {
To make it compile, you’ll need to implement the following methods – first one to let the tableView know how many items it holds:
func tableView(_ tableView: UITableView, numberOfRowsInSection section: Int) -> Int { return array.count }
And the second one that will create a
MessageCell object:
func tableView(_ tableView: UITableView, cellForRowAt indexPath: IndexPath) -> UITableViewCell { let cell = tableView.dequeueReusableCell(withIdentifier: "MessageCell", for: indexPath) as! MessageCell return cell }
Then we need to add some logic that will actually present the data in a cell. Add these lines to the second method:
let message = array.object(at: indexPath.row) as! Message cell.authorName.text = message.author cell.messageText.text = message.message let imageUrl = URL(string: "" + message.author + "/profile_image") cell.authorAvatar.af_setImage(withURL: imageUrl!)
First we set up the text in the author and message labels, and lastly we use the AlamofireImage library to load the image from Twitter avatar into the
authorImage field.
Sending messages from the app
Building the serverside component in NodeJS
So far, we’ve created a client that receives items. But what about sending them? We’ll do that next.
First, we’ll need a server-side component that receives messages and sends them back to Pusher.
We prepared a simple NodeJS application that will serve that purpose. You can find it here.
First clone the repository and CD into its directory. Then run
npm install to setup dependencies.
Then open
app.js and change the Pusher initialisation fields there to include your App ID, key and secret. You can copy these from your Pusher Dashboard – the Getting Started tab will have everything you need.
Once you’ve done that you can launch the app by running
node app.js.
If your iOS app is running on your simulator, and your Node app is running the server, you should be able send a test message via the
cURL command:
$ curl -X "POST" "" -H "Content-Type: application/json; charset=utf-8" -d $'{"name": "Pusher","text": "Hello, Node!"}'
If everything works as it should, you should see the new message appear in your app.
Building the app component
The last thing to do is to create the function that triggers and sends the message to us.
First make sure your Text Field and Button have their corresponding outlets in
ChatViewController.swift:
@IBOutlet var message: UITextField! @IBAction func send(_ sender: Any) { if(message.hasText){ postMessage(name: twitterHandle, message: message.text!) } }
Finally, we can implement the
postMessage function that calls our NodeJS endpoint to trigger a new message over Pusher:
func postMessage(name: String, message: String){ let params: Parameters = [ "name": name, "text": message ] Alamofire.request(ChatViewController.MESSAGES_ENDPOINT, method: .post, parameters: params).validate().responseJSON { response in switch response.result { case .success: print("Validation successful") case .failure(let error): print(error) } } }
Try it out!
If you are running the Node server locally XCode might not allow you to make the request. You can get around this by adding
App Transport Security Settings to your
Info.plist file and set
Allow Artibrary Loads to
YES.
Get Pushing
Hopefully you have found this a straightforward example of how to build an iOS chat app with Pusher. There are many ways you can extend this tutorial for an improved application:
- Use Pusher client events to send messages from one client to another. You could use our webhooks to notify your server when messages are sent, allowing you to persist them in your database.
- Use Pusher presence channels to create live user lists and show who’s online in realtime.
- Use our REST API to determine whether a user is connected to Pusher. If they are, go ahead and send them a message as normal. If not, send them a native push notification leading them back to your app.
Even with such a basic app, hopefully I have shown you how easy and few lines of code it is to drop in Pusher to any iPhone app. Feel more than free to let us know what you end up building with Pusher and iPhone! | https://blog.pusher.com/build-an-ios-chat-app-with-pusher/ | CC-MAIN-2018-05 | refinedweb | 1,577 | 57.57 |
Introduction to Python Random Module
Python random module is an inbuilt module of the python that is used to generate the random numbers in python. Modules are a collection of codes or classes or a set of rules that are predefined in python. We just need to use import keywords to use all these classes or codes. Once we import a random module, we can access all the functions or classes that are defined into it. Modules are also called libraries.
There are many inbuilt functions available in the random module. We can use those functions according to our requirements and the type of random number we want to generate.
Syntax:
random.function_name(attr*)
- function_name: function available in the module
- attr* : this parameter that function takes, it might be optional in a few of the random functions.
Examples of Random Module in Python
Lets us discuss Examples:
Example #1
import random
random.random
Output:
This is a very basic example of a random module function. We have used random functions and there is no parameter in the function. Random() generates random floating numbers between 0.0 and 1.0. This function does not take any parameters.
Example #2
import random #3
import random.
Example #4
import random
random.randrange(0,100,20)
Output:
In this example, we have specified start as 0, end as 100 and step as 20. So randrange function will pick random from the range of these start, end and step.
Example #5
import random
random.choice("John")
Output:
In the above program, we have used the choice method of random modules. This function takes a single parameter. This parameter might be String or List or dictionary also and it will return random value from that parameter. We have passed a string in the method and it will random character from the string and return it as an output.
Example #6
import random
random.choice([1,2,3,4,5])
Output:
In the above program, we have passed a list inside the choice method, and the output will be one value out of these list values.
Example #7
import random
list = [1,2,3,4,5] random.shuffle(list)
list
Output:
In the above program, we have used the shuffle method of the random module. This method does the shuffling of the sequence that we passed into it. It takes two parameters, out of which one parameter is optional. The disadvantage of this method is that we lost the original ordering of the sequence or the list.
Example #8
import random
list = [1,2,3,4,5] num=random sample(list,len(list))
num
Output:
import random
list = [1,2,3,4,5] num = random.sample(list,len(list))
list
Output:
We have used a random sample method in random in the above program. This method takes two parameters, the first parameter is a sequence and the second parameter as a length of sequence that we want to return the output. Second parameter is also mandatory, it can be equal to the length of the list or less than the length of the list but couldn’t be greater than the list. This method keeps the original sequence and creates a new sequence. We can store the random sequence into another variable while keeping the original sequence as it is.
Example #9
random.choices(population, weights=None, *,cum_weights=None, k=1)
import random
list = [fruit","vegetable","juice","drink] print random.choices(list, weights = None, k = 10))
Output:
This method takes 3 parameters,all parameters are mandatory. The first parameter will be our sequence, which might be a range or list of values. The second parameter is the weight of the values that need to be accumulated and the third parameter number of items to be returned.
print(random.(list, weights =[6,1,1,1], k = 10))
Output:
If weight is passed as none items are generated randomly. If we pass the weight then weight items should match the count of list items.
Conclusion
Python random module is a very useful module, it provides so many inbuilt functions that can be used to generate random lists and mostly used for generating security token randomly and range of list.
Recommended Articles
This is a guide to Random Module in python. Here we discuss the introduction along with different examples and its code implementation. you may also have a look at the following articles to learn more – | https://www.educba.com/random-module-in-python/ | CC-MAIN-2020-24 | refinedweb | 737 | 65.22 |
Run on host network
For some scenarios, especially those involving the BACNet protocol, workloads need to run on the host network namespace. This is not the default configuration for Kubernetes, so to run a module on the host network use following createOptions:
{ "HostConfig": { "NetworkMode": "host" } }
The
edgeAgent translates these createOptions to setup the module to run in the
host network namespace on Kubernetes. Unlike Docker-based deployments the
NetworkingConfig
section is not required. It will be ignored if specified.
All modules don't need to run in the host network to be able to communicate with each other. For example, a BACNet module running on the host network can connect to
edgeHubmodule running on the internal cluster network. | https://microsoft.github.io/iotedge-k8s-doc/bp/network/hostnetwork.html | CC-MAIN-2022-40 | refinedweb | 118 | 51.89 |
Contributing to the Linux Kernel—The Linux Configuration
The Linux kernel has always been one of the most prized gems in the Open Source community. Based around a philosophy of shared resources through modularity, it somehow is both well-written and written by committee. (Or, at least, by many individuals and teams which argue/agree over features.) One of the methods by which Linux keeps everything neat and modular is the kernel configuration system, often referred to as config, menuconfig and xconfig. These are the scripts that an installer of a source kernel must run in order to set up the kernel options, but you probably know that if you are reading this. On the outside, these look like three very separate programs with completely separate interfaces. In reality though, all three draw from the same fundamental rules that many programmers of the Linux kernel must know in order to spread their work or even to submit their patches to Linus. It is this fundamental system that gives Linux users the options they need to design a Linux system based on their needs.
Since the Linux kernel is an open-source project, it obviously accepts submissions from its users for new features. Often, however, programmers with the desire and the know-how to add features to the Linux kernel choose not to for a variety of reasons. In this article, I hope to clear up some of the mysteries surrounding the kernel configuration system that may be hindering users and keeping them from becoming developers. Every brain counts in open-source efforts, and every programmer who adds his or her changes into the kernel makes the kernel more robust for the rest of us.
To start off with, there has to be a reason you are mucking about in the kernel configuration scripts in the first place. Maybe you are just exploring the system and awaiting the day when you too will be submitting patches to the kernel. Or maybe (and more likely) you have added a particular feature to the kernel that you feel deserves some more widespread use, but you want to have it ready to integrate for Linus or Alan or another kernel-developing guru. For the purposes of this article, I will use a hypothetical patch to make the random device driver a compile-time option, although I should stress that in reality I had absolutely nothing to do with the creation of that driver. (This is the driver that controls the /dev/random and /dev/urandom devices.) Also, I will not be discussing in depth the creation of kernel modules in this text—I will assume you can extrapolate how to do it from this article, especially if you were smart enough to create a modularized driver in the first place.
The first step in modularizing your program should be obvious: you need some name that the C preprocessor can recognize to help it sort out the differences between what changes are yours and what are not. The kernel handles this distinction through the use of preprocessor instructions: the #ifdef ... #else .. #endif constructs throughout the kernel.
The first thing to make sure of when you do this is to be consistent. In a system as complicated as the Linux kernel, a little bit of consistency can save a lot of headaches later. You should look in Documentation/Configure.help for similar options and check to see if they have a common prefix (after CONFIG_, of course). For example, all block device options start with CONFIG_BLK_DEV_. This is relatively easy to change later, of course.
Once the name is selected, you should make #ifdef...#endif blocks around portions of the code that your patch changes (having a Linus tree around when you do this helps, as you can diff it and easily see what you changed). If you removed existing code, you'll need to integrate the #else blocks from the real tree, unless you were smart enough to keep them around while you were writing your patch. (I usually use the construct “#if 0” early in the programming stage.) At this point, compiling the kernel should work, and your option will be correctly disabled and we can continue to the next step. If it doesn't work or portions of your patch are still present, you obviously need to go back and double-check your diffs.
For the purposes of my example, I would choose the name CONFIG_RANDOM for the option. The random devices are character devices, and at the time of this writing, there was no CONFIG_CHAR_( or similar) prefix in common use.
The next step we need to take is to add the new configuration option to the configure system. Fortunately, this is fairly easy with only a couple of warnings. In the directory where you have the majority of your patch (in my example, drivers/char), there will be a file called 'Config.in' which contains the configuration options for the code in that directory. It is possible to put the config option in a different directory's config file, but it obscures the readability a little and may make it difficult to locate your code later. However, if it definitely belongs somewhere other than where you have it (or if the location of your code does not have one of those files), you should use your own best judgment and be prepared to move it later. Browsing through this file, you will see that it contains what appears to be a rough scripting language similar to Bash or another shell script. This scripting language is called, easily enough, “Config Language” and should not be terribly difficult to get your arms around. For the purposes of our non-modular example, we don't need to work with the full vocabulary of the language and can concentrate on only a few keywords (defined below). For a more complete guide to the language, a complete reference is provided with newer kernels (Documentation/kbuild/config-language.txt). There are plenty of examples provided in the actual configuration files, however, and the language is simple enough that a real understanding of the language and syntax is not required for normal maintenance. In general, lines in this file are formatted with one or more keywords (called verbs) followed by some arguments. Here is a partial list of verbs and their meanings:
comment: An unparsed comment except when preceded by the mainmenu_option command which would cause the comment to be used as a heading.
mainmenu_option: A verb that makes the next comment into a heading. I cannot explain enough how odd this seems to me.
bool: Boolean (yes/no) configuration option. This verb is always followed by a question and a configuration variable to put the result (“y” or “n”) in.
tristate: A value similar to a bool but with the additional “make as module” possibility denoted as “m”. This is applicable only to device drivers.
if... fi: A conditional block that is evaluated only if a certain configuration variable is set.
bool 'Random driver support' CONFIG_RANDOM(Please note here that a' is used as the quote character. This can be easy to miss.)
Config File for CONFIG_RANDOM
If your configuration option relies on another to be set, this model becomes more complicated. You will need to surround your options with an if ... fi block that tests the prerequisite option. There are a number of examples in the assorted configuration files to help you with this process; when in doubt, copy. One final word: you should be aware that this file is used not only to generate the selection lists in the configuration processes, but also to generate the include/linux/autoconf.h file. In order to preserve the readability of that file, you should be careful that options you add do not come after other subheadings or the configuration option will not appear in the right place when that file is generated. (Of course, it will still work no matter where it is in the file, but for readability, this is something to consider.) | http://www.linuxjournal.com/article/4082?quicktabs_1=2 | CC-MAIN-2017-30 | refinedweb | 1,347 | 59.03 |
Wiki As An Alife Experiment
In an ALife experiment you create a sort of playground: the artificial world. Next you create an array of data containers. Every data container is a so-called critter (an artificial being) that runs around in the artificial world. The critters interact with the playground (they eat) and with each other (they fight and procreate), all according to all kinds of rules. In a sense, the data containers are genes.

This is all very simple.
WikiWiki
page is also an ALife experiment. What is the playfield, what are the critters. I just didn't see the answer, until I realized a page is not a gene, nor a playground, but a
meme
(e.g.:
)
So a wikiwiki is an experiment/study in evolvement and change in
culture
.
In a study that I made a few years ago, I noticed that a lot of people stay for a rather short time: they discover wiki, then they are very active for, let's say, a hundred days, producing 20 or up to 1000 actions, and then they quit. All these actions by people are similar to the actions of people in the real world over generations.

Everybody who is very active thinks he or she knows what a wiki is all about and what it should look like, but actually we just know it's a whole lot of pages and a way to create and edit pages.
A good way to study the way the pages change is to fetch a list of all the wiki pages every week, and as soon as you have a lot of lists, you can spend some time writing code to analyse the changes between them.
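The weekly-snapshot analysis described above needs very little code. A hypothetical sketch, assuming each snapshot is a plain text file with one page title per line:

```python
def load_snapshot(path):
    """Read one weekly snapshot: a text file with one page title per line."""
    with open(path, encoding="utf-8") as f:
        return {line.strip() for line in f if line.strip()}

def compare(old, new):
    """Report which pages were created and deleted between two snapshots."""
    return {"created": sorted(new - old), "deleted": sorted(old - new)}

# Example with in-memory data instead of files:
week1 = {"WikiAsAnAlifeExperiment", "FrontPage", "ThreadMode"}
week2 = {"WikiAsAnAlifeExperiment", "FrontPage", "DocumentMode"}
diff = compare(week1, week2)
print(diff["created"])  # ['DocumentMode']
print(diff["deleted"])  # ['ThreadMode']
```

Run over many weeks of snapshots, the created/deleted counts give exactly the kind of population curve an ALife experimenter would plot for critters.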
Moved here from TimeAgainForWikiMutiny (posed there as an argument against mutiny):
An Experiment
I think that it is important for everyone to re-realize that WardsWiki was and is an experiment. It was never meant to be the end-all, but rather the beginning, the alpha to spur on innovation. It stands now as a still ongoing experiment, a model if you will, that gives rise to the justifications for newer, more secure implementations of wiki. To participate here is to knowingly and willingly be part of an ongoing experiment, and to become part of that data.
But there comes a point where people don't like being treated like guinea pigs anymore. Whether they have a "right" to complain or not, it's human nature to not like being treated like a play-thing even if the player is relatively benign.
Yes. But that is also part of the 'experiment' and is thus not a reason to change the setup of the experiment.
Further, by remaining you choose to be a part of the experiment. The experiment is here, independently of your participation. If you choose to stay, you are a part of it. You can always JustLeave.
Such all-or-nothing terms are a recipe for problems. I am not necessarily encouraging problems, but being the messenger about human reaction to such situations. People will start demanding a democracy of sorts rather than a lab technician in charge.
In relation to the just above, see: NobleExperiment, PositiveDialogueCommunity
I'm a bit more optimistic. One can LetHotPagesCool, then RefactorMercilessly without risk of stomping on egos. Perhaps allowing a ThreadMode, heated argument to take place, then having a neutral (but interested) party factor out the signal, is to DoTheSimplestThingThatCouldPossiblyWork in a social wiki. IMO, people who jump into a heated argument and then complain about the ThreadMess are just irritating hypocrites, participating in the very problem they are denigrating. DocumentMode can't be easily forged out of nothing at all; perhaps it needs heat, argument, hot air, fire and flame, then eventually ash, for the germ of an idea to grow into something more. In the decades-long view, this is how any wiki gets to be InsanelyGreat.
However, there is one thing (ignoring spam and abuse) that needs to change: OnceAndOnlyOnce needs to apply to flaming arguments and ideas too, lest people continue to enter the WikiWiki and present the same arguments again and again. To do this, people need to be able to easily find both running arguments and DocumentMode pages related to topics of interest. WikiWiki, or any successor to it, desperately requires considerably more advanced and accessible search features in order to break down the WalledGardens. WikiWiki had the right idea, at least, in resisting namespace hierarchies, which would do the opposite. Further, to prevent fruitless and repeat arguments, such as those founded in deep misunderstandings (e.g. those falling back so often to LaynesLaw, or those that arise from not grokking some theory or model), a wiki must enable a much greater and more thorough degree of self-education than is provided by the very limited and somewhat magical "read the page and thou shalt comprehend". A wiki as an educational medium does not need to be entirely passive; DocumentMode should be supplemented with support for interactive education: tutorials, tests and quizzes that are automatically graded, examples that can be run and modified, et cetera. A wiki would also do well to possess social and cultural engineering (e.g. through 'ideals' like WikiPedia's NPOV, perhaps by making quiz grades public per user name, and even by such things as RealNamesPlease) to pressure those humans possessing unmerited arrogance (and there will always be a few in any population, more so among the people willing to speak up) to humble themselves before impartial tests of knowledge, comprehension, and skill. (Too often I have wished to slap around those people who refuse to self-educate, and who therefore cannot see their own fallacy or misunderstanding even when it is pointed out to them.)
As a bonus, the construction of such tests and tutorials and examples, even if they are just tests of comprehension rather than of 'truth', would greatly increase the educational value of any wiki AND help solidify and (most importantly) formalize the ideas held within, considerably reducing the imprecision and ambiguity inherent to natural language.
A couple of points in response:
In my experience, no one goes back in to tidy ThreadMesses once they get beyond a certain stage. If the participants don't even bother trying to keep it tidy, it never happens.
Refactoring often happens by the creation of a new page and the deprecation and eventual deletion of an old one. There is no need to 'tidy' thread messes. And forcing participants to 'keep it tidy' works to a degree, but mostly means that ideas are never presented in the first place.
People can already find things if they bother to read to start with. It appears that people don't even bother. I am less optimistic that any tools will help prevent this problem. I believe newbies will continue to be welcomed, and then pee all over the place exactly because they haven't bothered to look at the wisdom that's already here.
I disagree with this. There are too many WalledGardens, even for someone who has been on the wiki for a very long time. Tools can help a great deal with this. Even replacing the wiki FindPage with support for the Google engine would be an improvement. Addition of tags or predicates would be another. Rapid access to Bayesian clustering would also help. Etc.
I can't even begin to visualise the manner of implementing or presenting "interactive education, tutorials, tests and quizzes". I suspect the result would not be a wiki.
That's a bold statement. WhatIsaWiki? Do you believe that the provision of a legible document is some sort of ultimate 'service' a wiki ought to provide? Or do you believe (as I do) that 'wiki' is more fundamentally about the open and collaborative creation of DeeplyIntertwingled services, for which forums for argument and DocumentMode pages are simply examples? The WikiDesignPrinciples certainly don't focus on document construction.
As far as mechanism goes, one only needs to look towards later generations of this software: e.g. the ability to attach graphical components, the ability to attach code that can generate questions and grade answers (JavaScript being a rather inefficient example), etc. I expect DomainSpecificLanguages for the creation of quizzes wouldn't be difficult at all to produce. I don't believe these will be applied to this particular WikiWiki... simply to the future of wikidom. The pages here and elsewhere, however, will likely migrate.
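The question-generating, auto-grading code imagined above need not be elaborate. A hypothetical sketch in Python (rather than the JavaScript mentioned in passing); the function names and the arithmetic-quiz format are invented for illustration:

```python
import random

def make_question():
    """Generate one arithmetic question together with its expected answer."""
    a, b = random.randint(2, 12), random.randint(2, 12)
    return f"What is {a} * {b}?", a * b

def grade(answers, expected):
    """Return the fraction of submitted answers that are correct."""
    correct = sum(1 for got, want in zip(answers, expected) if got == want)
    return correct / len(expected)

questions = [make_question() for _ in range(5)]
expected = [answer for _, answer in questions]
# A perfect submission scores 1.0:
print(f"perfect score: {grade(expected, expected):.0%}")  # prints "perfect score: 100%"
```

A wiki page carrying a snippet like this, run server-side or in the browser, is all the machinery the "interactive education" idea requires.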
I wish I had your optimism. I've been here for over 10 years and see nothing to make me believe the decline will even slow.
I've seen a lot of change, and I see a lot of potential, some of which I wish to help realize (e.g. with WikiIde). It has only been 15 years since 'Internet' access became a commodity, and it has grown quite a bit; I refuse to judge its future by the hiccups and viral infections it had when it was younger, or by a few mood swings in its teenage years. As far as Internet technology is concerned, I can't help but feel incredible optimism.
What should be of interest to all, after reading this page, is how this wiki, and its effect on its community, stands as a good model of why open communes don't work very well. This wiki is kind of like the old hippy communes of the past, and the only ones that seem to have really survived are the closed religious ones. However, a cyber commune like this one is ideal for a continuing experiment in social interaction. It is important to give the community a task; the task itself is not that important, but rather what happens when they try to accomplish it. I have mainly stayed around all of these many, many years to observe and interact in this most intriguing experiment. One can learn much here.
Ye'old TheCathedralAndTheBazaar argument, again? I don't think that way at all; this page is just a few people voicing a few complaints, and doesn't at all represent what has and has not "worked very well".
No signed-edit security: yet ANOTHER reason to mutiny :-)
Yes, I note the smiley, but nonetheless, people here used to take care over ethics, even though not enforced by the technology. Clearly now they don't bother.
Jaded, are you? Most people do care about ethics, but it is naive to assume that everyone cares. In any open system, the principles and ideals of the community should be aided, encouraged, and either enforced or made more enforceable by technology (depending on the degree to which technology can help). Otherwise just a few bad apples, the spammers, the abusers, the people who make viruses, etc., can (and invariably will) spoil the bunch. It would be nice if everyone was mature, educated, open-minded, humble enough to recognize her limits, assertive enough to state her views, arrogant enough to know things can be done better and JustDoIt, was motivated by the right reasons, etc. But that isn't the world we live in. We have children. We have WikiPuppys. We have people who thumb their noses at education. We have people whose primary agenda is to sell products. We have people who find change stressful and who would rather dig in their heels to resist it. We have people who talk but never act. We especially have the latter; the wiki, after all, currently doesn't provide significant service to support actions other than discussion and meta-discussion. Anyhow, faith in humanity should be accompanied by realistic expectations, among those the optimistic faith and not-unrealistic expectation that most (but not all) people will rise to meet any reasonable expectations.
Yes, I'm jaded. I remember when the vast majority did care, and the few who didn't were either assisted in becoming members of the community, sharing the culture of constructive creation of documents, or had the damage they inflicted simply edited away. I acknowledge that we can't do that any more, and I regret its passing. You call it naive, but I remember when it worked. I am saddened that most people now here never knew, and will never know, this wiki at its best. You assert that most people do care about ethics, but I see no evidence of that. All I see these days is a majority wanting their opinion to be heard, and insisting on placing it everywhere. Still, I'm just a GrumpyOldMan?, and my time has been and gone. I've made one last attempt to help preserve the culture, but it's clearly too late.
When you came here ten years ago, WikiWiki wasn't an open community. It was, de facto, a closed community... because it was an unknown community. For most people, the first they ever hear of 'wiki' is WikiPedia. And the only way to ever 'preserve' a culture is to put it in stasis. The best you can do is attempt to grow a culture to your preference... and, in an open community where it becomes impossible to keep up with the demand for attention, the only way to do this is by a combination of education (e.g. making expectations clear, teaching more people to promote your values), technological support (for enforcing and enabling ideologies, e.g. protecting signatures from tampering, keeping history to allow reversal of malicious changes, etc.), and cultural engineering (designing a situation, both social and technological, where the behavioral paths of least resistance and greatest reward are those that best match one's ideals). If there are any problems with this wiki, it is that it attempts to meet certain WikiDesignPrinciples (such as being open) without recognizing certain environmental realities (such as spam and zealotry). But I don't see these problems as fundamental to wiki. Still, I expect it is easy to be discouraged if you've spent time performing 'last ditch efforts' to stop what seems to be a tide, especially if you've been doing so by yourself.
Didn't this wiki used to be populated by OO GOF pattern zealots? They had a similar interest and a similar goal. The change may be partly due to a more varied audience with different pet technologies and interests.
"but it's clearly too late"
I doubt that this is true. The fact that the person who made the statement still attempts to exert some influence and can clearly express the concerns about what this wiki (in the current contributory cycle) has become, is an indication that it is not too late. One must remember that what has been contributed from the past has for the most part remained.
This is not just a place for current topic discussion and debate, but rather a wiki which has existed to demonstrate what wikis can do for collaboration. The fact that some participants may more want to be heard about their own interests, and have little interest in listening to what others may have to say should not override the fact that many others have found it possible to engage in meaningful collaboration and dialogue, and to have it preserved for others to read and perhaps further interact. --
DonaldNoyes.20080429.20110320
"... and the band played on"
while the Titanic sank, so it clearly wasn't too late ...
Every member of that band was making the best of a bad situation, doing what they could to give some comfort and strength. They knew full well that the end was near and as it became imminent switched to songs of eternal faith. One can realize what is really happening and still have hope. They were not being delusional and ignoring the reality, but were in fact embracing that reality.
Last Song
Whatever the truth of the Titanic's "last song" legend, Eaton and Haas neatly sum up its significance: One irrefutable fact, however, remains: the musicians stayed until all hope of rescue was gone. Who can say how many lives their efforts saved? The final moments of how many were cheered or ennobled by their music? 'Songe d'Automne' or 'Autumn.' 'Horbury' or 'Bethany'. What difference? The memory of the bandsmen and their courageous music will never die.
Credits:
Horbury
Bethany
-- DonaldNoyes.201002180545
CategoryWiki
Source: http://c2.com/cgi/wiki?WikiAsAnAlifeExperiment (last edited March 20, 2011)
I'm new to Python and I have no idea where I went wrong in getting this port scanner using Scapy to work properly. Any help is appreciated.
```
import logging
logging.getLogger("scapy.runtime").setLevel(logging.ERROR)  # Disable the annoying "No Route found" warning
from scapy.all import *
from scapy.layers.inet import TCP, ICMP, IP
import time

ip = "10.0.0.3"
closed_ports = 0
open_ports = []

def is_up(ip):
    """ Tests if host is up """
    icmp = IP(dst=ip)/ICMP()
    resp = sr1(icmp, timeout=10)
    if resp == None:
        return False
    else:
        return True

if __name__ == '__main__':
    conf.verb = 0  # Disable verbose in sr(), sr1() methods
    start_time = time.time()
    ports = range(1, 1024)
    if is_up(ip):
        print("Host %s is up, start scanning" % ip)
        for port in ports:
            src_port = RandShort()  # Getting a random port as source port
            p = IP(dst=ip)/TCP(sport=src_port, dport=port, flags="S")  # SYN probe
            resp = sr1(p, timeout=1)
            if resp is None:
                closed_ports += 1
            elif resp.haslayer(TCP):
                if resp.getlayer(TCP).flags == 0x12:  # SYN-ACK: port open
                    send_rst = sr(IP(dst=ip)/TCP(sport=src_port, dport=port, flags='AR'), timeout=1)
                    open_ports.append(port)
                elif resp.getlayer(TCP).flags == 0x14:  # RST-ACK: port closed
                    closed_ports += 1
        duration = time.time() - start_time
        print("%s Scan Completed in %fs" % (ip, duration))
        if len(open_ports) != 0:
            for opp in open_ports:
                print("%d open" % opp)
        print("%d closed ports in %d total port scanned" % (closed_ports, len(ports)))
    else:
        print("Host %s is Down" % ip)
```
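As an aside, when raw packets aren't strictly needed, a plain TCP connect() scan sidesteps Scapy (and the need for root privileges) entirely. A minimal sketch; `scan_port` is my own name, and the host and port values are placeholders:

```python
import socket

def scan_port(host, port, timeout=0.5):
    """Return True if a TCP connect() to (host, port) succeeds, i.e. the port is open."""
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
        s.settimeout(timeout)
        return s.connect_ex((host, port)) == 0

# Example: sweep the privileged port range on a host of your choosing.
# open_ports = [p for p in range(1, 1024) if scan_port("10.0.0.3", p)]
```

The trade-off is that connect() scans complete the handshake, so they are noisier than SYN scans, but they behave identically with or without root.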
[lr/9, lr/3, lr] will have a greater impact on the new LR than [lr/100, lr/10, lr]. It's actually an interesting question to study.
Wiki: Lesson 2
Here is an image of what happens when a model overfits.
Taken from this link
Thanks @emilmelnikov. So calling lr_find() on the current state of the model is fine? I ask because I don't understand what the learning rate finder is doing and whether it is changing the state of the model when we execute the lr_find() method on it.
It shouldn’t change anything, but we can always look at the code:
def lr_find(...):
    self.save('tmp')
    ...
    self.load('tmp')
So, yes, it saves the model, “trains” it in order to find a good LR and then loads the previous state. It’s very nice that the fast.ai’s code is super readable and transparent.
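The save, explore, restore pattern that lr_find uses can be sketched generically in plain Python (no fastai here; the dict stands in for model weights, and all names are mine):

```python
import copy

def probe_without_side_effects(state, train_step, n_steps=5):
    """Mimic lr_find: snapshot the weights, train only to gather information,
    then restore the snapshot so the model is left exactly as it was."""
    snapshot = copy.deepcopy(state)   # save('tmp')
    for _ in range(n_steps):
        train_step(state)             # throwaway training steps used only for probing
    state.clear()
    state.update(snapshot)            # load('tmp')

weights = {"w": 1.0}
probe_without_side_effects(weights, lambda s: s.update(w=s["w"] * 0.9))
# weights is unchanged afterwards: {"w": 1.0}
```

This is why running the finder repeatedly is harmless: whatever happens during the probe, the original parameters come back at the end.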
Ah perfect. Definitely will spend more time studying fastai’s source code.
One final set of questions @emilmelnikov (since you are on a roll): to train the full network in step 8, do we simply run learn.fit(new_lr, 20, cycle_len=1, cycle_mult=2) on the currently unfrozen model with precompute=False and the new learning rate until it overfits? Once we know the epoch at which it overfits, this then becomes the number of epochs we set for the final training on the full dataset (where we combine the data in the train and valid folders). This I guess becomes the final step. Have I understood this step correctly?
I really appreciate your help with this. I’m trying to understand the details of the entire workflow so I can use it for other projects.
I’m not sure that you can mix training and validation sets in such a way. All I know is that you can use k-fold cross-validation if you really want to utilize all available data. You should ask an experienced Kaggler how to do what you want.
Also, learn.fit(new_lr, 20, cycle_len=1, cycle_mult=2) will train your model for 2^20 - 1 ≈ 1 million epochs. I guess it's better to babysit the training process by training the model for, say, 100 epochs, then look at the results and repeat if needed. Alternatively, you can write a simple wrapper loop that trains the model for a bit, looks at the difference between the training and validation losses and decides whether it needs to repeat the training.
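The arithmetic behind that figure: with cycle_mult=2, cycle i runs cycle_len * 2**i epochs, so the total over n cycles is a geometric series. A quick check in plain Python (the function name is mine, not part of the fastai API):

```python
def total_epochs(n_cycles, cycle_len=1, cycle_mult=2):
    """Total epochs run by fit(..., n_cycles, cycle_len=..., cycle_mult=...)."""
    return sum(cycle_len * cycle_mult ** i for i in range(n_cycles))

print(total_epochs(20))  # 2**20 - 1 = 1048575 epochs
```

With cycle_mult=1 the total is just n_cycles * cycle_len, which is why the doubling schedule blows up so quickly.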
Looks like this is permissible:
Oops—I just reviewed cycle_mult and realized this. Thank you for all the suggestions. These are all very helpful.
I was able to set up the 2018 part 1 instance on AWS and can run Jupyter notebook from the command line... but when I open the URL in the browser it says the page cannot be found.
Any clue how to resolve this?
Thanks, I’ve read the link, and they provide quite reasonable explanation. However, I’d still save some data for the test set (with labels), so that you can report a final loss and/or some performance metric of the model (e.g. accuracy). So, is a sense you still should hold off some of your data. Otherwise, how can you measure quality of your model?
I guess an exception can be made for Kaggle competitions, where you don’t have a labeled test set, but can get the performance of the model in a competition leaderboard after submission. However, in this case the real test set (part of it) is held out by Kaggle.
It'd be great if someone more experienced than me explained how to do a proper train/validation/test split, when it's OK to use parts of validation/test sets in training, and other related topics. It seems like a simple topic, but I feel like it has many quirks that are not obvious. For example, there is a recent reddit discussion on a "global overfitting" to sample datasets like MNIST, CIFAR10 or ImageNet in research papers.
You may want to check the above Kaggle API to download datasets directly to your machine. You need to simply download the kaggle.json file from your account in kaggle
Question: In lesson 2 “Easy steps to train a world class image classifier” why do we perform step 3 if we were to set precompute=false for step4 anyway?
I was wondering if anyone has tried AWS g3.4xlarge. I wonder if it performs better than the p2.xlarge since it's newer and the VM is offered with larger RAM and more vCPUs.
I got it to work with the .pth file.
Can someone explain this a bit more to me? I understand it has to do with whatever NN model we are usuing (arch=resnext101_64). But can someone elaborate on what exactly this file does?
Did accuracy work for you? I seem to be getting an error where Torch yells at me as it expects the input arguments to be Torch tensors / datastructures. I was able to get it to work using accuracy_np() which I noticed was used in lesson1.ipynb
Edit - I was able to only use accuracy by wrapping the 1st argument in torch.FloatTensor and the second in torch.LongTensor. I'm not sure if the supported types in pytorch changed or if they stopped auto converting numpy arrays internally, but some of the old code that passes in np arrays into accuracy doesn't work any longer.
Is there a way to save intermediate weights for models during training at regular intervals, or for example, at the lowest learning rates when using CLR? That would be very helpful when setting up long training runs as I’d hate to lose all progress during long runs in case I had to stop training or something goes wrong midway.
Yes, I’d like to know this as well ! It gets tiring to have to train from scratch every time I open my Jupyter notebook
what if you break up ur training runs to less epochs, and insert the code to save weights in between ? It would be less elegant/ more code, but it should do the job
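That suggestion can be sketched generically: split one long run into short runs and write a checkpoint after each. A pure-Python stand-in (in fastai you would call learn.fit(...) and learn.save(...) in place of the stubs; all names here are mine):

```python
import os
import pickle

def train_with_checkpoints(state, train_one_run, n_runs, out_dir):
    """Run n_runs short training runs, saving the weights after each one."""
    os.makedirs(out_dir, exist_ok=True)
    paths = []
    for i in range(n_runs):
        state = train_one_run(state)                     # one short run, e.g. learn.fit(lr, 1)
        path = os.path.join(out_dir, "checkpoint_%02d.pkl" % i)
        with open(path, "wb") as f:                      # e.g. learn.save('checkpoint_%02d' % i)
            pickle.dump(state, f)
        paths.append(path)
    return state, paths
```

If anything goes wrong mid-way, you reload the most recent checkpoint instead of starting from scratch.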
Hello, fastai-ers!
I’ve got a question, hope someone could help me out. Thank you in advance!
I have prepared my script for dog breeds example.
Before i do any training, i ran learning rate finder, and it shows this:
Based on that, i choose LR = 1e-1, and run 3 epochs with precompute, and 3 epochs without precompute:
Quite fast my model starts to overfit.
If i re-run it all with LR = 1e-2, results are somewhat better:
Is there anything i’m missing about LR finder method ? | http://forums.fast.ai/t/wiki-lesson-2/9399?page=7 | CC-MAIN-2018-17 | refinedweb | 1,129 | 71.44 |
When I first started working on my own blog, one of the first things I worked on was getting code syntax highlighting for my entries. I even wrote a hacky article on the topic.
The crux of the challenge is extending Markdown to have a syntax that indicates a block should be highlighted. I'm still pretty comfortable with the syntax I chose:
def add(a, b):
    return a + b
It's nothing fancy, but it gets the job done. However, my first attempt at extending Python-Markdown to render that syntax correctly was kind of horrific. It worked, I mean it worked okay, but damn if it didn't munge the entire Python-Markdown library while it did it.
That is a forgivable sin in some situations, but the implementors of Python-Markdown went out of their way to make it extensible... so I felt a bit dirty about it. As I have been working a lot on my blogging software, I decided that now was the time to fix my previous silliness.
Lets get to work.
Step 1: Get a new copy of Markdown
My old copy of Markdown was crippled and in tears after my first modifications, so I had to get a fresh copy. You'll also want to grab a copy of pygments while you're at it.
easy_install pygments
Step 2: Write the Damn Thing
There is a full-featured example in the Markdown library (search for FOOTNOTE to jump to it), which is a boon. Whenever confusion finds you, go look at it for guidance.
Now we need to make a new module to put our code in. It doesn't (and shouldn't be) in the same file as markdown.py. I named mine code.py, since I have it in a folder named markup. If you are placing yours in a folder with a less suggestive name, you may want to try a better name.
The first thing you need to write is a preprocessor. Preprocessors need to define one function:
def run(self, lines):
    # do things
    return lines
The Markdown library splits all the lines on "\n" and then feeds you the result. If you want to operate on the text as a blob, then you have to rejoin it yourself:
blob = u"\n".join(lines)
So our class is going to be called CodeBlockPreprocessor (catchy, I know), and its going to have this run method:
def run(self, lines):
    new_lines = []
    seen_start = False
    lang = None
    block = []
    for line in lines:
        if line.startswith("@@") is True and seen_start is False:
            lang = line.strip("@@ ")
            seen_start = True
        elif line.startswith("@@") is True and seen_start is True:
            lexer = get_lexer_by_name(lang)
            content = "\n".join(block)
            highlighted = highlight(content, lexer, HtmlFormatter())
            new_lines.append("\n%s\n" % (highlighted))
            lang = None
            block = []
            seen_start = False
        elif seen_start is True:
            block.append(line)
        else:
            new_lines.append(line)
    return new_lines
We walk through all the lines looking for the start of a code block (represented by two consecutive at symbols (@) at the beginning of a line). If we find one, we ignore text until we find a closing block (if there is no closing block, then everything after the opening block will be discarded... a bit ungraceful, but won't allow any undesirables through either). Then we use Pygments to color the code in between the start and end, using the lexer indicated on the opening line of the block (for example @@ ruby uses ruby, and @@ html+django uses html+django).
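If it helps to see that scanning state machine in isolation, here is a stripped-down version with the highlighting left out (pure standard library; `extract_blocks` is my name, not part of the original code):

```python
def extract_blocks(lines):
    """Return (lang, code) pairs found between pairs of @@ markers."""
    blocks, block = [], []
    lang, in_block = None, False
    for line in lines:
        if line.startswith("@@") and not in_block:
            lang, in_block = line.strip("@@ "), True   # opening marker: remember the lexer name
        elif line.startswith("@@") and in_block:
            blocks.append((lang, "\n".join(block)))    # closing marker: emit the collected block
            block, lang, in_block = [], None, False
        elif in_block:
            block.append(line)                         # inside a block: collect the code
    return blocks
```

Lines after an unclosed opening marker are silently dropped, matching the behaviour described above.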
After we finish the run method, we just have to write some generic code, and soon we'll have a clean extension to Python-Markdown.
First we need to do some imports at the top of our file:
import re

from ddmarkup import markdown
from pygments import highlight
from pygments.formatters import HtmlFormatter
from pygments.lexers import get_lexer_by_name
then we need to write a simple class that we'll use to control our new preprocessor.
class CodeExtension:
    def extendMarkdown(self, md):
        index = md.preprocessors.index(markdown.HTML_BLOCK_PREPROCESSOR)
        preprocessor = CodeBlockPreprocessor()
        preprocessor.md = md
        md.preprocessors.insert(index, preprocessor)
This is about as simple as classes get. You take an instance of the Markdown class, and then you add an instance of CodeBlockPreprocessor to its list of preprocessors (before the HTML_BLOCK_PREPROCESSOR).
Lastly, we need to make a function to call markdown using our new preprocessor.
def render(text):
    md = markdown.Markdown()
    codeExtension = CodeExtension()
    codeExtension.extendMarkdown(md)
    md.source = text
    return unicode(md)
We create an instance of Markdown, add our extension, and then render away. If we want to we can make it accept arguments from the command line as well:
import sys

if __name__ == '__main__':
    print render(file(sys.argv[1]).read())
Although it seem like more effort than it was worth the first time I modified Python-Markdown, its really a well designed library, and a good example of designing libraries so that others can cleanly extend them. Give its code a read sometime. | http://lethain.com/cleanly-extending-python-markdown-syntax-highlight/ | CC-MAIN-2015-35 | refinedweb | 821 | 64.2 |
I would appreciate some education regarding this matter. I have a project with the code shown that will be used to run a handheld remote control for controlling a Sonoff Basic switch. With some help from others on this forum I got the software working but have now run into another issue that I am not sure what to blame it on, hardware or software. I have the same issue when using either of these two devices: an ESP8266-12E NodeMCU or a Wemos D1 mini pro. The sketch compiles and uploads with no issues. The serial monitor reports connected to WiFi and MQTT. And when pushing the button, everything works fine on and off for both the onboard LED and the remote Sonoff switch. BUT after ~30 seconds of NO button activity, the system goes dead. Pushing the button will change the LED state but the MQTT message is not being sent and of course the remote switch does not work. I have verified that the MQTT messages are not there using the MQTTLens app. Simply pushing the reset button on the associated ESP device brings operation back, but only if you start pushing the button right away.
Any and all assistance greatly appreciated. I am extremely new to this…
KentM
#include <ESP8266WiFi.h>
#include <PubSubClient.h>

const char* ssid = "xx";
const char* password = "xxx";
const char* mqttServer = "192.168.1.x";
const int mqttPort = 1883;

const int BUTTON = 0;  // pin numbers assumed -- the definitions were missing from the post
const int LED = 2;

WiFiClient espClient;
PubSubClient client(espClient);

void setup() {
  Serial.begin(115200);
  WiFi.begin(ssid, password);
  while (WiFi.status() != WL_CONNECTED) {
    delay(500);
    Serial.print("Connecting to WiFi..");
  }
  Serial.print("Connected to ");
  Serial.println(ssid);

  client.setServer(mqttServer, mqttPort);
  // client.setCallback(callback);
  while (!client.connected()) {
    Serial.println("Connecting to MQTT...");
    if (client.connect("ESP8266Client")) {
      Serial.print("connected");
    } else {
      Serial.print("failed with state ");
      Serial.print(client.state());
      delay(2000);
    }
  }
  client.publish("home/office/sonoff1", "Remote");
  client.subscribe("home/office/sonoff1");
  pinMode(BUTTON, INPUT);  // push button
}

void loop() {
  client.loop();  // keeps the MQTT connection alive; without this call the broker
                  // drops the link after the keepalive timeout, which matches the
                  // "dead after ~30 seconds" symptom described above

  if (digitalRead(BUTTON) == HIGH) {
    digitalWrite(LED, HIGH);  // set the LED on
    client.publish("home/office/sonoff1", "on");
  } else {
    digitalWrite(LED, LOW);   // set the LED off
    client.publish("home/office/sonoff1", "off");
  }
}
Today I wanted to download some pictures, but they were all behind the firewall. I thought about using Python to download them, which of course meant starting with a proxy.
After searching, I found that the urllib and urllib2 modules support proxies, but only HTTP proxies. The proxy I had built with Tunnelier was a SOCKS5 proxy, so I searched again for Python SOCKS5 proxy support and found a relevant third-party module called SocksiPy.
After downloading it I followed the instructions, but I was unsuccessful; embarrassingly, it may just be my own lack of understanding.
It seemed I could only find HTTP proxy support, so shouldn't an HTTP proxy be put in front, perhaps with nginx? I fiddled with that next, seemingly without success.
Then I remembered a blog post about turning a SOCKS5 proxy into an HTTP proxy, which mentioned a piece of software I found immediately: Privoxy.
I downloaded and installed the latest version, Privoxy 3.0.17. After installation, open Options > Edit Main Configuration.
This opens the configuration file in Notepad; add the following at the bottom of the file:
forward-socks5 / 127.0.0.1:7070 .
Note that the line ends with a trailing dot; it is best to copy it exactly.
My Tunnelier opens its SOCKS5 proxy on port 7070; change this depending on your individual setup.
Save and restart Privoxy.
There is now an HTTP proxy at 127.0.0.1:8118; 8118 is Privoxy's default port.
Using the proxy from urllib2:
import urllib2

proxy = '127.0.0.1:8118'
opener = urllib2.build_opener(urllib2.ProxyHandler({'http': proxy}))
urllib2.install_opener(opener)

sContent = urllib2.urlopen(url).read()
file1 = open(filepath, 'wb')
file1.write(sContent)
file1.close()
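For reference, the same setup under Python 3, where urllib2 was merged into urllib.request (host and port as configured above; no request is actually made here):

```python
import urllib.request

proxy = "127.0.0.1:8118"  # Privoxy's default listen address
opener = urllib.request.build_opener(
    urllib.request.ProxyHandler({"http": proxy, "https": proxy})
)
urllib.request.install_opener(opener)
# urllib.request.urlopen(url) will now go through the HTTP proxy
```

Passing the same address for both "http" and "https" keys routes both schemes through Privoxy.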
SETREUID(2)                      System Calls                      SETREUID(2)

NAME
     setreuid - set real and effective user ID

SYNOPSIS
     #include <unistd.h>

     int setreuid(uid_t ruid, uid_t euid);

DESCRIPTION
     The real and effective user IDs of the current process are set according
     to the arguments. If ruid or euid is -1, the current uid is filled in by
     the system. Unprivileged users may change the real user ID to the
     effective user ID and vice-versa; the standard setuid function is
     preferred.

RETURN VALUES
     Upon successful completion, a value of 0 is returned. Otherwise, a value
     of -1 is returned and errno is set to indicate the error.

ERRORS
     EPERM     The current process is not the super-user and a change other
               than changing the effective user-id to the real user-id was
               specified.

SEE ALSO
     getuid(2), seteuid(2), setuid(2)

HISTORY
     The setreuid system call appeared in 4.2BSD.

GNO                             16 January 1996                    SETREUID(2)
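The -1 convention in the description can be observed from Python, whose os module wraps the same system call. This runs unprivileged, since passing -1 for both IDs changes nothing:

```python
import os

before = (os.getuid(), os.geteuid())
os.setreuid(-1, -1)   # -1 means "keep the current ID" for both arguments
after = (os.getuid(), os.geteuid())
assert before == after   # the call succeeded without changing anything
```

An unprivileged process could likewise swap its real and effective IDs with os.setreuid(os.geteuid(), os.getuid()), as the man page allows.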
Keeping time on Arduino projects isn’t as easy as you might think: once the computer connection isn’t there, your unpowered Arduino simply stops running, including its internal ticker.
In order to keep your Arduino in sync with the world around it, you’re going to need what’s called a “Real Time Clock module”. Here’s how use one.
What’s the point of a Real Time Clock (RTC)?
Your computer most likely syncs its time with the internet, but it still has an internal clock that keeps going even without an Internet connection or when the power is turned off. When you use an Arduino plugged into a computer, it has access to accurate time provided by your system clock. That's pretty useful, but most Arduino projects are designed to be used away from a computer – at which point, any time the power is unplugged, or the Arduino restarted, it'll have absolutely no idea of what time it is. The internal clock will be reset and start counting from zero again next time it's powered up.
If your project has anything to do with needing the time – such as my nightlight and sunrise alarm clock (the Arduino Night Light and Sunrise Alarm Project) – this is clearly going to be an issue. In that project, we got around the issue by manually setting the time each night in a rather crude way – the user would press the reset button just before they went to bed, providing a manual time sync. Clearly that's not an ideal long-term solution.
An RTC module is an additional bit of circuitry, requiring a small coin cell battery, which continues to count the time even when your Arduino is turned off. After being set once – it will keep that time for the life of the battery, usually a good year or so.
TinyRTC
The most popular RTC for Arduino is called TinyRTC and can be bought for around $5-$10 on eBay. You’ll most likely need to supply your own battery (it’s illegal to ship these overseas to many places), and some headers (the pins that slot into the holes, which you’ll need to solder in yourself).
This is the module I have:
It even has a built-in temperature sensor, though the battery will last longer if you’re not using it.
The number of holes on that thing looks pretty scary, but you only need four of them; GND, VCC, SCL and SDA – you can use the relevant pins on either side of the RTC module. You talk to the clock using the I2C protocol, which means only two pins are used – one for the "clock" (a serial communications data clock, nothing to do with time) and one for the data. In fact, you can even chain up to 121 I2C devices on the same two pins – check out this Adafruit page for a selection of other I2C devices you could add, because there are a lot!
Getting Started
Hook up your TinyRTC module according to the diagram below – the pink DS line is not needed, as that’s for the temperature sensor.
Next, download the Time and DS1307RTC libraries and place the resulting folders in your /libraries folder.
Exit and relaunch the Arduino environment to load in the libraries and examples.
You’ll find two examples in the DS1307RTC menu: upload and run the SetTime example first – this will set the RTC to the correct time. The actual code is not worth going into detail with, just know that you need to run it once to perform the initial time synchronisation.
Next, look at the example usage with ReadTest.
#include <DS1307RTC.h>
#include <Time.h>
#include <Wire.h>

void setup() {
  Serial.begin(9600);
  while (!Serial) ;  // wait for serial
  delay(200);
  Serial.println("DS1307RTC Read Test");
  Serial.println("-------------------");
}

void loop() {
  tmElements_t tm;

  if (RTC.read(tm)) {
    // print the time fields here; RTC.read(tm) fills tm and returns true on success
  }
  delay(1000);
}

void print2digits(int number) {
  if (number >= 0 && number < 10) {
    Serial.write('0');
  }
  Serial.print(number);
}
Note that we've also included the core Wire.h library - this comes with Arduino and is used for communicating over I2C. Upload the code, open up the serial console at 9600 baud, and watch as your Arduino outputs the current time every second. Marvellous!
The most important code in the example is creating a tmElements_t tm - this is a structure that we'll populate with the current time; and the RTC.read(tm) function, which gets the current time from the RTC module, puts it into our tm structure, and returns true if everything went well. Add your debug or logic code inside that "if" statement, such as printing out the time or reacting to it.
Now that you know how to get the right time with Arduino, you could try rewriting the sunrise alarm project or create an LED word clock - the possibilities are endless! What will you make?
Image Credits: Snootlab Via Flickr
If you read the article, he stated "The actual code is not worth going into detail with, just know that you need to run it once to perform the initial time synchronisation".
yeah wheres the rest of the code??
is the code complete? sounds like the end of the code is removed | http://www.makeuseof.com/tag/how-and-why-to-add-a-real-time-clock-to-arduino/ | CC-MAIN-2017-09 | refinedweb | 892 | 68.2 |
#include <hallo.h> Richard Atterer wrote on Sat Sep 01, 2001 um 10:53:15PM: > If we distribute translations separately, we lose an important > property of source packages; that *all* the data related to a package > can be found in *one* place. If some data for a package can be found > here, other data there, the concept of a "package" soon gets very > blurred. I can't really list any practical disadvantages of this, but > it just feels wrong! Yes, you cannot. Other people tried and the number of disadvantages was not enough. Why don't you realise the reality? There are many lazy maintainers, ignoring the translations, OTOH it doesn't make much sense to rebuild and upload a package every time when a new translation came in, or existing one has been improved. You may claim all data which comes in touch with your package to belong to it, but this opinion is not really practical. Also. Gruss/Regards, Eduard. -- Alpträumer! AOL-Surfer! Apfelschorle-Besteller! Pannenhilfe-Anrufer! | https://lists.debian.org/debian-devel/2001/09/msg00080.html | CC-MAIN-2016-44 | refinedweb | 169 | 59.74 |
[ ]
Aeham Abushwashi updated CONNECTORS-1364: ----------------------------------------- Attachment: CONNECTORS-1364.git.v2.patch It’s a fair comment. In my use case, I have a client application that’s talking to manifold through the API so I have to implement this logic either way. I figured it’d be useful for others too but perhaps other advanced users would prefer to use their own bin naming convention. I could see a future use for share and root folder being passed in to the repo connector but I think it’d be better to introduce those as first class citizens, and not optional parameters, should the need for them ever arise. Here’s an updated patch.. > Better bin naming in the Shared Drive Connector > ----------------------------------------------- > > Key: CONNECTORS-1364 > URL: > Project: ManifoldCF > Issue Type: Improvement > Components: JCIFS connector > Affects Versions: ManifoldCF 1.9 > Reporter: Aeham Abushwashi > Assignee: Karl Wright > Fix For: ManifoldCF 2.7 > > Attachments: CONNECTORS-1364.git.patch, CONNECTORS-1364.git.v2.patch > > > Hello and happy new year! > Bin naming in the Shared Drive Connector makes assumptions that are not > always valid. > As I understand it, Manifold uses bins to prevent overloading data sources. > In the SDC, server name is designated as bin name. All jobs created against a > particular server will be treated as one unit when documents are prioritised, > which can severely disadvantage some jobs (e.g. late starters). > Moreover, this is incompatible with some common enterprise server topologies. > In Windows DFS, which is widely used in large enterprises, what the SDC > thinks of as a server name, isn’t actually a physical resource. It’s a > namespace that can span many servers and shares. In this case, it doesn’t > make sense to throttle simply on the root ‘server’ name. 
In other > environments, a powerful storage server can be more than capable of handling > high crawl load; overzealous throttling can end up limiting/hurting > Manifold’s performance there. > I’m struggling to find a single solution that fits all so I’m leaning towards > passing in to the repo connection config some sort of server topology flag or > throttling depth flag as a hint that ShareDriveConnector#getBinNames can use > to decide whether the bin name should be server, server+share or > server+share+root_folder. Share and root_folder would need to be explicitly > passed in the repo config too or extracted from the documentIdentifier arg in > getBinNames (assuming it's reliable). > Thoughts? -- This message was sent by Atlassian JIRA (v6.3.4#6332) | https://www.mail-archive.com/dev@manifoldcf.apache.org/msg11243.html | CC-MAIN-2017-43 | refinedweb | 411 | 54.22 |
User talk:Imrealized
From Uncyclopedia, the content-free encyclopedia
Talk Archives
Eh up
Nice to see you around the old homestead....hopefully we'll see some more of your magnificent contributions soon... -- Sir Mhaille
(talk to me)
- You complete me. Imrealized ...hmm? 05:47, April 16, 2011 (UTC)
You are very strange., 12 May 2011
- My friends and I go Glay every Tuesday at 8. It beats watching the show. - Imrealized ...hmm? 19:52, May 12, 2011 (UTC)
- By how much? I mean, how is the show? :15, 13 May 2011
- I've never actually seen the show. Cuddle parties. *awkward grin* - Imrealized ...hmm? 03:36, May 13, 2011 (UTC)
Thanks for Review
Thanks for the piss, man!
Thanks for reviewing Hypnotist, dude! I can't wait to get started on it. Thanks)
- Hmm... you lost a whole day you say? Well here, drink this. Marvin gave it to me, he said it'll help you remember things better. Maybe it will, —The preceding unsigned comment was added by Imrealized (talk • contribs)
- Oh wow, you look really sick now. Here, drink:28, May 20, 2011 (UTC)
- ht the urge to... GODDAMMIT! - Imrealized ...hmm? 05:54, May 22, 2011 (UTC)
User:Imrealized/UnNews:Large Hadron Collider creates new mimetic metal alloy:35, 26 May 2011
Tom Paine
Imrealized, you wrote a great article with "Tom Paine." I nominated it to be highlighted on the main page, and I hope it makes it! —The preceding unsigned comment was)
Shame on you!)
Thanks
- Another yay! --Imrealized ...hmm? 04:58, June 5, 2011 (UTC)
Chicago Eight
You have to realize that I must thank you for voting for my article.
- Imrealized everything, always. Err, sometimes. Okay, okay... occasionally. Once or twice, maybe? On second thought, let's not evaluate my realizations. Thank you for the thank you, though. --Imrealized ...hmm? 16:11, June 23, 2011 (UTC)
Much Thanks for making Padmé
Thank you...
..)
Whoa
Funnybony chop
That's fantastic, how did you do that??? Thank you! Is it possible to put back Oscar, that's kind of a traditional image for the award. The one you used, is it that great Jesus touch-up that that woman did thinking she was helping the church? That's one of the most famous paintings in the world now, and if I were that church I'd hire security to protect it from being stolen. Great job, and that kind of chop is still a mystery to me. Ah, just thought of something. Can you, if you have the time (hours? days?) do mine with just "Aleister" instead of "Aleister in Chains"? Damn, thanks again! Yay Funnybony! Al 22:23 8-2-'13
- Well, first I took a photograph of my computer screen to get the old picture. Then I uploaded that photo and made the award as big as I could. Then I put masking tape over where it said your name, MMX or whatever, and also taped over the year you won, Aleister in Chains. Then I wrote the new name and date in with a fine, white Sharpie. Then I did some other stuff that I can't really talk about. Finally, I took another photograph of my computer screen and uploaded that photo to this site. Bango! New award!
- It is that Jesus touch-up. I thought it was funny, though had no intentions of actually making you use that one. We need Oscar proper on the award. And yes, I can make you a new version after I eat a big plate of spaghetti. I like to 'chop while I digest. --Imrealized ...hmm? 22:39, February 8, 2013 (UTC)
- Thank you!!! Check out [[user:Funnybony|Funnybony]]'s user page, I put it at 1000px for him to find. It looks really good! You are a God incarnate. Aleister 23:44 8-2-'13
- No probs. Check your mail, duder. --Imrealized ...hmm? 01:16, February 9, 2013 (UTC)
- Whoo! You have indeed freed me from my chains, which have rusted and rattled. Thanks. I know that was a lot of work, very appreciated. Onward and outward! Aleister 14:57 9-2-'13
- Hmm, that picture isn't the one I did. After checking the piccy history, I see someone else dropped a new version about four hours after I uploaded mine. That's cool, though I preferred the numerical year. Why? Because the Roman numerals really seem to drown out the winning writer's name. Also, people have names that look like Roman numerals (around here), but not so much numerical years. No matter, I think the other version is still there somewhere. --Imrealized ...hmm? 10:58, February 13, 2013 (UTC)
- Now Funnybony is using the first one you did, with oscar's head tilted to the side, or the pic of that woman painting over Jesus's face, whatever it is. He likes it better! He thanks you on his talk page but not here, so I'm thanking you here again using his name. You did good! not Funnybony 11:58 13-2-13
Thanks
Thank you for voting for Fred Basset. --Equilateralperil 02:36, February 9, 2013 (UTC)
- Thank you for writing it. --Imrealized ...hmm? 10:41, February 13, 2013 (UTC)
Unquotable:History of Unquotable on VFD
Please do not "fight for Some user" but vote on the quality of the article in question. To be persuasive, your vote should address the claims made against the article. Also note (in the Comments) that a consensus was starting to emerge to keep this article but move it to Some user's userspace. I supported this as the article is more a personal diary than of general interest. It would be helpful if you would see fit to add your voice to that. Cheers. Spıke Ѧ 02:11 23-Feb-13
- It's a pleasure to meet you, too. Thanks for dropping by! -- Imrealized ...hmm? 04:18, February 23, 2013 (UTC)
- I stand with you in standing with Some user, and standing for history itself. I am blessed and cursed by being a historian at heart, and see in that page a very detailed and personal account of the beginning of a wiki and even of an internet meme. Keeping things like this hurts nobody and enhances the history of the wiki. Thank you for your valiant and principled stand even though some user is sitting down somewhere. a random historian at heart 8:36 23-2-'13
- If it took a bunch of idiots six years to ruin Uncyclopedia, then I bet these three smart guys will be able to finish it off in less than a year. Let's watch the magic unfold! -- Imrealized ...hmm? 09:16, February 24, 2013 (UTC)
- I'll take that bet, and raise you a trip to the fabulous Bahamas and a side of organically grown beef from the killing fields of Kansas City. Trying to get rid of an entire category of pages has a certain "wha?" about it, and the wholesale deletion of unused images Edit:I was wrong here, that user does look at the pictures before deleting them, something one of the fork founders didn't do when deleting thousands of pictures here. Carry on by a user who doesn't see images on his computer (the blind leading the mime) give one pause, but the people active here now are the best of the best (probably an accurate assessment) and this wiki would have surely sunk without them. The other wiki is fine, as it goes now, but can be sunk with the flick of a switch if one person wants to do so. Better to have two ships afloat than one with a self-destruct key on the captains table. Aye, me bucko. And this wiki has Funnybony, so it has at least one perfect user name. Aye. Aleister a user nickname 11:50 24-2-'13
- It's just always so sad to witness primates go crazy with power, even if it is the fake internet kind. Attempting to bury the history of Uncyclopedia is one thing, actively writing little passive-aggressive votes about Some user over and over is another. It's a little gay. Coming to my talk page and scolding me for funny voting is also a little gay, then reading your response and correcting you on your talk page is a pussy move. Also, telling Claudius Prime to namespace his article is retarded. So, apart from being run by partially gay, retarded pussies mad with internet power, you're right... this place is the tits. Think I'll stick around for a long time, contributing many worthwhile edits. Like this one. -- Imrealized ...hmm? 08:22, February 25, 2013 (UTC)
WTF happened to you?
I weep for this place. Salty tears trickle down my face. Oh wait, that's just jizz. Damn orgies. -- Imrealized ...hmm? 10:42, October 8, 2014 (UTC)
javascript
ItemList = [
{
"checked":false,
"title": "title0"
},
{
"checked":false,
"title": "title2"
},
{
"checked":true,
"title": "title2"
},
{
"checked":false,
"title": "title3"
}
];
$scope.app.params["ItemList"] = ItemList;
I made a 2D widget:
Left Panel
+ repeater
+ checkbox
I bind ItemList to the repeater. The checkbox label is {{item.title}}, and it displays the title correctly. I also bind ItemList to the checkbox value. How do I write an Add Filter? Can I change the checkbox value from the data list?
Filter Body
return ( HowToget( item.checked ) === true ? true : false )
I do not think that this is possible in this context (or at least it is not trivial).
In your case you will set a list (an array of JSON objects) directly on the repeater.
You can put {{item.<propertyName>}} in the text field of the label, where <propertyName> is the name of the field in the array that is to be displayed in the label. In this case you will see the value in the field, but unfortunately I did not find a way to access this value at runtime when the table is created, and it was also not possible to access it in a filter.
The filter is text which is evaluated later in a filter function. The problem is that only the value is passed to the filter, and it is not possible to access variables outside it (e.g. $scope is not valid there).
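To make that constraint concrete, here is a small sketch (the filterBody name and Row shape are illustrative, not part of the Studio API): a filter body behaves like a pure function of value, so everything it needs must be computable from the bound list itself.

```typescript
// Illustrative stand-in for a Studio filter body: only `value`
// (the bound list) is in scope — no $scope, no widget lookups.
type Row = { title: string; checked: boolean };

function filterBody(value: Row[]): boolean {
  // Everything must be derived from `value` itself.
  const match = value.find((row) => row.title === "title2");
  return match ? match.checked : false;
}
```

Anything like $scope.view.wdg[...] inside the body fails for exactly the reason described above.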
For example, I have a simple project with 2 repeaters.
The first repeater, with widget name “repeater-1”, uses data from a TWX service.
The second repeater uses data from an app parameter (your example) containing a JSON list (array).
So we can see the difference:
...
$scope.$on('$ionicView.afterEnter', function() {
  $scope.app.params["ItemList"] = ItemList;
  console.warn($scope.view.wdg['repeater-1'])
  console.warn($scope.view.wdg['repeater-2'])
  ...
});
So in the one data set we do not have info about the current data set that could be used directly to assign a value to a repeater element.
Only the syntax {{item.<propertyName>}} works on the fly to replace the text with the value of a property when the repeater widget is displayed.
In a filter we only have access to the "value" variable, which here means the whole list.
For example:
the following filter definition:
{
  let obj = value;
  let nameIn = 'title';
  let nameOut = 'checked';
  let val = 'title0'; // this will return false
  //let val = 'title2'; // if this line is used - returns true
  for (var i = 0; i < obj.length; i++) {
    // look for the entry with a matching `code` value
    if (obj[i][nameIn] == val) { return obj[i][nameOut]; }
  }
};
this filter will work fine and will return false
If we replace the line :
let val= 'title0'; by let val= 'title2';
the checkbox will be selected.
Unfortunately we can not use a syntax like:
val = $scope.view.wdg['repeater-1'].text;
or: let val = {{item.title}};
or: let val = item.title;
and so on.
It will always lead to an error. Also, $scope is not defined inside the filter.
Also, if we try to use a binding of another field which is set using the {{item.<propertyName>}} syntax, there is no success either:
What could be a possible solution/workaround here:
- use a TWX service instead of a list from JavaScript. You can simply define such a service, which sends a JSON file to the External Data panel:
Where the twx service is defined as:
var data = [
  { id_num: 0, display: "France", value: "Paris", checked: false },
  { display: "Italy", value: "Rome" },
  { display: "Spain", value: "Madrid" },
  { display: "UK", value: "London" },
  { display: "Germany", value: "Berlin" },
  { display: "Norway", value: "Oslo" },
  { display: "Switzerland", value: "Bern" },
  { display: "Greece", value: "Athens" },
  { display: "France", value: "Paris" }
];
//get the first row for the dataShape definition
var FirstRowJson = data[0];
//create an empty Info Table
var resInfoTable = { dataShape: { fieldDefinitions: {} }, rows: [] };
//defines the dataShape
for (var prop in FirstRowJson) {
  if (FirstRowJson.hasOwnProperty(prop)) {
    if (prop == "id_num")
      resInfoTable.dataShape.fieldDefinitions[prop] = { name: prop, baseType: 'INTEGER' };
    else if (prop == "checked")
      resInfoTable.dataShape.fieldDefinitions[prop] = { name: prop, baseType: 'BOOLEAN' };
    else
      resInfoTable.dataShape.fieldDefinitions[prop] = { name: prop, baseType: 'STRING' };
  }
}
//add the rows to the InfoTable
for (i in data) {
  resInfoTable.rows[i] = data[i]; //copy one to one
  resInfoTable.rows[i].id_num = i;
  resInfoTable.rows[i].checked = false;
}
result = resInfoTable;
I used such service here for the repeater-1 and it worked fine.
Of course, this will work only if you have a TWX instance which allows access to the TWX database.
Otherwise we can try to simulate the creation of such a TWX service only in the Studio angular environment without having TWX, or we can try to listen to a repeater row event - but unfortunately I did not find a way to do this yet.
Thank you for the fast answer.
I think I found an option to solve the original problem using a filter. It is not a very 'clean' way to do this, but it worked fine in this case.
So the solution is based on a global variable (window.my_variableX): I count every filter call, increment the variable accordingly, and compare it with the list size. The value of the checkbox here is the list parameter ItemList:
ItemList = [
  { "checked": false, "title": "title0" },
  { "checked": false, "title": "title1" },
  { "checked": true,  "title": "title2" },
  { "checked": false, "title": "title3" }
];
//====== set the json to the parameter ItemList after ViewStart ====
$scope.$on('$ionicView.afterEnter', function() {
  $scope.app.params["ItemList"] = ItemList;
})
Now I will set a binding between Studio parameter "ItemList" and the value of the checkbox with a filter:
And here the following definition of the filter:
if (!window.my_filter) window.my_filter = 1;
else {
  if (window.my_filter >= value.length) window.my_filter = 1;
  else { window.my_filter++; }
}
console.log("Filter i=" + (window.my_filter - 1) + " max length=" + value.length)
console.log("title:=" + value[window.my_filter - 1]['title'] + " checked:=" + value[window.my_filter - 1]['checked']);
return (value[window.my_filter - 1]['checked']);
The console.log calls were only for debugging and should be removed from the real filter. Also, each filter should use a different global variable (otherwise we can face the problem that several different filters increment the same global variable at the same time, and we will get very erroneous results).
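One way to avoid the shared-global pitfall (a sketch only; nextIndex and the counters map are hypothetical names, not Studio API) is to keep one cycling counter per filter, keyed by a unique name:

```typescript
// Sketch: one cycling counter per filter, keyed by a unique name,
// so two filters never increment the same piece of state.
const counters: { [key: string]: number } = {};

function nextIndex(key: string, length: number): number {
  // Returns 0, 1, ..., length-1 and then wraps, independently per key.
  const current = counters[key] === undefined ? 0 : counters[key];
  counters[key] = (current + 1) % length;
  return current;
}
```

Each filter body would then call nextIndex('my-unique-filter-name', value.length) instead of touching window.my_filter directly.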
So, when I test it in preview mode:
Let a[i] be the number of valid binary strings of length i+1 that end in 0, and b[i] the number that end in 1. A 0 can be appended to any valid string, while a 1 can only be appended to a string ending in 0, so a[i] = a[i-1] + b[i-1] and b[i] = a[i-1], with a[0] = b[0] = 1. The answer for length n is a[n-1] + b[n-1].
C++
// C++ program to count all distinct binary strings
// without two consecutive 1's
#include <iostream>
using namespace std;

int countStrings(int n)
{
    int a[n], b[n];
    a[0] = b[0] = 1;
    for (int i = 1; i < n; i++)
    {
        a[i] = a[i-1] + b[i-1];
        b[i] = a[i-1];
    }
    return a[n-1] + b[n-1];
}

// Driver program to test above functions
int main()
{
    cout << countStrings(3) << endl;
    return 0;
}
Java
class Subset_sum
{
    static int countStrings(int n)
    {
        int a[] = new int[n];
        int b[] = new int[n];
        a[0] = b[0] = 1;
        for (int i = 1; i < n; i++)
        {
            a[i] = a[i-1] + b[i-1];
            b[i] = a[i-1];
        }
        return a[n-1] + b[n-1];
    }

    /* Driver program to test above function */
    public static void main(String args[])
    {
        System.out.println(countStrings(3));
    }
}
/* This code is contributed by Rajat Mishra */
Output:
5
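Since a[i] and b[i] depend only on the previous step, the same recurrence can also be written with O(1) extra space; a sketch in TypeScript:

```typescript
// Count binary strings of length n with no two consecutive 1's.
// a = count of valid strings ending in 0, b = count ending in 1.
function countStrings(n: number): number {
  let a = 1; // length 1: "0"
  let b = 1; // length 1: "1"
  for (let i = 1; i < n; i++) {
    const nextA = a + b; // a 0 can follow any valid string
    const nextB = a;     // a 1 can only follow a trailing 0
    a = nextA;
    b = nextB;
  }
  return a + b;
}
```

The arrays in the versions above are replaced by two rolling variables; the result for n = 3 is still 5.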
Source:
courses.csail.mit.edu/6.006/oldquizzes/solutions/q2-f2009-sol.pdf, which also uses method 5.
This article is contributed by Rahul Jain. Please write comments if you find anything incorrect, or if you want to share more information about the topic discussed above.
One of Ionic's strengths is in the services that it offers on top of the framework. This includes services for authenticating users of your app, push notifications, and analytics. In this series, we'll be learning about those three services by creating an app which uses each of them.
The first service we're going to look at is the Auth service. This allows us to implement authentication in an Ionic app without writing a single line of back-end code. Or if you already have an existing authentication system, you can also use that. The service supports the following authentication methods:
- Email/Password: user is registered by supplying their email and password.
- Social Login: user is registered using their social media profile. This currently includes Facebook, Google, Twitter, Instagram, LinkedIn, and GitHub.
- Custom: user is registered by making use of an existing authentication system.
In this tutorial, we're only going to cover email/password and social login with Facebook.
What You'll Be Creating
Before we proceed, it's always good to have a general idea of what we're going to create, and what the flow of the app will be like. The app will have the following pages:
- home page
- signup page
- user page
The home page is the default page of the app where the user can log in with their email/password or their Facebook account.
When the user clicks on the Login with Facebook button, a screen similar to the following is displayed, and once the user agrees, the user is logged in to the app:
The signup page is where the user can register by entering their email and password. Facebook login doesn't require any signup because the user info is supplied by the Facebook API.
User Page
The final page is the user page, which can be seen only when the user has already logged in.
Bootstrap a New Ionic App
Now that you know what we're making, let's get started building our app!
First, we bootstrap a new Ionic app using the blank starter template:
ionic start authApp blank
Navigate inside the newly created authApp folder. This serves as the root directory of the project.
To quickly get set up with the UI of the app, I've created a GitHub repo where you can find the starter source files. Download the repo, navigate inside the starter folder, and copy the src folder over to the root of the Ionic project that you just created. This contains the template files for each of the pages of the app. I'll explain to you in more detail what each of those files does in a later section.
Serve the project so you can immediately see your changes while developing the app:
ionic serve
Create an Ionic Account
Since we'll be using Ionic's back-end to handle user authentication, we need a way to manage the users of the app. This is where the Ionic account comes in. It allows you to manage your Ionic apps and the services that they use. This includes managing the Auth service. You can create an Ionic account by visiting the Ionic.io signup page.
Connect the App to Ionic Services
Next, navigate to the root directory of the project and install the Ionic Cloud plugin:
npm install @ionic/cloud-angular --save
This plugin will allow the app to easily interact with Ionic services.
After that, you can initialize the app to use Ionic services:
ionic io init
This prompts you to log in with your Ionic account. Once you've entered the correct login details, the command-line tool will automatically create a new app record under your Ionic account. This record is connected to the app that you're developing.
You can verify that this step has worked by opening the .io-config.json file and the ionic.config.json file at the root of your project. The app_id should be the same as the app ID assigned to the newly created app in your Ionic dashboard.
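For reference, the generated .io-config.json looks roughly like this (placeholder values shown; the exact fields may vary by CLI version):

```json
{
  "app_id": "YOUR_APP_ID",
  "api_key": "YOUR_API_KEY"
}
```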
Navigate inside the src/pages/home directory to see the files for the home page. Open the home.html file and you'll see the following:
<ion-header>
  <ion-navbar>
    <ion-title>Ionic2 Auth</ion-title>
  </ion-navbar>
</ion-header>

<ion-content padding>
  <ion-list>
    <ion-item>
      <ion-label fixed>Email</ion-label>
      <ion-input></ion-input>
    </ion-item>
    <ion-item>
      <ion-label fixed>Password</ion-label>
      <ion-input></ion-input>
    </ion-item>
  </ion-list>

  <button ion-button full outline (click)='login("email");'>Login</button>
  <button ion-button full icon-left (click)='login("fb");'>
    <ion-icon></ion-icon>Login with Facebook
  </button>
  <button ion-button clear (click)='signup();'>Don't have an account?</button>
</ion-content>
This page will ask the user for their email and password or to log in with their Facebook account. If the user has no account yet, they can click on the signup button to access the signup page. We'll go back to the specifics of this page later on as we move on to the login part. I'm just showing it to you so you can see the code for navigating to the signup page.
Next, open the home.ts file. For now, it only contains some boilerplate code for navigating to the signup and user page. Later on, we're going to go back to this page to add the code for logging the user in.
User Sign Up
The layout of the signup page is found in src/pages/signup-page/signup-page.html. Take a look at this file and you'll find a simple form with an email field and a password field.
Next, let's take a look at the signup-page.ts file.
Let's break this down. First, it imports the controllers for creating alerts and loaders:
import { AlertController, LoadingController } from 'ionic-angular';
Then, it imports the classes needed from the Cloud Client:
import { Auth, UserDetails, IDetailedError } from '@ionic/cloud-angular';
- The Auth service, which deals with user registration, login, and sign-out.
- UserDetails, the type used for defining the user details when registering or logging in a user.
- IDetailedError, used for determining the exact reason for the error that occurred. This allows us to provide user-friendly error messages to the user whenever an error occurs.
Declare the variables to be used for storing the email and password input by the user. These should have the same names you've given to the value and ngModel attributes in the layout file.
export class SignupPage {
  email: string;
  password: string;

  constructor(public auth: Auth, public alertCtrl: AlertController, public loadingCtrl: LoadingController) { }

  register() {
    ...
  }
}
Next is the register method, which is called when the Register button is pressed. Let's code this method together.
First it fires up a loader, and then makes it automatically close after five seconds (so that in case something goes wrong, the user isn't left with a loading animation that is spinning forever).
register() {
  let loader = this.loadingCtrl.create({
    content: "Signing you up..."
  });
  loader.present();
  setTimeout(() => {
    loader.dismiss();
  }, 5000);
Next, let's create an object to store the user details:
let details: UserDetails = {
  'email': this.email,
  'password': this.password
};
Finally, we'll call the Auth service and supply the user details as the argument. This returns a promise, which we unwrap with then(). Once a success response is received from the back-end, the first function that you pass to then() will get executed; otherwise, the second function will get executed.
this.auth.signup(details).then((res) => {
  loader.dismiss(); //hide the loader
  let alert = this.alertCtrl.create({
    title: "You're registered!",
    subTitle: 'You can now login.',
    buttons: ['OK']
  });
  alert.present(); //show alert box
}, (err: IDetailedError<string[]>) => {
  ...
});
If an error response is received from Ionic Auth, we'll loop through the array of errors and construct an error message based on the type of error received. Here you can find the list of Auth signup errors that can occur.
loader.dismiss();
var error_message = '';
for (let e of err.details) {
  if (e === 'conflict_email') {
    error_message += "Email already exists. <br />";
  } else {
    error_message += "Invalid credentials. <br />";
  }
}
let alert = this.alertCtrl.create({
  title: error_message,
  subTitle: 'Please try again.',
  buttons: ['OK']
});
alert.present();
}
Once that's done, you can try the app in your browser. The email/password login doesn't have any plugin or hardware dependencies, so you should be able to test it out in the browser. You can then find the newly registered user in the Auth tab of your Ionic app dashboard.
Setting Up Facebook App
The next step is to set up the app so that it can handle native Facebook logins. First, you need to create a Facebook app. You can do that by logging in to your Facebook account and then going to the Facebook Developer Site. From there, create a new app:
Once the app is created, click on the Add Product link on the sidebar and select Facebook Login. This will open the Quickstart screen by default. We don't really need that, so go ahead and click on the Settings link right below the Facebook Login. That should show you the following screen:
Here you need to enable the Embedded Browser OAuth Login setting and add as the value for the Valid OAuth redirect URIs. Save the changes once that's done.
Next, you need to connect Ionic Auth to the Facebook app that you've just created. Go to your Ionic dashboard and select the app that was created earlier (see the "Connect the App to Ionic Services" section). Click on the Settings tab and then User Auth. Under the Social Providers, click on the Setup button next to Facebook:
Enter the App ID and App Secret of the Facebook app that you created earlier and hit Enable.
Install the Facebook Plugin
Next, install the Facebook plugin for Cordova. Unlike most plugins, this requires you to supply a bit of information: the Facebook App ID and App Name. You can just copy this information from the Facebook app dashboard.
cordova plugin add cordova-plugin-facebook4 --save --variable APP_ID="YOUR FACEBOOK APP ID" --variable APP_NAME="YOUR FACEBOOK APP NAME"
Configure Facebook Service
Once that's done, the last thing that you need to do is to go back to your project, open the src/app/app.module.ts file, and add the CloudSettings and CloudModule services from the cloud-angular package:
import { CloudSettings, CloudModule } from '@ionic/cloud-angular';
Declare the cloudSettings object. This contains the app_id of your Ionic app and any additional permissions (scope) that you want to ask from the users of your app. By default, this already asks for the public_profile.
const cloudSettings: CloudSettings = {
  'core': {
    'app_id': 'YOUR IONIC APP ID'
  },
  'auth': {
    'facebook': {
      'scope': []
    }
  }
};
If you want to ask for more data from your users, you can find a list of permissions on this page: Facebook Login Permissions.
Next, let Ionic know of the cloudSettings you've just added:
@NgModule({
  declarations: [
    MyApp,
    HomePage,
    SignupPage
  ],
  imports: [
    BrowserModule,
    IonicModule.forRoot(MyApp),
    CloudModule.forRoot(cloudSettings) // <--add this
  ],
  ...
Later on, when you add other social providers to your app, a similar process is followed.
Logging the User In
Now it's time to go back to the home page and make some changes. The HTML template already has everything we need, so we only need to update the script. Go ahead and open the src/pages/home/home.ts file. At the top of the file, import the following in addition to what you already have earlier:
import { NavController, AlertController, LoadingController } from 'ionic-angular';
import { Auth, FacebookAuth, User, IDetailedError } from '@ionic/cloud-angular';
import { UserPage } from '../user-page/user-page';
Inside the constructor, determine if a user is currently logged in or not. If a user is already logged in, we automatically navigate to the User Page.
export class HomePage {
  //declare variables for storing the email and password inputted by the user
  email: string;
  password: string;

  constructor(public navCtrl: NavController, public auth: Auth, public facebookAuth: FacebookAuth, public user: User, public alertCtrl: AlertController, public loadingCtrl: LoadingController) {
    if (this.auth.isAuthenticated()) {
      this.navCtrl.push(UserPage);
    }
  }
  ...
}
Next, when the Login button is pressed, we start by displaying a loading animation.
login(type) {
  let loader = this.loadingCtrl.create({
    content: "Logging in..."
  });
  loader.present();
  setTimeout(() => {
    loader.dismiss();
  }, 5000);
  ...
}
As you saw in the src/pages/home/home.html file earlier, a string representing which login button has been pressed (either the email/password login button or the Facebook login button) is passed to the login() function. This allows us to determine which login code to execute. If the type is 'fb', it means that the Facebook login button was pressed, so we call the login() method of the FacebookAuth service.
if (type == 'fb') {
  this.facebookAuth.login().then((res) => {
    //hide the loader and navigate to the user page
    loader.dismiss();
    this.navCtrl.push(UserPage);
  }, (err) => {
    loader.dismiss();
    let alert = this.alertCtrl.create({
      title: "Error while logging in to Facebook.",
      subTitle: 'Please try again.',
      buttons: ['OK']
    });
    alert.present();
  });
}
Otherwise, the email/password login button was pressed, and we should log the user in with the details entered in the login form.
else {
  let details: UserDetails = {
    'email': this.email,
    'password': this.password
  };
  this.auth.login('basic', details).then((res) => {
    loader.dismiss();
    this.email = '';
    this.password = '';
    this.navCtrl.push(UserPage);
  }, (err) => {
    loader.dismiss();
    this.email = '';
    this.password = '';
    let alert = this.alertCtrl.create({
      title: "Invalid Credentials.",
      subTitle: 'Please try again.',
      buttons: ['OK']
    });
    alert.present();
  });
}
Take a look at the final version of the home.ts file to see how it should all look.
User Page
The last page is the User page.
The layout, in src/pages/user-page/user-page.html, displays the profile photo of the user and their username. If the user signed up with their email/password, the username will be the email address of the user and the profile photo will be the default profile photo assigned by Ionic. On the other hand, if the user signed up with Facebook, their profile photo will be their Facebook profile photo and their username will be their full name.
Next, look at the user-page.ts file.
Under the ionic-angular package, we're importing the Platform service aside from NavController. This is used to get information about the current device. It also has methods for listening to hardware events, such as when the hardware back button in Android is pressed.
import { NavController, Platform } from 'ionic-angular';
And for the cloud-angular package, we need the Auth, FacebookAuth, and User services:
import { Auth, FacebookAuth, User } from '@ionic/cloud-angular';
Inside the class constructor, determine whether the user logged in with their email/password or their Facebook account, and fill in the username and photo based on that. Then, below that, assign a function to be executed when the hardware back button is pressed. The registerBackButtonAction() method accepts two arguments: the function to be executed and the priority. If there is more than one of these in the app, only the one with the highest priority will be executed. But since we only need this in this screen, we just put in 1.
export class UserPage {
  public username;
  public photo;

  constructor(public navCtrl: NavController, public auth: Auth, public facebookAuth: FacebookAuth, public user: User, public platform: Platform) {
    if (this.user.details.hasOwnProperty('email')) {
      this.username = this.user.details.email;
      this.photo = this.user.details.image;
    } else {
      this.username = this.user.social.facebook.data.full_name;
      this.photo = this.user.social.facebook.data.profile_picture;
    }

    this.platform.registerBackButtonAction(() => {
      this.logoutUser.call(this);
    }, 1);
  }
}
The logoutUser() method contains the logic for logging the user out. The first thing it does is determine whether a user is actually logged in. If so, we determine whether the user is a Facebook user or an email/password user. This can be done by checking the user.details object: if the email property exists, the user is an email/password user; otherwise, we assume it's a Facebook user. Calling the logout() method in Auth or FacebookAuth clears out the current user of the app.
logoutUser() {
  if (this.auth.isAuthenticated()) {
    if (this.user.details.hasOwnProperty('email')) {
      this.auth.logout();
    } else {
      this.facebookAuth.logout();
    }
  }
  this.navCtrl.pop(); //go back to home page
}
Running the App on a Device
Now we can try out our app! First, set up the platform and build the debug apk:
ionic platform add android
ionic build android
For the Facebook login to work, you need to supply the hash of the apk file to the Facebook app. You can determine the hash by executing the following command:
keytool -list -printcert -jarfile [path_to_your_apk] | grep -Po "(?<=SHA1:) .*" | xxd -r -p | openssl base64
Next, go to your Facebook app basic settings page and click on the Add Platform button in the bottom part of the screen. Select Android as the platform. You'll then see the following form:
Fill out the Google Play Package Name and Key Hashes. You can put anything you want as the value for the Google Play Package Name as long as it follows the same format as the apps in Google Play (e.g. com.ionicframework.authapp316678). For the Key Hashes, you need to put in the hash returned from earlier. Don't forget to hit Save Changes when you're done.
Once that's done, you can now copy the android-debug.apk from the platforms/android/build/outputs/apk folder to your device, install it, and then run.
Conclusion and Next Steps
That's it! In this tutorial, you've learned how to make use of the Ionic Auth service to easily implement authentication in your Ionic app. We've used email/password authentication and Facebook login in this tutorial, but there are other options, and it should be easy for you to add those to your app as well.
Here are some next steps you could try on your own that would take your app to the next level.
- Store additional user information—aside from the email and password, you can also store additional information for your users.
- Use other social login providers—as mentioned at the beginning of the article, you can also implement social login with the following services: Google, Twitter, Instagram, LinkedIn, and GitHub.
- Add a password reset functionality—password resets can be implemented using Ionic's password reset forms, or you can create your own.
- Custom authentication—if you already have an existing back-end which handles the user authentication for your service, you might need to implement custom authentication.
That's all for now. Stay tuned for more articles on using Ionic services! And in the meantime, check out some of our other great posts on cross-platform mobile app development.
Envato Tuts+ tutorials are translated into other languages by our community members—you can be involved too!
6 year old hotwires car - heads to highway
D3 writes "Who knew how easy it could be to hotwire a kiddie car? This 6 year old had no problem. " Heh-I needed to read something like this. Kids-they're gonna take everything over. Thanks to modnar for a more detailed story.
looking forward... (Score:1)
On the other hand, maybe the kid has a little more brainpower (at least, a means to an end) than most 6 year olds who are just worried about nappy-time...
Now, I'm not saying this is the smartest kid in the bunch, but he's not the dumbest. Heck, if it took him a mile to get caught ON THE HIGHWAY, no less, then I'd say it's the motorist's fault for not getting a clue at that point.
Shut up! (Score:1)
Last Day... Logan 5.
There... is... no... sanctuary.
Re:this is too funny (Score:1)
big deal! i've got sperm with their own websites!
j.
Re:I am ashamed of you... (Score.
Alright in that vein, we are to believe that six year olds are not capable of understanding the basics of electricity. Well, that might have flown for truth when I was younger, but we're dealing with a plugged in society these days. The fact that kids are able to mimic gestures such as plugging in a wire does not impress me.
Now as far as your flame bait goes, I'll bite. Wow, you are 3 years older than me. How do I figure? Because I hacked my radio (an old single speaker tape player and AM/FM thing) when the thing broke. I hacked my walkie talkies and found out how to make my voice control my radio controlled car. You want to flame, you better know your target.
Btw, my parents weren't rich. My dad had a computer not made by IBM or Apple. I used CP/M. You are right, computers were and are expensive. That's probably why I used that until 1989. It's only now, another 10 years later that my dad upgraded from his 386. I used a 486 until last year (a graduation present from my H.S. graduation in 1994). I figured I better get a better one before I entered the industry.
Thanks for the flame.
CNN's website has a picture of the kid (Score:1)
Note the nasty brutish expression, the strong jaw, the aggressive eyes... It must've taken the photographer ages to get a photo like that.
Re:Capsela? Absolutely. (Score:1)
Re:And your point is? (Score:1)
What is this supposed to mean? Since you could not have possibly really separated salt into its elements in any real sense of the word, the only interpretation I can come up with is that you dissolved it and poured salt water on your friend's sandwich. This is hardly an impressive feat.
Oh, and in English, we call it sodium.
Re:What about the day care center? (Score:1)
Exactly.. (Score:1)
6 year old kid "hotwired" a car (meaning plugged in the battery) and drove for ONE MILE.. can you imagine how long it would take for a Power Wheels to go one mile, especially without being stopped by a driver???? Unless he is in the boonies or people drive Power Wheels as a true means of transportation, there is definitely some exaggeration there.. well unless the kid IS a genius and rigged the car to go, say, 60 mph
Hmm (Score:1)
do good PR for Stalin... then you've got a deal.
Better Article In The Enquirer (Score:1)
Male Instinct (Score:1)
Re:Details? (Score:1)
The kid in the story probably knew the way to Toys 'r Us or something.
My son knows landmarks and streets. I can easily imagine him driving a car down a familiar street (no fear involved) to get to the zoo, or home, or the airport. His powers of memorization are astounding. He also knows the Mac three fingered salute to bail himself out of a toddler game lockup -- he learned that at age 2 1/2. And he's memorized some key scenes from A Bug's Life, including "...now *that's* funny." Wish I was burning neural pathways that quickly.
And if his daycare didn't know he was gone for an hour, I'd pull him and his sister out of there faster than oatmeal dries in a Disney(tm) bowl!
Young Anakin! (Score:1)
This kid has the raw materials to become a (jedi knight) hacker.
This is the classic story of a fallen hacker. He took advantage of the lax security at the day care. He took advantage of the lax security at reruns for wee ones. He went out exploring. He was seeing the world from a new perspective, and even though he wasn't the only one to blame he was the first to get picked up by the pigs.
As long as his parents don't come down too hard on him for this, and destroy his creative and exploratory nature he has the potential to become a hacker.
And I thought that I was smart for figuring out how to negate the "child proof" medicine bottles, and prevent my mother from locking out the windows on her '86 LeBaron.
LK
Grand theft auto . . . (Score:1)
(I'm sick of all this liberal coddling.)
Re:Good for him, but hardly shocking (Score:2)
+--
Given infinite time, 100 monkeys could type out the complete works of Shakespeare.
Yow! (Score:1)
-adr
this is too funny (Score:1)
Definately has the hacker nature. (Score:3)
--
Re:Slashdot poll idea! (Score:1)
Give me a break. "Hotwired"? (Score:1)
Why is everyone acting like the kid must have been a child prodigy to know how to reconnect the wires? I'd guess the "electronics" under the hood consist of nothing more than battery-to-gas_pedal-to-motor, and the store owner probably did nothing more than disconnect one of the wires from the battery.
But that doesn't make much of a story, so unfortunately, lots of people are going to picture this boy genius defeating some kind of security system.
Re:care and feeding of the young proto-hacker (Score:1)
He'll be up to grand theft auto by 9... (Score:2)
This kid is obviously intelligent and independent.
We must stop him before he becomes a threat to our stable and predictable society.
-CJ
Re:this is too funny (Score:1)
Amazing (Score:1)
------- CAIMLAS
Re:And your point is? (Score:1)
Stan "Myconid" Brinkerhoff
Re:this is too funny (Score:4)
No, not really. I have a 6 yr old and he has a Power Wheels vehicle probably similar to this. What it sounds like is that the 6v battery connector was simply disconnected to keep shoppers' kids from driving around (dumb, should have taken the battery out).
It is like plugging in a wall socket to reconnect the battery. My 6 yr old does this all the time since you have to unplug the battery to connect it to the charger, and then connect it back up when done charging. If this kid had ever used one of these, he would have no problem.
What is unnerving about all this is how he walked away from daycare and decided to go for a drive. I can't imagine my son even thinking to do something like this.
Re:And your point is? (Score:1)
Just goes to show ya... managment CAN be done by a 6-year-old.
:)
Re:And your point is? (Score:1)
Programming in BASIC, for one thing. But i admit that the Slashdot community is not exactly representative of the general population.
Still, how many people here remember a toy called Capsela? It was a building set consisting of capsules with gears and stuff in them... one of them had a motor, another held batteries; you hooked the battery to a switch and the switch to a motor. It was aimed at young children, according to the pictures on the box.
Any kid who's ever stuck a 9-volt battery on his or her tongue understands that batteries have two contacts, and that they start working when you connect something to both ends. And it's common sense that if there's a loose wire in a device, it's not going to work.
Plus, i'd imagine the battery in the toy car would be replaceable, so there's probably some kind of snap-connector, like the battery in an RC car or a cordless phone. I'd be very surprised if any six-year-old couldn't look at the loose connector at the end of the wire and the socket/connector on the battery and plug them together.
The square block goes in the square hole, the round block in the round hole, and the paper clip goes in the electrical socket. All kids know that long before they turn six.
Slashdot poll idea! (Score:2)
I think we need a new poll:
How old are you?
1) 0-6
2) 6-12
3) 12-18
4) 18-22
5) 22-30
6) 30-40
7) Old fart
I am ashamed of you... (Score:1)
Let's consider that at six, children are entering first grade. They are expected to be able to do things that are simple tasks. Plugging a wire into a receptacle is a simple task.
This kid was no prodigy. Most hackers will attest (at least those who had the benefit of a nearby computer) that at six, they were doing simple toy programs in BASIC considering that was the language available. We all know programming these programs requires simple logic. I was throwing IF-THEN statements into my programs at six. The kid simply did this in engineering terms.
Get over the fact that kids are smarter than they let on.
The only amazing aspect of this story lies in the incompetence of the daycare and the store next to it, as well as the fact that (I'm assuming) the kid wasn't hurt. We all played Pole Position as kids, so his driving isn't surprising. What is surprising is that no one else on the road hit him, considering that adults tend to be far worse drivers than kids. You have seen them try to play those video games, haven't you?
Re:this is too funny (Score:1)
Of course, which is more likely... that the 6 year old figured out how to hotwire the toy or the guy at the store didn't *really* unhook the battery and was hoping to avoid the inevitable lawsuit?
;)
ok I AM cynical.
Re:And your point is? (Score:1)
Shocking, maybe, but hardly uncommon (Score:1)
The stories I've heard have convinced me to stay away from day care for my kids. For those who care, one good way to find a decent day care center is to:
Re:lmao (Score:1)
Re:But is spanking == beating? (Score:1)
Have you ever seen the phone booths in London? Fully half the hooker's cards over there are either about spanking or getting spanked. That stuff bends you for life.
Re:On Dixie highway ? No Way !! (Score:1)
_
"Subtle mind control? Why do all these HTML buttons say 'Submit' ?"
Re:this is too funny (Score:1)
Mac OS (Score:1)
_
"Subtle mind control? Why do all these HTML buttons say 'Submit' ?"
If that wasn't the truth... I'd be happy (Score:1)
------- CAIMLAS
Re:On Dixie highway ? No Way !! (Score:1)
the trucks would slow down, dumbass... no one is going to drive over a kid
No one wants to drive over a kid, but a truck is not going to be able to stop, or slow down, in time, even if the driver is literally standing on the brake with both feet, and has engaged the air brakes. A truck needs more distance and time to stop than a car going at the same speed.
Over here in W.Va., there are many two-lane roads that are considered as major highways. Most, and I mean most drivers are careful, but there are a few dumbasses around that think that they don't have to be careful because the road they're on is a major highway, even though there are houses on either side of the road, it's only two lanes wide, and both lanes have traffic going in opposite directions of each other.
Re:Definately has the hacker nature. - NOT! (Score:2)
Re:Yow! (Score:1)
jason
We all need to escape every now and then... (Score:1)
--
Dave Brooks (db@amorphous.org)
Re:What about the day care center? (Score:2)
I know many people fault parents for not spending time with their children, but in the modern world where both parents need to work to make ends meet, day care is often the only solution they have for supervision during work hours. It is a shame that while society is forced to have to trust in other people for the well-being of their children that often those who are put in charge don't keep up with their duties as they should and then the industry itself gets a bad name. Inadequate training is often a contributing factor to the low quality of child care. I happen to know many of the workers in centers are often high school students who are working during the summer, or graduates who have no official training or certification.
I don't know about you, but if I'm going to entrust the life and health of a child to someone for the duration of 8+ hours then they'd better have a piece of paper to show me they know how.
-- Shadowcat
Re:Slashdot poll idea! (Score:1)
--
Dave Brooks (db@amorphous.org)
True (plus local story) (Score:1)
Come on, that's a better ratio than most daycares and no one noticed! (and I used to look after kids myself, I know what it's like to be around them...you never let them out of your sight)
Re:Drug him up!!! (Score:1)
This boy needs Gleemonex!
Anakin Skywalker (Score:1)
Moped, trailbike, maybe learn some small engine
repair. Earn cash fixing lawn mowers.
Then maybe the Jr. NASCAR competition. [sscracing.com]
Chuck
Re:I am ashamed of you... (Score:1)
Re:Amazing (not at all) (Score:1)
Buy your kids a lot of lego. Buy your kids constructions sets - you will find them grow and prosper.
Re:What about the day care center? (Score:1)
I really do not see the problem with this. When I was
Of course it was painful - but it was a learning session. I didn't take chances with that again.
Another time a branch of a tree I was climbing broke - and I fell 2meters down
(no wonder I've become what I've become?
My point is that you learn from that kind of thing. Worst case is that a kid breaks an arm - who cares? It's painful, but it'll be all right in a month or so. Okay, worst case IS that someone dies, but accidents will always happen. It's better that kids play and learn and get sharpened, instead of getting dull and non-intelligent because they can't explore and play around.
Re:What about the day care center? (Score:1)
"...we had day care workers leave kids out on the playground unsupervised..."
That right there is my point... the word "unsupervised". All children should be allowed to explore and play, yes, but when people are shelling out money for their children to be watched, the day care workers are being paid to keep an eye on the kids. Leaving a child outside unsupervised is just an accident waiting to happen. It's one thing for a parent to make that decision. It's another for someone unrelated to the child whose job it is to look out for their well-being while their parents are unable to.
-- Shadowcat
Re:And your point is? (Score:1)
When my parents moved out of their house, we moved everything out of my room --under everything there were either legos, tinkertoys, or capsella...
(i still have most of my legos in the attic, if i ever have a kid) (hell I still play with them sometimes, they're fun)
birthdates at time_t's. (Score:1)
Give or take 3600 seconds - I should probably check my birth certificate for a more exact time. Or is figuring your birthday as a time_t just too geeky?
Capsela? Absolutely. (Score:1)
Much more fun than construx or lincoln logs. YMMV.
Why wasn't he arrested (Score:1)
Re:looking forward... (Score:1)
Bottom line is, it's the day care's job to make sure that it's safe for kids to be kids.
To punish the kid for testing limits, exploring and discovering would be like punishing nerds for doing the same thing with computers. The drive to learn and discover isn't what's "bad". What's bad is that the kid wasn't in a safe environment to do so.
Tomorrow's Headline (Score:1)
In a follow-up to yesterday's story where little John T. Carpenter hotwired and piloted a mini Monster truck away from the Kiddie Kampus day-care center; today Johnny "Woz" Carpenter wired his parents old Tandy computer to his speak-n-spell. By combining these two devices JC's "computer" can decrypt the popular Barney cartoon to reveal a hidden meaning:
Die Microsoft Die!
good ole Buckeye Intelligence (Score:1)
The envy here is getting incredible (Score:1)
This is just a smart kid people, get over it. He showed a lot more originality than almost anyone here ever did at that age, and so we all become defensive. Calm down. Nobody's going to come and take your Big Brain trophy away.
Next time I hear somebody on this group praising noncomformity, I'm going to know what really to think about it. You people are no more tolerant of unusual thought than the police who arrested him.
Sheesh.
There was a pursuit (Score:5)
The pursuit stopped when one officer shouted, "Bang! I shot you! You're dead!" The child responded with, "Did not!" The officer then replied "Did too!" This went on for several minutes....
care and feeding of the young proto-hacker (Score:1)
At home I had this nifty 150-in-1 (or something like that) electronics kit from Radio Shack. It was a piece of cardboard with all these components mounted on it, connected to springs for terminals and numbered. You could hook up different projects from the book that came with it, just by making the numbered connections (connect terminal 1 to 55, 23 to 62, and so on), and it would explain a little about how it worked. A great toy for the young hacker, wonder if they still make 'em?
When I was a wee lad, sometimes my dad would take me into work with him, show me the big machines with the blinking lights in their specially air-conditioned room, and let me play with the card punch machine. I had an old TI programmable calculator (with red LED display) when I was nine or ten; didn't see BASIC until I was eleven - and that was on a PDP-11, hardly a PC.
Re:I am ashamed of you... (Score:1)
>Let's consider that at six, children are entering first grade. They are expected to be able to do things that are simple tasks. Plugging a wire into a receptacle is a simple task.
What you don't seem to understand is that it.
This is what impresses us. The fact that he used logic to get what he wanted. Granted it was something simple that only a child would want, but he figured out how to get it.
>Most hackers will attest (at least those who had the benefit of a nearby computer) that at six, they were doing simple toy programs in BASIC considering that was the language available..
LK
Re:what a driver! (Score:1)
Amazing? Not really. (Score:1)
Re:Good for him, but hardly shocking (Score:3)
My mom took me shopping for my fifth birthday so I could choose my present. Radio Shack was popular at the time and I found the box for an electronic project kit bigger and more colorful than the box of a flashlight. It went over, because my dad had an interest in electronics. They helped me build a single transistor radio. I remember picking up my first country radio station (that was back when country music was real!) For xmas of that year, I asked Santa for nuts, bolts, and wires. Twenty five years later, I work at a wire and cable specialty manufacturing plant as the sole senior technician on my shift. My dream came true in the grandest sense.
Re:Drug him up!!! (Score:1)
Using Microsoft software is like having unprotected sex.
Re:The envy here is getting incredible (Score:1)
kcin
Not surprised (Score:1)
When I was BARELY 2 years old(2 years, couple weeks), I taught myself how to feed the tapes on my parents' reel-to-reel tape player so I could listen to cheezy christmas songs.
A 6 year-old being able to reconnect the battery isn't funny. His joy-ride is funny. The lack of response from the daycare center is disturbing. His hacker tendencies are a godsend (WOOHOO!!!!). The amazement we all have that a 6 year-old hotwired a kiddie-kar? That's sad.
astonishing (Score:1)
I thought the picture looked sort of "Anakin-ish" (Score:1)
Re:I am ashamed of you... (Score:1)
Turns out he has one of those trucks at home...
Hmm, wonder if he picked up the fact about plugging wires in somewhere?
Now a final statement. How many wires are attached to a console game system on average? Kids from age 3 are working these systems today, and I'm going to bet that they might have to reconnect wires at times. Wow a whole slew of genii! Actually, have you ever tried to put the wrong block in the wrong hole? Most of them don't fit, and those that do get stuck. But show the kid once how the system is configured, and s/he will be able to recreate the scene.
I'm sorry if my lack of amazement for the kid bothers you, but hacking is about productiveness. The kid showed no productivity in his wiring of the truck. He holds no higher understanding about the truck as he had before the "hotwiring". Thus he hacked nothing.
wonder of all things (Score:1)
West Chester!! (Score:1)
Jacob
Re:And your point is? (Score:1)
A few months back I saw a show about scientific "illiteracy". They went to the graduation of some Ivy League college, grabbed a couple of new grads and gave 'em a little test: given a battery, a flashlight bulb, and a piece of wire a few inches long, make the bulb light up. Something like 80 or 90 percent couldn't figure it out.
Re:He'll be up to grand theft auto by 9... (Score:1)
Put him in American public schools for a few years, let him know that cleverness, intuition, curiosity and bravery are not in keeping with the community values propagated therein.
Anyone interested in separating School and State (and especially anyone who thinks the gub'mint should hold the market captive when it comes to education) should look into the Separation movement. Read a bit at sepschool.org, and think about the principles that make Free software so good and so powerful -- why not apply the same logic to the topic of education?
When you question reality, use sodium pentathol.
timothy
Re:What about the day care center? (Score:1)
>of a child to someone for the duration of 8+ hours then they'd better
>have a piece of paper to show me they know how
Whereas it's perfectly okay to entrust a child to someone 24/7 with no proof at all of their competence... provided they're fertile.
Re:Details? (Score:1)
1. Talk to kid about event.
2. Find new day care by tonight.
3. Call attorney.
But, anyway -- scary but funny story. Anyone who has a toddler is shaking off the heebies right now.
Re:Better Article In The Enquirer (Score:1)
- The boy "walked more than a mile in 90-degree heat." It's the middle of July, that's the way it's supposed to be. And you know what, in January it'll be cold. Yet, it's considered news to have a "reporter" stand outside and say, "Yep, it's 100 degrees here" on the local news. And the always sage advice, "Don't go outside if you don't have to." Damn, my whole evening was going to be filled with needlessly going outside and coming back inside over and over. The media pisses me off. I won't even go into the amazing point that the kid was able to walk a mile; the country's full of lazy bastards that think that's a feat in itself.
- "dodging traffic" Yeah, I picture the kid weaving lane to lane passing cars, flippin people off. I'm sure he dodged the traffic, not the other way around.
- "Monster truck" Yeah, sure. Even if you scale it down to a 6 yr old's height, it is hardly equal to what a monster truck is to adults.
- Trisha Taylor, the store owner says "he reconnected the wires without anyone seeing him," like he did it ninja-style, camouflaged with the environment, rappelling down a rope from the ceiling, takes out two guards, then breaks the impenetrable defense of unhooking the wires. All the meanwhile, constantly looking around the sides of the car watching for passers-by, hiding underneath the car when someone approaches and getting back out when it's clear. The store did all it could do, but the kid was just too smart, yeah that's it.
- She continues, "I was just floored. I couldn't believe it. This kid is only 6, and he had to have lifted up that hood and knew which wires to put together." Again, an impossible to penetrate defense. What more could the store have done?? Six year olds aren't supposed to know how to even read in public schools until about 8th grade, let alone lift a plastic hood or connect two wires. See what liberal education has produced? We expect kids to be morons just like most adults.
- "One frightened motorist..." It's sad how many unfrightened motorists went by. She continues, "he just about got hit...I about wrecked." No mention if she stopped, probably not. The world's too freakin busy to pull over, stop the kid, make sure he's safe and then call police. Besides, the parents would sue you if you did that; he has a right to drive around, you can't discriminate against him like that.
- The Kiddie Kampus day care, "did not know he was gone." So many people surrender their kids to people who aren't even aware of who is in their building. Yes, this is a better situation for kids today than staying home and being a PARENT yourself. The story doesn't mention the day care's defense system. My guess is it involves a door... Obviously, no system would have kept this young Einstein inside.
- Finally, at the end, the store co-owner says, "The next time I get one, I'll have to chain it up out there, I guess." Her doubt of what to do amazes me that anything is kept on the store's grounds without being stolen, not just by six year olds.
In the words of Eric Cartman, "Liberals piss me off!" (liberals, media, same thing essentially)
Re:care and feeding of the young proto-hacker (Score:1)
All through school I too remember having to teach my teachers how to use the equipment in class.
"No Mrs. Watson*, you're trying to plug the projector in upside-down, you do it like this."
Children do often have a greater understanding of the technlogy than the adults who depend on it.
I think radio crap is up to 400-in-1 kits now. I remember I was digging through my grandparent's garage and I found my uncle's only 75-in-1(or whatever it was) kit. I was enthralled. I probably gained about 10 pounds that summer because I never left the house, between my (once again mentioned) atari, and building solar powered lie detecting, light activated, alarm sounding am radios and the like I almost never saw the light of day.
I haven't looked in years, but I think that they still make them. That is unless they've been forced to stop by anti-terrorism activism, after all teaching kids how technology works will only give them the power to make better bombs.
LK
Re:I am ashamed of you Lord Kano (Score:1)
>>So why aren't you out distrusting all authority? Time's a-wastin' LK. Get on the ball!
You just don't get it huh? Political activism is similar to hacking in this respect.
This kid is a great example of what I mean. It's a part of your nature. It's something that you do all of the time whether you're aware of it or not.
LK
Re:And your point is? (Score:1)
Must've been Penn.
Re:And your point is? (Score:1)
aaargh! get your facts straight! (Score:1)
Butler County has about 350,000 people, and is one of the four Ohio counties that compose the Cincinnati metropolitan region (which also extends into southeastern Indiana and Northern Kentucky...but anyway.) The road itself is actually an important roadway in that area.
Re:This is sad (Score:1)
If they weren't so interested in having a roof over the kid's head, maybe they'd realize that mom would do fine entertaining him in their cardboard box.
Kiddie Kampus - issues degrees in Auto Engineering (Score:1)
"Forget Montessori, Mommy and Daddy are sending you to DeVry PreSchool"
Plus they taught him proper U.S. driving etiquette:
"I told him he was going to get hurt, he'd better get out of the road - and he told me to shut up."
Wonder if he flipped her the bird?
#include "disclaim.h"
"All the best people in life seem to like LINUX." - Steve Wozniak
Re:what a driver! (Score:1)
---
Strange (Score:1)
lmao (Score:1)
Re:Give me a break. "Hotwired"? (Score:1)
What about the day care center? (Score:1)
Was anyone else worried that it took the day care center over an hour to notice that he was gone?
---
Donald Roeber
Has anyone ever considered... (Score:1)
'Course, it's just a thought...
It's Obvious people! (Score:2)
Re:The envy here is getting incredible (Score:2)
He's really Mini Me (Score:4)
Re:Definately has the hacker nature. (Score:2)
tongue firmly in cheek...
warp eight bot
Re:What about the day care center? (Score:2)
If I were his parents, I'd be checking out a new daycare center. For that matter, if I were one of the other kids' parents.
Re:There was a pursuit (Score:2)
One officer was heard to comment, "I almost had him, until he told me that cops can't fly too. Man, I was bummed."
\//
what a driver! (Score:2)
btw, its a very busy road. five lanes of 35 mph traffic
actually, the funniest thing i heard about this story was when a motorist (female) got out of her car to say "honey, you shouldn't be in the road, its dangerous (or something to that effect)" -- the kid replied "Shut Up!" ahhhh road rage, at such an early age. | https://slashdot.org/story/99/07/13/1634255/6-year-old-hotwires-car-heads-to-highway | CC-MAIN-2017-04 | refinedweb | 5,415 | 82.24 |
This C program calculates the mean, variance, and standard deviation. The formulas used in this program are: mean = average of the numbers; variance = (sum of (Xi - mean) * (Xi - mean) for i = 1 to N) / N, where N is the total number of elements; standard deviation = square root of the variance.
Here is the source code of the C program to calculate the mean, variance, and standard deviation. The C program is successfully compiled and run on a Linux system. The program output is also shown below.
/*
* C program to input real numbers and find the mean, variance
* and standard deviation
*/
#include <stdio.h>
#include <math.h>
#define MAXSIZE 10
int main(void)
{
float x[MAXSIZE];
int i, n;
float average, variance, std_deviation, sum = 0, sum1 = 0;
printf("Enter the value of N \n");
scanf("%d", &n);
printf("Enter %d real numbers \n", n);
for (i = 0; i < n; i++)
{
scanf("%f", &x[i]);
}
/* Compute the sum of all elements */
for (i = 0; i < n; i++)
{
sum = sum + x[i];
}
average = sum / (float)n;
/* Compute variance and standard deviation */
for (i = 0; i < n; i++)
{
sum1 = sum1 + pow((x[i] - average), 2);
}
variance = sum1 / (float)n;
std_deviation = sqrt(variance);
printf("Average of all elements = %.2f\n", average);
printf("variance of all elements = %.2f\n", variance);
printf("Standard deviation = %.2f\n", std_deviation);
}
$ cc pgm23.c -lm
$ a.out
Enter the value of N
5
Enter 5 real numbers
34 88 32 12 10
Average of all elements = 35.20
variance of all elements = 794.56
Standard deviation = 28.19
.NET Core 2.0, Angular 4, and MySQL, Part 3: Logging with NLog
Logging messages are a great way to find out what went wrong and where errors are in your code. Learn how to implement this in .NET Core.
Why is logging so important during application development? Well, while your application is in the development stage, it is very easy to debug the code and find out what went wrong. But can you debug in the production environment?
Of course not.
That is why logging messages are a great way to find out what went wrong and where errors happened in your code in the production environment. .NET Core has its own implementation of logging messages, but in all my projects, I prefer to create my own custom logger service.
This is what I am going to show you in this post.
If you want to see all the basic instructions and complete navigation for this series, please check out the following link: Introduction page for this tutorial.
For the previous part check out: Part 2 - Creating .NET Core WebApi project - Basic code preparations
Source code to download is available at this link: .NET Core, Angular 4 and MySQL. Part 3 - Source Code
This post is divided into several sections:
- Creating Required Projects
- Creating Interface and Installing NLog
- Implementing Interface and nlog.config file
Creating Required Projects
Let's create two new projects. Name the first one Contracts. You are going to store interfaces inside this project. Name the second one LoggerService. You are going to use it for logger logic.
To create a new project, right click on the solution window, choose Add and then NewProject. Under .NET Core, choose Class Library (.NET Core) and name it Contracts.
Do the same thing for the second project, just name it LoggerService.
With these two projects in place, you need to reference them to your main project. In the main project inside solution explorer, right click on Dependencies and choose AddReference. Under Projects, click Solution and check both projects.
Also, add a reference from the Contracts project to the LoggerService project.
Creating Interface and Installing NLog
Our logger service will contain four methods of logging:
- info messages
- debug messages
- warning messages
- error messages
Consequently, you will create an interface, ILoggerManager, inside the Contracts project, containing those four method definitions.
Add the following code to the ILoggerManager interface:
namespace Contracts { public interface ILoggerManager { void LogInfo(string message); void LogWarn(string message); void LogDebug(string message); void LogError(string message); } }
Before you implement this interface inside the LoggerService project, you need to install the NLog library in our LoggerService project. NLog is a logging platform for .NET which will help us create and log our massages.
I'll show you two ways of how to add an NLog library into your project.
Example 1:
Example 2:
After a couple of seconds, NLog is up and running in your app.
Implementing Interface and nlog.config file
In the LoggerService project, create a new class,
LoggerManager.
Add the following implementation:
using Contracts; using NLog; using System; namespace LoggerService { public class LoggerManager : ILoggerManager { private static ILogger logger = LogManager.GetCurrentClassLogger(); public LoggerManager() { } public void LogDebug(string message) { logger.Debug(message); } public void LogError(string message) { logger.Error(message); } public void LogInfo(string message) { logger.Info(message); } public void LogWarn(string message) { logger.Warn(message); } } }
Now, we need to configure it and inject it into the
Startup class in the ConfigureServices method.
NLog needs to have information about the folder to create log files in it, what the name of these files will be, and what a minimum level of logging is. Therefore, you need to create one text file in the main project with the name
nlog.config and populate it as in the example below. You need to change the path of the internal log and filename parameters to your paths.
<?xml version="1.0" encoding="utf-8" ?> <nlog xmlns="" xmlns: <extensions> <add assembly="NLog.Extended" /> </extensions> <targets> <target name="logfile" xsi: </targets> <rules> <logger name="*" minlevel="Debug" writeTo="logfile" /> </rules> </nlog>
Configuring Logger Service
Setting up the configuration for a logger service is quite easy.
First, you must update the constructor in the
Startup class with following code:
public Startup(IConfiguration configuration, ILoggerFactory loggerFactory) { loggerFactory.ConfigureNLog(String.Concat(Directory.GetCurrentDirectory(), "/nlog.config")); Configuration = configuration; }
ILoggerFactory has its implementation inside .NET Core and it serves the purpose of configuring the logging system inside it. With the extension method
ConfigureNLog, we are forcing
ILoggerFactory to apply the configuration of our new logger from the XML file.
Secondly, you need to add the logger service inside .NET Core's IOC container. There are three ways to do that:
- By calling services.AddSingleton which will create the service the first time you request it and then every subsequent request is calling the same instance of the service. This means that all components are sharing the same service every time they need it. You are always using the same instance.
- By calling services.AddScoped which will create the service once per request. That means whenever we send the HTTP request towards the application, a new instance of the service is created.
- By calling services.AddTransient which will create the service each time the application requests it. This means that if during one request towards our application, multiple components need the service, this service will be created again for every single component that needs it.
So, in the
ServiceExtensions class, add the following method:
public static void ConfigureLoggerService(this IServiceCollection services) { services.AddSingleton<ILoggerManager, LoggerManager>(); }
Lastly, in ConfigureServices method invoke this extension method, right above services.AddMvc() with this code:
services.ConfigureLoggerService();
Every time you want to use a logger service, all you need to do is to inject it into the constructor of the class that is going to use that service. .NET Core will serve you that service from the IOC container and all of its features will be available to use. This type of injecting of objects is called Dependency Injection.
DI, IOC, and Logger Service Testing
What is exactly Dependency Injection(DI) and what is IOC (Inversion of Control)?
Dependency injection is a technique for achieving loose coupling between objects and their dependencies. It means that rather than instantiating objects every time it is needed, we can instantiate it once and then serve it in a class, most often, through a constructor method. This specific approach is also known as Constructor Injection.
In a system that is designed to use DI, you may find many classes requesting their dependencies via their constructor. In that case, it is helpful to have a class which will provide all instances to classes through the constructor. These classes are referred to as containers or, more specifically, Inversion of Control containers. An IOC container is essentially a factory that is responsible for providing instances of types that are requested from it.
For the testing purpose of the logger service, you could use
ValuesController. You will find it in the main project in Controllers folder.
In Solution Explorer, open the Controllers folder and open the
ValuesController class. Update it like this:
[Route("api/[controller]")] public class ValuesController : Controller { private ILoggerManager _logger; public ValuesController(ILoggerManager logger) { _logger = logger; } // GET api/values [HttpGet] public IEnumerable<string> Get() { _logger.LogInfo("Here is info message from our values controller."); _logger.LogDebug("Here is debug message from our values controller."); _logger.LogWarn("Here is warn message from our values controller."); _logger.LogError("Here is error message from our values controller."); return new string[] { "value1", "value2" }; } }
After that, start the application, and browse to. As a result, you will see an array of two strings. Now go to the folder that you have specified in
nlog.config file, and check out the result. You should see four lines logged inside that file.
Conclusion
Now you have the working logger service, you will use through the entire development process.
By reading this post you've learned:
- Why logging is important.
- How to create an external service.
- How to use an NLog library and how to configure it.
- What DI and IOC are.
- How to use DI and IOC for service injection.
Thank you for reading and see you soon in the next blog post where we will talk about repository pattern with entity framework core.
If you have any suggestions or questions please don't hesitate to leave the comment in the comments section below.
Deploy code to production now. Release to users when ready. Learn how to separate code deployment from user-facing feature releases with LaunchDarkly.
Published at DZone with permission of Marinko Spasojevic . See the original article here.
Opinions expressed by DZone contributors are their own.
{{ parent.title || parent.header.title}}
{{ parent.tldr }}
{{ parent.linkDescription }}{{ parent.urlSource.name }} | https://dzone.com/articles/net-core-20-angular-4-and-mysql-part-3 | CC-MAIN-2018-47 | refinedweb | 1,497 | 58.38 |
Note: For Visual Studio 2008 support and other improvements, please see this update by nelveticus.
Josh Beach has written a nice little tool to manage the "Recent projects" list in Visual Studio. It allows to edit the "recent projects" list for the start page of Visual Studio. I added the following features:
I wanted to submit this as an update to Josh's article, but unfortunately he doesn't seem to be active on this site anymore.
Don't look at it, the stuff I added is horrible.
This is a straightforward C# WinForms application, with all the logic in the Forms class. The list of projects is read from the registry, you can change the position in the list and add and remove items, and write back to the registry (all Josh's code). The dropdown selects the registry from which the information is read and can easily be configured.
Forms
The only class that might be of slight interest on an otherwise boring Sunday afternoon is TaggedString which provides the Text/Tag separation for ListBox and ComboBox items, too.
TaggedString
ListBox
ComboBox
The good citizen rule for feeding UI controls is to keep representation separate from the data. WinForms offers in most places a Tag member where the caller can attach a reference to the actual data.
Tag
Unlike most WinForms controls, the ListBox and ComboBox don't offer a separate Tag for its items. Rather, it accepts a collection of objects, and displays the result of ToString() as item text. (This looks like sloppiness to me, but who knows?)
Tag
ToString()
TaggedString associates a string (to be displayed) with a Tag (the object associated with the listbox / combobox item) like so:
TaggedString
string
listbox
combobox
public class TaggedString<T>
{
string m_title;
T m_tag;
public TaggedString(string title, T tag)
{
m_title = title;
m_tag = tag;
}
public override string ToString() { return m_title; }
public T Tag { get { return m_tag; } }
}
The Project List Editor stores the registry path in the tag, so it's using a TaggedString<string> like this:
TaggedString<string>
cbDevStudioVersion.Items.Add(
new PH.Util.TaggedString<string>(
"2005", // the string to display
@"Software\Microsoft\VisualStudio\8.0\ProjectMRUList")); // the associated data
This is by far the shortest time I spent on an article - I hope you like it anyway.
This article, along with any associated source code and files, is licensed under The Code Project Open License (CPOL)
cbDevStudioVersion.Items.Add(new PH.Util.TaggedString<string>("2005 Express", @"Software\Microsoft\VCSExpress\8.0\ProjectMRUL
peterchen wrote:Would a "Remove Zombies and Projects" button do the trick?
General News Suggestion Question Bug Answer Joke Rant Admin
Use Ctrl+Left/Right to switch messages, Ctrl+Up/Down to switch threads, Ctrl+Shift+Left/Right to switch pages. | http://www.codeproject.com/Articles/18479/Visual-Studio-Project-MRU-List-Editor-II | CC-MAIN-2015-35 | refinedweb | 456 | 60.75 |
Created on 2007-03-10 22:11 by sonderblade, last changed 2010-09-16 22:19 by BreamoreBoy.
This patch is a merger of #664020 (telnetlib option subnegotiation fix) and #1520081 (telnetlib.py change to ease option handling) which are both outdated.
The purpose of it is to replace the set_option_negotiation_callback with a handle_option method that subclasses can reimplement. This should make it much easier to implement custom option negotiation handling. The patch also extends the documentation somewhat to make it clearer how to to implement custom option subnegotiation.
It breaks compatibility with earlier Pythons because it removes set_option_negotiation_callback. But I think it should be good to apply it for Python 3.0. See the referenced patches for many more details.
Closed the other two patches as "superseded by this one".
(1) In the example code, why is parms backquoted? My first thought was that it was for repr, which won't work in Py3, particularly since it isn't quoted on the previous line.
+ parms = self.read_sb_data()
+ self.msg('IAC SB %d %s IAC SE', ord(opt), `parms`)
(2) There are advantages to the old callback way. It is easier to change things per-instance, at runtime, and even mid-connection. It might be worth showing an example that restores the current model.
option_callback=None # class default
def set_option_negotiation_callback(self, callback):
"""Provide a callback function called after each receipt of a telnet option."""
self.option_callback = callback
def handle_option(self, command, option):
if self.option_callback is not None:
return self.option_callback(self, command, option)
assigning all open telnetlib items to myself | http://bugs.python.org/issue1678077 | crawl-003 | refinedweb | 261 | 50.33 |
Validating XML with attributes in namespace "xml"
2013-01-04 12:23:35 GMT
I have a question regarding namespaces for attributes, and how to declare such attributes in an XML schema.
The reason that I want to use namespaces for an attribute is that I use XML DSig,
and the signature parts should be specified with an "ID" attribute.
The name of the attribute that specify the identification could have any name, and some possible choices are "ID", "Id" or "id".
However, there seems to be a recommendation to use "xml:id", rather than anything else such as "ID".
This is for example described in the following page:
This works fine for DSig signatures.
I am using "xmlsec1" for signing/verification in my C++ application, and I am able to sign and verify the signature.
In the XML files, the id is specified like this:
<elm:MyElement xml:
Note that "xml:id" is an attribute in a namespace, which is rarely used although legal XML.
However, the XML files should also be validated against a schema with Xerces, and it is there I have some problems.
The problem is probably that I do not specify the schema correctly, rather than a problem in Xerces-C.
I have tried to specify the attribute name as "xml:id" in the scheme like this:
==========================================
<xs:attribute
<xs:simpleType>
<xs:restriction
<xs:enumeration
</xs:restriction>
</xs:simpleType>
</xs:attribute>
==========================================
However, when I try to validate the document, I get the following error message:
==================================
Error at file XMLParserInput, line 1, char 441
Message: attribute '{}id' is not declared for element 'Routing'
==================================
My XML document has an "xml:id" attribute, but the Xerces validator does not seem to think
that this attribute is declared according to the schema.
My question now is how I should write the XML schema to make it accept "xml:id"?
I have searched on the net, and I have got some hints, for example this:
But my XML schema is more complicated, and consists of several xsd files and multiple namespaces,
so I have not been able to make it validate with Xerces.
My XML file declares something similar to the following:
===============================
<soap:Envelope xmlns:soap="
elope" xmlns:xsi="" xmlns:qwerty="
.se/qwert" xmlns:
===============================
The schema file(s) declares information similar to this:
===============================
<xs:schema xmlns:abcd="" xmlns:xs="
a" targetNamespace="" elementFormDefault="qualified" attributeFor
===============================
Can you give some hints how I should declare the XML schema to validate the XML file with xml:id?
Regards
/Mikael | http://blog.gmane.org/gmane.text.xml.general | CC-MAIN-2013-20 | refinedweb | 416 | 51.92 |
Neutron Distributed Virtual Router implements the L3 Routers across the Compute Nodes, so that tenants intra VM communication will occur without hiting the Network Node. (East-West Routing)
Also Neutron Distributed Virtual Router implements the Floating IP namespace on every Compute Node where the VMs are located. In this case the VMs with FloatingIPs can forward the traffic to the External Network without reaching the Network Node. (North-South Routing)
Neutron Distributed Virtual Router provides the legacy SNAT behavior for the default SNAT for all private VMs. SNAT service is not distributed, it is centralized and the service node will host the service.
Today Neutron L3 Routers are deployed in specific Nodes (Network Nodes) where all the Compute traffic will flow through.
Problem 1: Intra VM traffic flows through the Network Node
In this case even VMs traffic that belong to the same tenant on a different subnet has to hit the Network Node to get routed between the subnets. This would affect Performance.
Problem 2: VMs with FloatingIP also receive and send packets through the Network Node Routers
Today FloatingIP (DNAT) translation done at the Network Node and also the external network gateway port is available only at the Network Node. So any traffic that is intended for the External Network from the VM will have to go through the Network Node.
In this case the Network Node becomes a single point of failure and also the traffic load will be heavy in the Network Node. This would affect the performance and scalability.
This would also help the Neutron networks to be on par with the Nova parity.
The proposal is to distribute the L3 Routers across the Compute Nodes when required by the VMs.
In this case there will be Enhanced L3 Agents running on each and every compute node ( This is not a new agent, this is an updated version of the existing L3 Agent). Based on the configuration in the L3 Agent.ini file, the enhanced L3 Agent will behave in legacy(centralized router) mode or as a distributed router mode.
Also the FloatingIP will have a new namespace created on the specific compute Node where the VM is located Each Compute Node will have one new namespace for FIP, per external network that will be shared among the tenants. An External Gateway port will be created on the Compute Node for the External Traffic to Flow through.
Default SNAT functionality will still be centralized and will be running on a Service Node.
The Metadata agent will be distributed as well and will be hosted on all compute nodes and the Metadata Proxy will be hosted on all the Distributed Routers.
The existing DHCP server will still run on the Service Node. There are future plans to distributed the DHCP. ( This will be addressed in a different blueprint)
This implementation is specific to ML2 with OVS driver.
An alternative is to use a Kernel Module. But we did not pursue this since there was a dependency for the Kernel Module to be part of the upstream linux distribution before we push this patch.
There are couple of minor data model changes that will be addressed by this blueprint.
+----------------+--------------+------+-----+---------+ | Field | Type | Null | Key | Default | +----------------+--------------+------+-----+---------+ | tenant_id | string(256) | Yes | | NULL | | id | string(36) | NO | PRI | | | name | string(256) | YES | | NULL | | status | string(16) | YES | | NULL | | admin_state_up | boolean | YES | | NULL | | gw_port_id | string(36) | YES | MUL | NULL | | enable_snat | boolean | NO | | | | distributed | boolean | YES | | NULL | +----------------+--------------+------+-----+---------+
Add “distributed” flag to the router object data model. This will enable the agent to take necessary action based on the router model.
A new table for the service node enhanced L3 agent to track the SNAT service on each node.
+------------------+--------------+------+-----+---------+ | Field | Type | Null | Key | Default | +------------------+--------------+------+-----+---------+ | id | string(36) | NO | PRI | | | router_id | string(36) | YES | MUL | NULL | | host_id | string(255) | YES | | NULL | | l3_agent_id | string(36) | YES | MUL | NULL | +------------------+--------------+------+-----+---------+
A new table that is used to hold port bindings for DVR router interfaces only. This is similar to the portbindings table, but this table will hold bindings only for dvr router interfaces.
The original portbindings table will also hold one-binding row for a dvr router interface, but that won’t hold binding information. That binding row is held there, only to ensure transparency of dvr presence to the tenants themselves.
Some of the significant fields in the above are: port_id - This refers to the port id of the DVR Router interface for which this binding is applied to. The port-id will refer to id field of the port table. host - This holds the host on which the DVR interface is bound. router_id - This field indicates for which router interface, this binding belongs. status - This field represents the status of the dvr interface port on the host, which is represented by this binding.
The status field value of the single-binding row for dvr router interface in the original portbindings table will now be an ORed result of the above status field of all such bindings available in the above table for dvr router interfaces.
A new table that is used to hold Unique DVR Base mac assigned to OVS L2 agent that is running in DVR Mode.
For any given host where an OVS L2 Agent is running, only one MAC Address from the DVR Base Mac pool is allocated to that OVS L2 Agent. This allocation rpc cycle, completes during init() of the OVS L2 Agent.
In order to make OVS L2 Agent run in DVR Mode, enable_distributed_routing flag must be set to True in the [agent] section of ml2 ini file (ml2_conf.ini).
Similarly, the DVR Base Mac Address which represents start of the pool, need to be defined in neutron.conf
router-create Create a router for a given tenant.
router-create --name another_router --distributed=true
Admin can only set this attribute. The tenants need not be aware about this attribute in the router table. So it is not visible to the tenant.
Request
POST /v2.0/routers Accept: application/json { "router":{ "name":"another_router", "admin_state_up":true, "distributed":true} }
Response
{ "router":{ "status":"ACTIVE", "external_gateway_info":null, "name":"another_router", "admin_state_up":true, "distributed":true, "tenant_id":"6b96ff0cb17a4b859e1e575d221683d3", "id":"8604a0de-7f6b-409a-a47c-a1cc7bc77b2e"} }
router-show Show information of a given router.
Request
GET /v2.0/routers/a9254bdb-2613-4a13-ac4c-adc581fba50d Accept: application/json
Response
{ "routers":[{ "status":"ACTIVE", "external_gateway_info":{ "network_id":"" }, "name":"router1", "admin_state_up":true, "distributed":true, "tenant_id":"33a40233088643acb66ff6eb0ebea679", "id":"a9254bdb-2613-4a13-ac4c-adc581fba50d"}] }
router-update Create a router for a given tenant.
Admin can only update a centralized router to a distributed router.
Note: Admin can only update a centralized router to a distributed router and not the other way around. For the first release we are targeting only from centralized to distributed.
Admin only context:
neutron router-update router1 --distributed=True
Admin only CLI commands:
l3-agent-list-hosting-snat List L3 agents hosting a snat service.
This command will list the agent with the router-id and SNAT IP.
l3-agent-snat-add Associate a snat namespace to an L3 agent.
This command will allow an admin to associate a SNAT namespace to an agent. This command will take the router ID as an argument.
l3-agent-snat-remove Remove snat association from an L3 agent.
This command will allow an admin to remove or disassociate a SNAT service from the agent.
Need to make sure the existing FWaaS and the Security Group Rules are not affected by the DVR.
Yes this change will have some impact on the python-neutronclient
The Admin level API proposed above will have to be implemented in the CLI.
Also there is an impact with Horizon to address the admin level API mentioned above.
Inter VM traffic between the tenant’s subnet need not reach the router in the Network node to get routed and will be routed locally from the Compute Node. This would increase the performance substantially.
Also the Floating IP traffic for a VM from a Compute Node will directly hit the external network from the compute node, instead of going through the router on the network node.
Global Configuration to enable Distributed Virtual Router.
#neutron.conf [default] # To enable distributed routing this flag need to be enabled. # It can be either True or False. # If False it will work in a legacy mode. # If True it will work in a DVR mode. #router_distributed = True # ovs_neutron_plugin.ini # This flag need to be enabled for the L2 Agent to address # DVR rules #enable_distributed_routing = True # l3_agent.ini # # This flag is required by the L3 Agent as well to run the L3 # agent in a Distributed Mode. # #distributed_agent = True #
This will be disabled by default.
NOTE: This is for backward compatibility. For migration the admin might have to run the db-migration script and also re-start the agents with the right configuration to take effect.
If Cloud admin wanted to enable the feature this can be configured.
It currently uses the existing OVS binary in Linux Distribution. So there should not be any new binaries.
Primary assignee:
Other contributors:
Yes. Since we are implementing the Distributed Nature of routers, there need to be multinode setup for testing this feature so that the rules and actual namespace creation for the routers can be validated.
Single node infrastructure to test the feature may still be possible, but we need to validate.
Continuous integration testing to test the dvr at the gate will be considered.
Yes. There will be documentation impact and so documentation has to be modified to address the new deployment scenario. | http://specs.openstack.org/openstack/neutron-specs/specs/juno/neutron-ovs-dvr.html | CC-MAIN-2017-09 | refinedweb | 1,584 | 62.58 |
Maintained by Jo Albright.
Simple library to create an assignment that works like a ternary operator mutated with a switch statement
To run the example project, clone the repo, and run
pod install from the Example directory first.
enum AgeGroup: Int { case Baby, Toddler, Kid, Preteen, Teen, Adult } enum LifeStatus: Int { case Alive, Dead, Zombie } let life: LifeStatus? = .Zombie
Now that our variables are ready, we can play with the features.
First I want to show you how I wrote a switch assignment before. It was ok, but I don't like to settle for ok.
// embedded ternary operators ... old way of writing it let _ = life == .Alive ? UIColor.yellowColor() : life == .Dead ? UIColor.redColor() : life == .Zombie ? UIColor.grayColor() : UIColor.greenColor()
The inline Switchary assignment makes this much more readable.
// Switchary assignment inline // ??? starts the switch // ||| seperates the cases // *** is our default value let _ = life ??? .Alive --> UIColor.yellowColor() ||| .Dead --> UIColor.redColor() ||| .Zombie --> UIColor.grayColor() *** UIColor.greenColor()
// Switchary Range let _ = 21 ??? (0...3) --> AgeGroup.Baby ||| (4...12) --> AgeGroup.Kid ||| (13...19) --> AgeGroup.Teen *** AgeGroup.Adult
Currently I only support ranges, enums and basic types for the inline assignment. But I wanted to support all types of pattern matching. This closure assignment allows you to pass a value to match against and returns a value to be assigned.
// Switchary closure let _ = life ??? { switch $0 { case .Alive: return UIColor.greenColor() case .Dead: return UIColor.redColor() case .Zombie: return UIColor.grayColor() } } let _ = 12 ??? { switch $0 { case 0..<10: return UIColor.clearColor() case let x where x < 20: return UIColor.yellowColor() case let x where x < 30: return UIColor.orangeColor() case let x where x < 40: return UIColor.redColor() default: return UIColor.whiteColor() } }
Lastly there is an initializer protocol SwitchInit that takes a value and closure within the init. This allows for simple custom initialization based on the value pased in.
// Switchary Initalizer extension UIView: SwitchInit { } let button = UIButton (life) { switch $0 { case .Alive : $1.setTitle("Eat Lunch", forState: .Normal) case .Dead : $1.setTitle("Eat Dirt", forState: .Normal) case .Zombie : $1.setTitle("Eat Brains", forState: .Normal) } }
TBH : I have not found a good use for this feature yet.
Switchary is available through CocoaPods. To install it, simply add the following line to your Podfile:
pod "Switchary"
Switchary is also available through Swift Package Manager. Please take a look at the link to learn more about how to use SPM.
import PackageDescription let package = Package( name: "YOUR_PACKAGE_NAME", dependencies: [ .Package(url: "", majorVersion: 0) ] )
Switchary is available under the MIT license. See the LICENSE file for more info. | https://cocoapods.org/pods/Switchary | CC-MAIN-2019-13 | refinedweb | 423 | 61.53 |
In SAP you can add product images and show them on the product
pages in the Sana web store. Images can be added to the materials
using attachments in SAP.
You can equally use the procedure
which Sana provides out-of-the-box and SAP to add product
images.
The SAP user must have the necessary permissions to be able to
add attachments to the materials. If you can't add attachments in
SAP, please ask the SAP administrator to give you the necessary
permissions.
If the SAP user has the required permissions to add attachments,
then the button Services for Object will be
available in the material master data.
To manage the user in SAP, use the transaction code
SU01. The parameter SD_SWU_ACTIVE
must be added to the user on the Parameters tab to
allow the user to add attachments in SAP.
Step 1: Open material master data.
Step 2: Click on the Services for
Object button to add the image to the material. Then
click: Create > Create
Attachment.
Step 3: Find the necessary picture and add it
to the material. You can add multiple images to each material. All
images added to the material in SAP will be shown on the product
details page in the Sana web store.
Step 4: Click Attachment list,
if you need to view the list of all images added to the material.
You can open the attachment list only when at least one attachment
has been added to the material. In the Service: Attachment
list window you can also add and remove images.
In SAP you can choose which product images you want to show in
your Sana webstore.
In the main
menu of the Sana add-on
(/n/sanaecom/webstore), click Attachments
Overview (/n/sanaecom/att_ovrvew).
Step 1: Enter the Webstore Id
and select Product Images. You can use
Input Parameters as a filter to narrow search
results and show only those product images that you need. Click
"Execute".
Step 2: In the Attachments and Product
Images window you can see the list of materials and all
images attached to them. If you select the
Visibility checkbox, then your product images will
be shown on the product pages in the Sana webstore.
At the top of the window you can see the buttons Select
All, Unselect All and Update
Visibility which can be useful for quick managing of
product images visibility.
Step 1: When images are added to the materials
in SAP, run the Product import task in Sana Admin.
Open Sana Admin and click: Tools >
Scheduled tasks.
The Product import task retrieves material
information from the database to build or update the catalog of
your web store.
Step 2: When the Product
import task is completed, start the Product image
import task.
Before running the task make sure that it is configured. Click
Edit on the Product image
import task and enable the setting Import
product images from ERP. In the Separator
field enter the symbol that will be used to separate product image
information in the product image filename, like item number and
order number.
For more information, see "Product
Image Import".
The Product image import task will retrieve all
product images from SAP, rename them according to the image file
name format supported by the Sana web store, web store.
For more information about product image and image sizes, see
"Product
Images".
When the Product image import task is
completed, product images from SAP will be shown in the Sana web
store. | https://help.sana-commerce.com/sana-commerce-93/erp-user-guide/sap/product-images | CC-MAIN-2019-35 | refinedweb | 584 | 63.29 |
I get Assets/Scriptshas/EnemyAI.cs(29,71): error CS1525: Unexpected symbol `'
using UnityEngine;
using System.Collections;
public class EnemyAI : MonoBehaviour {
public Transform target;
public int moveSpeed;
public int roatationSpeed;
private Transform myTransform;
void Awake(){
myTransform = transform;
}
void Update () {
myTransform.rotation = Quaternion.Slerp(myTransform.rotation, Quaternion.LookRotation(target.position - myTransform.position), rotationSpeed * Time.deltaTime); // line 29 - the line the compiler complains about
}
}
The problem is in the line myTransform.rotation = Quaternion.Slerp(myTransform.rotation, Quaternion.LookRotation(target.position - myTransform.position), rotationSpeed * Time.deltaTime); which is line 29 in my script. Thank you for your time!
Can't say from this limited amount of information. Edit the question to include the rest of the script along with the actual error message. And place a comment on the line where the compiler indicates there is an error.
Did you mean to mark a variable as internal, but it has a capital I instead of lower case internal?
I edited.
It appears you have some funky characters in your line:29, i would delete the highlighted parts in the image and type them out again, rebuild the scripts and see if that helps.
I retyped the line and got 3 new errors.
Assets/Scriptshas/EnemyAI.cs(29,144): error CS0103: The name `rotationSpeed' does not exist in the current context
Assets/Scriptshas/EnemyAI.cs(29,51): error CS1502: The best overloaded method match for `UnityEngine.Quaternion.Slerp(UnityEngine.Quaternion, UnityEngine.Quaternion, float)' has some invalid arguments
Assets/Scriptshas/EnemyAI.cs(29,51): error CS1503: Argument `#3' cannot convert `object' expression to type `float'
Answer by robertbu · Mar 11, 2013 at 06:01 PM
If you paste your code into a word processor or editor that shows symbols, or you do a hexdump, you will find a symbol between 'target' and '.position' on that line. The hex value is AC, probably inserted by a word processor. The fix is to delete the line and retype it.
It worked for me.
The problem was that I copied a line from a PDF file and some special symbols must have come along with it.
Thank you!
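If this bites you often, a small scanner can report every character outside printable ASCII, which is usually the culprit behind a mysterious CS1525 after a copy-paste (a rough sketch; the class and method names are my own):

```csharp
using System;
using System.Collections.Generic;

static class HiddenCharFinder
{
    // Reports every character outside printable ASCII (tabs and CR allowed),
    // with its line, column and Unicode code point.
    public static List<string> Scan(string source)
    {
        var hits = new List<string>();
        int line = 1, col = 1;
        foreach (char c in source)
        {
            if (c == '\n') { line++; col = 1; continue; }
            if (c != '\t' && c != '\r' && (c < ' ' || c > '~'))
                hits.Add($"line {line}, col {col}: U+{(int)c:X4}");
            col++;
        }
        return hits;
    }
}
```

Feed it File.ReadAllText("Assets/Scripts/EnemyAI.cs") and retype any line it flags.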
Answer by Statement · Mar 11, 2013 at 04:31 PM
MSDN: Compiler Error CS1525
Perhaps you named a variable "internal" like this?
// error CS1525: Unexpected symbol `internal',
// expecting `.', `?', `[', `<operator>', or `identifier'
float internal = 4;
internal is a reserved keyword. Consider renaming your symbol to something else in such case.
Answer by tataygames · Jul 31, 2019 at 02:17 AM
DAMN, I spent hours searching for answers, and this f*ng copy-paste ruined my time. If you copy your code, paste it somewhere else, and then copy it back into your code, this f* error will come.
Having the ability to set the orientation of a listview, if I need it to scroll up/down or left/right
using Xamarin.Forms;

var List = new ListView();
List.Orientation = ListViewOrientation.Horizontal;
The most common use case for this one is to have multiple elements like App Store or Google Play store
Wouldn't this just be a
CarouselView?
Hey @PierceBoggan, correct me if I'm wrong, but CarouselView just displays one item at a time, and the intent for this one is to display as many items as the screen width can fit, just like ListView does but horizontally.
@MiguelCervantes , You can use Syncfusion ListView. It has support for both Horizontal and Vertical Orientation. By simply setting the property, Orientation="Horizontal".
Note: Syncfusion controls is available for free through the community license program if you qualify (less than 1 million USD in revenue).
The Telerik RadListView also provides this feature.
Depending on whether you require virtualisation, you could create a bindable StackLayout with its orientation set to horizontal.
Hello @MiguelCervantes, you might want something like Instagram's horizontal ListView?
Here is something very similar to it. I hope it can help you.
Also, I encourage you to follow my Xamarin Flipboard Magazine, where I post this kind of feature:
Have all a nice day!
You can rotate your ListView and then your ListView items (ViewCells) as well, it's a workaround, but it gets the job done.
I vote for this feature, plus RTL direction support. And maybe this will open up the issue of layout flow direction as well. As I understand it, XF does not yet support RTL at the shared-project level.
IMO ListView is one of the most used views in XF; I think this should be available out of the box instead of needing a third party for it
@MiguelCervantes
Were you able to solve it? What did you use?
There are several workarounds but I agree with you, this should be directly in Xamarin Forms
can you try the sample ...HorizontalListViewMVVM
How would I press like on this thing?
| https://forums.xamarin.com/discussion/comment/245341/ | CC-MAIN-2019-43 | refinedweb | 336 | 64.1 |
HOUSTON (ICIS) -- The US Army has flown a UH-60 Black Hawk helicopter on a 50/50 blend of ATJ-8 fuel from biomass-derived isobutanol producer Gevo and petroleum-based fuel, the company said on Monday. The helicopter is the first Army aircraft to fly using the isobutanol alcohol-to-jet (ATJ) fuel.
The Army is conducting flight tests with renewable fuels as part of a mandate that it certify 100% of its air platforms on alternative or renewable fuels by 2016. Flight tests with the Gevo isobutanol blend took place over the week of 11 November.
Gevo had previously signed a contract to supply over 16,000 gallons of fuel to the US Army to be used in testing. Flight testing at the Aviation Flight Test Directorate in Alabama is expected to be completed by March 2014.
The US Navy conducted tests with ATJ fuel from Gevo in 2012, completing a successful test flight with an A-10 Thunderbolt jet aircraft in June 2012.
When a developer lands in the games industry he has to change his state of mind about performance. In this industry we have to perform a lot of operations in less than 33 milliseconds (30 FPS, frames per second), possibly tuning the logic and the art assets to achieve 60 FPS on standalone (Windows/Linux/Mac) and consoles (Xbox One/PS4), and that means rendering the scene content, computing physics and running game logic in no more than 16 milliseconds! Not really an easy task, and that's why in our industry every CPU tick counts a lot.
So, what about the foreach statement? Well, this one is really bad, killing hundreds of CPU ticks just to allow the programmer to write less code! You think I'm exaggerating here? Let's have a look at some code for definitive proof.
Let's open Visual Studio (originally tested in VS2008 Professional, then VS2010 Professional, then VS2015 Enterprise and tested again with VS2017 Enterprise with .Net 4.6.2, to produce the compiled code below) and let's create a simple C# console app, and in that let's write the following very simple code:
public class DemoRefType
{
    public List<Object> intList = new List<Object>();

    public void Costly()
    {
        Object a = 0;
        foreach (int x in intList)
            a = x;
    }

    public void Cheap()
    {
        Object a = 0;
        for (int i = 0; i < intList.Count; i++)
            a = intList[i];
    }
}
That's an easy one, right? These two methods perform the same job, but one costs a lot in terms of CPU ticks... let's see why. I use ILSpy to look into the compiled code, so let's analyze the IL (intermediate language) I get after Visual Studio builds it (the result is unchanged over the years!).
Let's start with the Cheap method:
.method public hidebysig instance void Cheap () cil managed
{
    // Method begins at RVA 0x2140
    // Code size 36 (0x24)
    .maxstack 2
    .locals init (
        [0] int32 i
    )

    IL_0000: ldc.i4.0
    IL_0001: stloc.0
    IL_0002: br.s IL_0015
    // loop start (head: IL_0015)
        IL_0004: ldarg.0
        IL_0005: ldfld class [mscorlib]System.Collections.Generic.List`1<object> performanceDemo.DemoRefType::intList
        IL_000a: ldloc.0
        IL_000b: callvirt instance !0 class [mscorlib]System.Collections.Generic.List`1<object>::get_Item(int32)
        IL_0010: pop
        IL_0011: ldloc.0
        IL_0012: ldc.i4.1
        IL_0013: add
        IL_0014: stloc.0
        IL_0015: ldloc.0
        IL_0016: ldarg.0
        IL_0017: ldfld class [mscorlib]System.Collections.Generic.List`1<object> performanceDemo.DemoRefType::intList
        IL_001c: callvirt instance int32 class [mscorlib]System.Collections.Generic.List`1<object>::get_Count()
        IL_0021: blt.s IL_0004
    // end loop

    IL_0023: ret
} // end of method DemoRefType::Cheap
So, nothing odd in the above; it's pretty much what I would expect: a simple loop and a straight move of a reference value, nothing more.
Now let's have a look at what we get in IL from the Costly method:
.method public hidebysig instance void Costly () cil managed
{
    // Method begins at RVA 0x20ec
    // Code size 53 (0x35)
    .maxstack 1
    .locals init (
        [0] valuetype [mscorlib]System.Collections.Generic.List`1/Enumerator<object>
    )

    IL_0000: ldarg.0
    IL_0001: ldfld class [mscorlib]System.Collections.Generic.List`1<object> performanceDemo.DemoRefType::intList
    IL_0006: callvirt instance valuetype [mscorlib]System.Collections.Generic.List`1/Enumerator<!0> class [mscorlib]System.Collections.Generic.List`1<object>::GetEnumerator()
    IL_000b: stloc.0
    .try
    {
        IL_000c: br.s IL_001b
        // loop start (head: IL_001b)
            IL_000e: ldloca.s 0
            IL_0010: call instance !0 valuetype [mscorlib]System.Collections.Generic.List`1/Enumerator<object>::get_Current()
            IL_0015: unbox.any [mscorlib]System.Int32
            IL_001a: pop
            IL_001b: ldloca.s 0
            IL_001d: call instance bool valuetype [mscorlib]System.Collections.Generic.List`1/Enumerator<object>::MoveNext()
            IL_0022: brtrue.s IL_000e
        // end loop

        IL_0024: leave.s IL_0034
    } // end .try
    finally
    {
        IL_0026: ldloca.s 0
        IL_0028: constrained. valuetype [mscorlib]System.Collections.Generic.List`1/Enumerator<object>
        IL_002e: callvirt instance void [mscorlib]System.IDisposable::Dispose()
        IL_0033: endfinally
    } // end handler

    IL_0034: ret
} // end of method DemoRefType::Costly
Well, well, well... it's many lines longer and it contains some quite nasty logic. As we can see it allocates a generic enumerator (IL_0006) that gets disposed finally (IL_0028 to IL_002e), and that obviously is creating load on the GC (Garbage Collector). Is that it? Not really! We can also see (IL_0015) the nasty unbox operation, one of the most costly and slow in the framework! Please also note how the loop end is caught by the finally clause in case something happens (mostly an invalid casting), not really code we would write in the first place... and still we get it just using a foreach.
So, imagine having a few of these in your game logic executing at every frame... obviously real game logic is never as simple as this example, so it will be way nastier than the result shown above.
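To see the cost for yourself, here is a quick-and-dirty timing sketch (not a rigorous benchmark: no warm-up or statistics, and the class and method names are mine):

```csharp
using System;
using System.Collections.Generic;
using System.Diagnostics;

static class LoopTiming
{
    // One pass with foreach: acquires a List<T>.Enumerator and unboxes
    // every element, exactly as the IL above shows.
    public static long TimeForeach(List<object> data)
    {
        var sw = Stopwatch.StartNew();
        object a = 0;
        foreach (int x in data) a = x;
        sw.Stop();
        return sw.ElapsedTicks;
    }

    // One pass with an indexed for: plain reference moves, no enumerator.
    public static long TimeFor(List<object> data)
    {
        var sw = Stopwatch.StartNew();
        object a = 0;
        for (int i = 0; i < data.Count; i++) a = data[i];
        sw.Stop();
        return sw.ElapsedTicks;
    }
}
```

On a Release build the foreach pass generally comes out slower, because of the enumerator calls and the per-element unbox.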
We struggle already so much to keep our games above 30 FPS while presenting beautiful artwork (really costly to render) and a lot of nice VFX (visual effects, definitely costly), and we all love to rely on the underlying physics engine to improve the overall gaming experience: all that costs quite a lot... so when it comes to the game logic we have to write, every clock cycle and CPU tick is valuable, and we cannot possibly waste any of them. So let's remember two rules of thumb: prefer a plain indexed for loop over foreach in performance-critical code, and inspect the IL your hot paths compile to, because hidden enumerator and unboxing costs add up fast.
In the game industry we are all aiming at improving gamers' experiences, making it immersive as much as technically possible: gamers are quite demanding, so let's make sure that we always keep performance testing at the top of our coding practice, because losing even one frame in a second can be a failure factor from a market perspective. | https://www.experts-exchange.com/articles/28883/Did-you-know-that-C-foreach-statement-is-your-enemy-in-games-development.html | CC-MAIN-2018-43 | refinedweb | 948 | 58.99 |
Double check the actual xml of your .csproj file and make sure that the reference is in there.
It should be something similar to
<Reference Include="EPiServer.UI, Version=6.0.530.0, Culture=neutral, PublicKeyToken=8fe83dea738b45b7, processorArchitecture=MSIL">
<SpecificVersion>False</SpecificVersion>
<HintPath>bin\EPiServer.UI.dll</HintPath>
</Reference>
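If the reference XML looks right but the build still breaks, it can also help to check what identity the DLL sitting in bin actually carries (a sketch; the helper name is made up and the path below is a placeholder):

```csharp
using System;
using System.Reflection;

static class AssemblyCheck
{
    // Reads the full identity (name, Version, Culture, PublicKeyToken)
    // straight from the file, without loading it into the AppDomain.
    public static string Describe(string path)
        => AssemblyName.GetAssemblyName(path).FullName;
}
```

Console.WriteLine(AssemblyCheck.Describe(@"bin\EPiServer.UI.dll")); prints the identity; if the Version or PublicKeyToken differs from the .csproj entry, that mismatch is a likely cause.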
Microsoft Chart Controls did the trick. Install it from
This thing have been bothering me for a couple of days now.
When I try to build my project I get the error message: "Error 106: The type or namespace name 'UI' does not exist in the namespace 'EPiServer' (are you missing an assembly reference?)" and then I lose the IntelliSense for EPiServer.UI. The weird thing is that if I remove either EPiServer.dll or EPiServer.UI.dll and then add them again to my References, I get IntelliSense back and my project finds the EPiServer.UI namespace. That is, until I try to build it again.
The only co-worker (who wrote the code) I can ask about this problem has been sick all week.
This post follows on somewhat from my recent posts on running async startup tasks in ASP.NET Core. Rather than discuss a general approach to running startup tasks, this post discusses an example of a startup task that was suggested by Ruben Bartelink. It describes an interesting way to try to reduce the latencies seen by apps when they've just started, by pre-building all the singletons registered with the DI container.
The latency hit on first request
The ASP.NET Core framework is really fast, there's no doubt about that. Throughout its development there's been a huge focus on performance, even driving the development of new high-performance .NET types like
Span<T> and
System.IO.Pipelines.
However, you can't just have framework code in your applications. Inevitably, developers have to put some actual functionality in their apps, and if performance isn't a primary focus, things can start to slow down. As the app gets bigger, more and more services are registered with the DI container, you pull in data from multiple locations, and you add extra features where they're needed.
The first request after an app starts up is particularly susceptible to slowing down. There's lots of work that has to be done before a response can be sent. However this work often only has to be done once; subsequent requests have much less work to do, so they complete faster.
I decided to do a quick test of a very simple app, to see the difference between that first request and subsequent requests. I created the default ASP.NET Core web template with individual authentication using the .NET Core 2.2 SDK:
dotnet new webapp --auth Individual --name test
For simplicity, I tweaked the logging in appsettings.json to write request durations to the console in the
Production environment:
{
  "Logging": {
    "LogLevel": {
      "Default": "Warning",
      "Microsoft.AspNetCore.Hosting.Internal.WebHost": "Information"
    }
  }
}
I then built the app in
Release mode, and published it to a local folder. I navigated to the output folder and ran the app:
> dotnet publish -c Release -o ..\..\dist
> cd ..\..\dist
> dotnet test.dll
Hosting environment: Production
Now listening on:
Now listening on:
Application started. Press Ctrl+C to shut down.
Next I hit the home page of the app and recorded the duration for the first request logged to the console. I hit
Ctrl+C to close the app, started it again, and recorded another duration for the "first request".
Obviously this isn't very scientific, it's not a proper benchmark, but I just wanted a feel for it. For those interested, I'm using a Dell XPS 15" 9560, which has an i7-7700 and 32GB RAM.
I ran the "first request" test 20 times, and got the mean results shown below. I also recorded the times for the second and third requests.
After the 3rd request, all subsequent requests took a similar amount of time.
As you can see, there's a big difference between the first request and the second request. I didn't dive too much into where all this comes from, but some quick tests show that the vast majority of the initial hit is due to rendering Razor. As a quick test, I added a simple API controller to the app:
public class ValuesController : Controller
{
    [HttpGet("/warmup")]
    public string Index() => "OK";
}
Hitting this controller for the first request instead of the default Razor
Index page drops the first request time to ~90ms. Removing the MVC middleware entirely (and responding with a 404) drops it to ~45ms.
Pre-creating singleton services before the first request
So where is all this latency coming from for the first request? And is there a way we can reduce it so the first user to hit the site after a deploy isn't penalised as heavily?
To be honest, I didn't dive in too far. For my experiments, I wanted to test one potential mitigation proposed by Ruben Bartelink: instantiating all the singletons registered with the DI container before the first request.
Services registered as singletons are only created once in the lifetime of the app. If they're used by the ASP.NET Core framework to handle a request, then they'll need to be created during the first request. If we create all the possible singletons before the first request then that should reduce the duration of the first request.
To test this theory, I created a startup task that would instantiate most of the singletons registered with the DI container before the app starts handling requests properly. The example below uses the "
IServer decorator" approach I described in part 2 of my series on async startup tasks, but that's not important; you could also use the
RunWithTasksAsync approach, or the health checks approach I described in part 4.
The WarmupServicesStartupTask is shown below. I'll discuss the code shortly.

public class WarmupServicesStartupTask : IStartupTask
{
    private readonly IServiceCollection _services;
    private readonly IServiceProvider _provider;

    public WarmupServicesStartupTask(IServiceCollection services, IServiceProvider provider)
    {
        _services = services;
        _provider = provider;
    }

    public Task ExecuteAsync(CancellationToken cancellationToken = default)
    {
        foreach (var singleton in GetSingletons(_services))
        {
            // may be registered more than once, so get all at once
            _provider.GetServices(singleton);
        }

        return Task.CompletedTask;
    }

    static IEnumerable<Type> GetSingletons(IServiceCollection services)
    {
        return services
            .Where(descriptor => descriptor.Lifetime == ServiceLifetime.Singleton)
            .Where(descriptor => descriptor.ImplementationType != typeof(WarmupServicesStartupTask))
            .Where(descriptor => descriptor.ServiceType.ContainsGenericParameters == false)
            .Select(descriptor => descriptor.ServiceType)
            .Distinct();
    }
}
The
WarmupServicesStartupTask class implements
IStartupTask (from part 2 of my series) which requires that you implement
ExecuteAsync(). This fetches all of the singleton registrations out of the injected
IServiceCollection, and tries to instantiate them with the
IServiceProvider. Note that I call
GetServices() (plural) rather than
GetService() as each service could have more than one implementation. Once all services have been created, the task is complete.
The IServiceCollection is where you register your implementations and factory functions inside Startup.ConfigureServices. The IServiceProvider is created from the service descriptors in the IServiceCollection, and is responsible for actually instantiating services when they're required.
The
GetSingletons() method is what identifies the services we're going to instantiate. It loops through all the
ServiceDescriptors in the collection, and filters to only singletons. We also exclude the
WarmupServicesStartupTask itself to avoid any potential weird recursion. Next we filter out any services that are open generics (like
ILogger<T>) - trying to instantiate those would be complicated by having to take into account type constraints, so I chose to just ignore them. Finally, we select the type of the service, and get rid of any duplicates.
By default, the
IServiceCollection itself isn't added to the DI container, so we have to add that registration at the same time as registering our
WarmupServicesStartupTask:
public void ConfigureServices(IServiceCollection services)
{
    // Other registrations

    services
        .AddStartupTask<WarmupServicesStartupTask>()
        .TryAddSingleton(services);
}
And that's all there is to it. I repeated the test again with the
WarmupServicesStartupTask, and compared the results to the previous attempt:
I know, right! Almost knocked you off your chair. We shaved 26ms off the first request time.
I have to admit, I was a bit underwhelmed. I didn't expect an enormous difference, but still, it was a tad disappointing. On the positive side, it is close to a 10% reduction of the first request duration and required very little effort, so it's not all bad.
Just to make myself feel better about it, I did an unpaired t-test between the two apps and found that there was a statistically significant difference between the two samples.
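For reference, the statistic behind such a check takes only a few lines (a minimal Welch's t sketch; it stops at the t-statistic, leaving out the degrees of freedom and p-value lookup, and the names are mine):

```csharp
using System;
using System.Linq;

static class WelchT
{
    // Welch's unpaired t-statistic for two samples with (possibly)
    // unequal variances: (m1 - m2) / sqrt(s1^2/n1 + s2^2/n2).
    public static double TStatistic(double[] a, double[] b)
    {
        double meanA = a.Average(), meanB = b.Average();
        double varA = a.Sum(x => (x - meanA) * (x - meanA)) / (a.Length - 1);
        double varB = b.Sum(x => (x - meanB) * (x - meanB)) / (b.Length - 1);
        return (meanA - meanB) / Math.Sqrt(varA / a.Length + varB / b.Length);
    }
}
```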
Still, I wondered if we could do better.
Creating all services before the first request
Creating singleton services makes a lot of sense as a way to reduce first request latency. Assuming the services will be required at some point in the lifetime of the app, we may as well take the hit instantiating them before the app starts, instead of in the context of a request. This only gave a marginal improvement for the default template, but larger apps may well see a much bigger improvement.
Instead of just creating the singletons, I wondered if we could just create all of the services our app uses in the startup task; not only the singletons, but the scoped and transient services.
On the face of it, it seems like this shouldn't give any real improvement. Scoped services are created new for each request, and are thrown away at the end (when the scope ends). And transient services are created new every time. But there's always the possibility that creating a scoped service could require additional bootstrapping code that isn't required by singleton services, so I gave it a try.
I updated the WarmupServicesStartupTask to the following:

public class WarmupServicesStartupTask : IStartupTask
{
    private readonly IServiceCollection _services;
    private readonly IServiceProvider _provider;

    public WarmupServicesStartupTask(IServiceCollection services, IServiceProvider provider)
    {
        _services = services;
        _provider = provider;
    }

    public Task ExecuteAsync(CancellationToken cancellationToken = default)
    {
        using (var scope = _provider.CreateScope())
        {
            foreach (var singleton in GetServices(_services))
            {
                scope.ServiceProvider.GetServices(singleton);
            }
        }

        return Task.CompletedTask;
    }

    static IEnumerable<Type> GetServices(IServiceCollection services)
    {
        return services
            .Where(descriptor => descriptor.ImplementationType != typeof(WarmupServicesStartupTask))
            .Where(descriptor => descriptor.ServiceType.ContainsGenericParameters == false)
            .Select(descriptor => descriptor.ServiceType)
            .Distinct();
    }
}
This implementation makes two changes:
1. GetSingletons() is renamed to GetServices(), and no longer filters the services to singletons only.
2. ExecuteAsync() creates a new IServiceScope before requesting the services, so that the scoped services are properly disposed at the end of the task.
I ran the test again, and got some slightly surprising results. The table below shows the first request time without using the startup task (top), when using the startup task to only create singletons (middle), and using the startup task to create all the services (bottom).
That's a mean reduction in first request duration of 117ms, or 37%. No need for the t-test to prove significance here! I can only assume that instantiating some of the scoped/transient services triggers some lazy initialization which then doesn't have to be performed when a real request is received. There's possibly JIT time coming into play too.
Even with the startup task, there's still a big difference between the first request duration and the second and third requests, which are only 4ms and 1ms respectively. It seems very likely there's more that could be done here to trigger all the necessary MVC components to initialize themselves, but I couldn't see an obvious way, short of sending a real request to the app.
It's worth remembering that the startup task approach shown here shouldn't only improve the duration of the very first request. As different parts of your app are hit for the first time, most initialisation should already have happened, hopefully smoothing out the spikes in request duration for your app. But your mileage may vary!
Summary
In this post I showed how to create a startup task that loads all the singletons registered with the DI container on app startup, before the first request is received. I showed that loading all services in particular, not just singletons, gave a large reduction in the duration of the first request. Whether this task will be useful in practice will likely depend on your application, but it's simple to create and add, so it might be worth trying out! Thanks again to Ruben Bartelink for suggesting it. | https://andrewlock.net/reducing-latency-by-pre-building-singletons-in-asp-net-core/ | CC-MAIN-2021-31 | refinedweb | 1,827 | 53.61 |
cuserid(3c) [opensolaris man page]
cuserid(3C) Standard C Library Functions cuserid(3C) NAME
cuserid - get character login name of the user SYNOPSIS
#include <stdio.h> char *cuserid(char *s); DESCRIPTION The cuserid() function generates a character-string representation of the login name that the owner of the current process is logged in under. If s is a null pointer, this representation is generated in an internal static area, the address of which is returned. Otherwise, s is assumed to point to an array of at least L_cuserid bytes; the representation is left in this array. RETURN VALUES
If the login name cannot be found, cuserid() returns a null pointer. If s is not a null pointer, the null character `\0' will be placed at s[0]. ATTRIBUTES
See attributes(5) for descriptions of the following attributes: +-----------------------------+-----------------------------+ | ATTRIBUTE TYPE | ATTRIBUTE VALUE | +-----------------------------+-----------------------------+ |MT-Level |MT-Safe | +-----------------------------+-----------------------------+ SEE ALSO
getlogin(3C), getpwnam(3C), attributes(5) SunOS 5.11 30 Dec 1996 cuserid(3C)
Use getpwuid(geteuid()) instead, if that is what you meant. DO NOT USE cuserid(). SEE ALSO
geteuid(2), getuid(2) Linux 1.2.13 1995-09-03 GETLOGIN(3) | https://www.unix.com/man-page/opensolaris/3c/cuserid/ | CC-MAIN-2020-16 | refinedweb | 124 | 68.57 |
How to Create Custom Validators in Angular
In this post, we look at how to create the functionality that tells your user if they've entered in their information correctly. But, you know, in a nice way.
In this blog post, we will learn how to create custom validators in Angular Reactive Forms. If you are new to reactive forms, learn how to create your first Angular reactive form here.
Let's say we have a login form as shown in the code below. Currently, the form controls do not have any validations attached to it.
ngOnInit() {
  this.loginForm = new FormGroup({
    email: new FormControl(null),
    password: new FormControl(null),
    age: new FormControl(null)
  });
}
Here, we are using
FormGroup to create a reactive form. On the component template, you can attach
loginForm as shown in the code below. Using property binding, the
formGroup property of the HTML form element is set to
loginForm and the
formControlName value of these controls is set to the individual
FormControl property of
FormGroup.
This will give you a reactive form in your application:
Using Validators
Angular provides us with many useful validators, including required, minLength, maxLength, and pattern. These validators are part of the Validators class, which comes with the @angular/forms package.
Let's assume you want to add a required validation to the email control and a
maxLength validation to the password control. Here's how you do that:
ngOnInit() {
  this.loginForm = new FormGroup({
    email: new FormControl(null, [Validators.required]),
    password: new FormControl(null, [Validators.required, Validators.maxLength(8)]),
    age: new FormControl(null)
  });
}
To work with validators, make sure to import them into the component class:
import { FormGroup, FormControl, Validators } from '@angular/forms';
On the template, you can use validators to show or hide an error message. Essentially, you are reading
formControl using the
get() method and checking whether it has an error or not using the
hasError() method. You are also checking whether
formControl is touched or not using the touched property.
If the user does not enter an email, then the reactive form will show an error as follows:
Custom Validators
Let us say you want the age range to be from 18 to 45. Angular does not provide us range validation; therefore, we will have to write a custom validator for this.
In Angular, creating a custom validator is as simple as creating another function. The only thing you need to keep in mind is that it takes one input parameter of type
AbstractControl and it returns an object of a key-value pair if the validation fails.
Let's create a custom validator called ageRangeValidator, where the user should be able to enter an age only if it's in a given range.
The type of the first parameter is
AbstractControl, because it is a base class of
FormControl,
FormArray, and
FormGroup, and it allows you to read the value of the control passed to the custom validator function. The custom validator returns either of the following:
- If the validation fails, it returns an object, which contains a key-value pair. The key is the name of the error and the value is always the Boolean true.
- If the validation does not fail, it returns null.
Now, we can implement the ageRangeValidator custom validator in the below listing:
function ageRangeValidator(control: AbstractControl): { [key: string]: boolean } | null {
  if (control.value !== undefined && (isNaN(control.value) || control.value < 18 || control.value > 45)) {
    return { 'ageRange': true };
  }
  return null;
}
Here, we are hardcoding the maximum and minimum range in the validator. In the next section, we will see how to pass these parameters.
Now, you can use ageRangeValidator with the age control as shown in the code below. As you see, you need to add the name of the custom validator function in the array:
ngOnInit() {
  this.loginForm = new FormGroup({
    email: new FormControl(null, [Validators.required]),
    password: new FormControl(null, [Validators.required, Validators.maxLength(8)]),
    age: new FormControl(null, [ageRangeValidator])
  });
}
On the template, the custom validator can be used like any other validator. We are using the ageRange validation to show or hide the error message.
If the user does not enter an age between 18 and 45, then the reactive form will show an error:
Now the age control is working with the custom validator. The only problem with
ageRangeValidator is that the hardcoded age range only validates numbers between 18 and 45. To avoid a fixed range, we need to pass the maximum and minimum age to
ageRangeValidator.
Passing Parameters to a Custom Validator
An Angular custom validator does not directly take extra input parameters aside from the reference of the control. To pass extra parameters, you need to add a custom validator inside a factory function. The factory function will then return a custom validator.
You heard right: in JavaScript, a function can return another function.
Essentially, to pass parameters to a custom validator you need to follow these steps:
- Create a factory function, and pass to it the parameters that the custom validator needs.
- The return type of the factory function should be ValidatorFn, which is part of @angular/forms.
- Return the custom validator from the factory function.
The factory function syntax will be as follows:
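In outline, the factory closes over the extra parameters and returns the validator function. A self-contained sketch (the type aliases stand in for the real @angular/forms types, and the names are placeholders):

```typescript
// Stand-ins for the @angular/forms types, so this sketch runs on its own.
type AbstractControl = { value: any };
type ValidatorFn = (control: AbstractControl) => { [key: string]: boolean } | null;

// The factory takes the extra parameters and returns the actual validator,
// which closes over them.
function customValidator(limit: number): ValidatorFn {
  return (control: AbstractControl): { [key: string]: boolean } | null => {
    if (control.value !== undefined && control.value > limit) {
      return { 'custom': true };
    }
    return null;
  };
}
```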
Now you can refactor the
ageRangeValidator to accept input parameters as shown in the listing below:
function ageRangeValidator(min: number, max: number): ValidatorFn {
  return (control: AbstractControl): { [key: string]: boolean } | null => {
    if (control.value !== undefined && (isNaN(control.value) || control.value < min || control.value > max)) {
      return { 'ageRange': true };
    }
    return null;
  };
}
We are using the input parameters max and min to validate age control. Now, you can use
ageRangeValidator with age control and pass the values for max and min as shown in the code below:
min = 10;
max = 20;

ngOnInit() {
  this.loginForm = new FormGroup({
    email: new FormControl(null, [Validators.required]),
    password: new FormControl(null, [Validators.required, Validators.maxLength(8)]),
    age: new FormControl(null, [ageRangeValidator(this.min, this.max)])
  });
}
On the template, the custom validator can be used like any other validator. We are using
ageRange validation to show or hide an error message:
In this case, if the user does not enter an age between 10 and 20, the error message will be shown as seen below:
And there you have it: how to create a custom validator for Angular Reactive Forms.
I had a foreboding of it. Dare Obasanjo:
They say developers love DOM. Hmmm, Java developers definitely don't; just look at the numerous Java-and-developer-friendly DOM alternatives like JDOM, DOM4J, XOM, etc. I remember my experience of processing XML using DOM in Java as a nightmare; I never had to write such dirty Java code before or after that. The Microsoft part of the XML world is different: the brilliant MSXML implementation is the only tree-based XML API for native Windows programming and ... it's brilliant. You see, the only one and brilliant: that's a killer combination, and it wouldn't be so bad to have such a thing in .NET. Btw, many don't realize MSXML actually extended DOM to make it at least usable, e.g. with the "xml" property (InnerXml/OuterXml in XmlDocument) and other properties that aren't actually W3C DOM. So the truth is developers love MSXML, not DOM. And they obviously love XmlDocument, because it's habitual and because it's easier to use XmlDocument than XPathDocument even in areas where they compete. Try to select a value from namespaced XML to understand what I'm talking about. So XPathDocument is read-only and has a clumsy API... No chance, be it even 10x faster.
And there is another side: one size doesn't fit all. I doubt XmlDocument can be made radically faster, and it's interesting to see how it will survive in the XQuery era, whose data model differs from DOM even more than XPath 1.0's does. So instead of focusing fully on reanimating XmlDocument, I wish the System.Xml devs would focus on developing several tools, each optimized for different scenarios. I wish we had an editable XPathDocument in .NET 2.0. Do you?
This sucks. I want editable XPathDocument....
Aaron asks if we believe in Contract-First. I sure do. BUT, he nailed it right on the head: It's very...
TITLE: More Information on the XPathDocument/XmlDocument Change in Whidbey beta 2
URL:
IP: 66.129.67.202
BLOG NAME: Dare Obasanjo's WebLog
DATE: 09/03/2004 07:48:38 AM
So it's editable XPathNavigator with read-only XPathDocument. Hmm.
Well, it's definitely a step in the right direction. Too small, but at least now it's possible to develop Mvp.Xml.EditableXPathDocument without inventing a proprietary update API.
So they cut WinFS out of Longhorn, and some of Avalon(?) as well, and are basically decoupling all 3 pillars of the next-gen platform from Windows so that developers don't have to change their existing code to run on Longhorn. I guess MSFT is siding with the Chen Camp...
The XPathNavigator will be editable in Whidbey. However the XPathDocument will not be.
Let's build our own, like Don said. That was a bad decision by them. Devs like DOM because they don't know anything else. Evangelism on the topic would rally the support needed.
Sounds like a perfect opportunity for the Mvp.Xml project!
Do it ourselves, or get MS to release the code as open source. The best of both worlds: backward compatibility, and something for the cutting-edge folks that need the performance.
Don
I do.
I already have an XPathNavigator over a relational content model, and I was waiting like crazy to make it Editable.
This page contains a single entry by Oleg Tkachenko published on August 26, 2004 3:34 PM.
"XSLT 2.0" and "XPath 2.0" books by Michael Kay are in print was the previous entry in this blog.
XQuery 1.0 and XPath 2.0 type hierarchy chart is the next entry in this blog.
Briefly, IronPython is an implementation of Python in the .Net runtime. This gives you access to .Net framework goodness while programming in a dynamic language. The current stable version, 2.0.1, maps to CPython 2.5. This allows me to do fun fun things like use my python project to access a c# project. What follows is a python script using Pinsor and accessing a .Net dll.
C# code:
public class Command
{
    private readonly DoingStuff _stuff;

    public Command(DoingStuff stuff)
    {
        _stuff = stuff;
    }

    public void Execute()
    {
        Console.WriteLine(_stuff.Message);
    }
}

public class DoingStuff
{
    public string Message
    {
        get { return "From the Doing Stuff Class, I'm doing stuff"; }
    }
}
python code:
from pinsor import *
import clr

clr.AddReference('csharpdemo')
from csharpdemo import *

kernel = PinsorContainer()
kernel.register(Component.oftype(DoingStuff),
                Component.oftype(Command).depends([DoingStuff]))
cmd = kernel.resolve(Command)
cmd.Execute()
output is like:
C:\ipyexmaple>"c:\Program Files\IronPython 2.0.1\ipy.exe" ipydemo.py
From the Doing Stuff Class, I'm doing stuff
I'm sure you're all asking yourselves "what the heck", so let me summarize:
- created c# dll
- placed a python script next to it named ipydemo.py
- in that script I imported the c# dll
- then I called my custom python IoC container Pinsor which is written in pure python
- added .Net objects to my python IoC container
- resolved a .Net object from my python IoC container
- then executed it with the expected results and dependencies.
When I did this test I stared at the screen completely stunned for a bit. Pinsor was very easy to implement, and it has some decent abilities considering its short life and my limited time. I doubt I could make the same thing in C# in twice as much time, and I’m better in C#. This opens up some worlds for me.
With all this in mind has anyone had a chance to play with ironpython support in VS 2010?
Finally, I'd like to thank Michael Foord for his blog and book IronPython in Action. I highly recommend reading both if you're interested in quality programming, but especially if any of this intrigues you.
CFD Online Discussion Forums > FLUENT > Using UDF to alter particle size
Tumppi
May 18, 2005 04:05
Using UDF to alter particle size
Hello everybody
Kind of a newbie question here. I'm trying to alter the particle size (diameter) with a UDF in Fluent. I made a little code for testing purposes and interpreted it, and then defined an injection which uses this UDF. But when I display the particle track, the diameter doesn't change. I used DEFINE_DPM_LAW as the define function. If someone could help me it would be great.
Allan Walsh
May 18, 2005 13:45
Re: Using UDF to alter particle size
I assume you are manipulating P_DIAM(p). Print out P_DIAM(p) in your UDF, before and after manipulating it. Depending on particle type (i.e. inert, combusting, etc.) FLUENT could reset it somewhere else.
Good luck.
Tumppi
May 23, 2005 07:50
Re: Using UDF to alter particle size
Well, it seems that I'm doing something really wrong. When I put printf lines in to see what happens, I only get 0 values for P_DIAM(p), and it's only reported once at the start.
If someone can guide me in the right direction that would be nice.
Allan Walsh
May 23, 2005 11:13
Re: Using UDF to alter particle size
Do the values for P_DIAM(P) match up with what FLUENT shows if you do a step-by-step particle tracking? They should.
If it is only getting to your print statement once, it could be that FLUENT is only using your UDF once and then setting the particle to use the next law. Can you check that?
Tumppi
May 25, 2005 03:21
Re: Using UDF to alter particle size
I think I found the answer to why I can't get the UDF to work. If you read part 19.9.3 Particle Types in the manual, you can see that only Law 1 is active when one uses an inert particle (which I did). So I guess that I have to use a droplet particle type to get my UDF to work, am I right?
Thank you for your help so far Allan.
Tumppi
May 27, 2005 03:23
Re: Using UDF to alter particle s. Problem Solved
Well, I solved the problem and got it to work with an inert particle. I had selected the wrong law in the custom laws panel of the injection properties. When I put my custom law as Law 1, I got it to work.
Thanks for help.
tagada
April 7, 2011 10:17
Hello Tumppi,
I'm very interested to have a look on your UDF, I'm also trying to alter the particle size with UDF an,d it doesn't work.
This is my code:
#include "udf.h"
#include "dpm.h"
DEFINE_DPM_LAW (diam_particle, p, ci)
{
/* compute new particle diameter */
P_DIAM(p) = 6*pow((0.015/2*0.7*1000.0*9.81)* 0.000300*P_POS(p)[0],1/3);
}
Thank you
30 September 2009 17:13 [Source: ICIS news]
HOUSTON (ICIS news)--The energy industry hopes to defeat cap-and-trade and CO2 legislation by stalling the bills until after the 2010 US mid-term elections, a trade organisation official said on Wednesday.
“I am holding on for dear life for those elections,” National Petrochemical & Refiners Association (NPRA) executive vice president Greg Scott said at a methanol forum.
“The clock is our friend,” he added. “If polling numbers hold, Democrats will lose seats in both the House and the Senate.”
The cap-and-trade bill is likely to pass Senate committees, but should die on the Senate floor where 44 Senators have already vowed not to pass such a bill in 2009, Scott said.
“If you can’t get to 60 [votes], you can’t do anything,” Scott said.
While the Democrats have 60 votes currently, NPRA also hopes to sway about 10-15 moderate Democrats, largely from the
The NPRA acknowledged it might not be able to ultimately win their votes, but hoped to use those concerns to slow down legislation until November 2010.
“Hopefully the electorate will send a message to [
Scott predicted that CO2 regulation efforts would be postponed until after the 2010 mid-term elections, while the Toxic Substances Control Act (TSCA) and Chemical Facility Anti-Terrorism Standards (CFATS) were unlikely to be taken up before 2010 as well, in large part due to stall tactics.
Scott said the NPRA also was trying to put up every roadblock it could to the US Environmental Protection Agency’s (EPA) efforts at greenhouse gas (GHG) regulation, adding that the NPRA was confident it could delay such regulations for at least 5-10 years.
In the near term, key remaining 2009 issues are likely to be proposed low carbon fuel standards and industry taxes, he said.
Those taxes include $80bn (€55bn) on the petrochemical and refining industries from President Barack Obama’s 2010 budget.
“Right or wrong, they still regard the refining and petrochemical industries as deep pockets with lots of money to give, and they fully intend for us to give,” Scott said.
Scott also criticised the budget’s proposed repeal of the last-in, first-out (LIFO) accounting system for all industries, which would create substantial tax liabilities for all businesses, he said.
The 7th Annual Methanol Forum was hosted by consultancy Jim Jordan & Associates.
ganesha-rados-grace - Man Page
manipulate the shared grace management database
Synopsis
ganesha-rados-grace [ --cephconf /path/to/ceph.conf ] [--ns namespace] [ --oid obj_id ] [ --pool pool_id ] [ --userid cephuser ] dump|add|start|join|lift|remove|enforce|noenforce|member [ nodeid ... ]
Description
This tool allows the administrator to directly manipulate the database used by the rados_cluster recovery backend. Cluster nodes use that database to indicate their current state in order to coordinate a cluster-wide grace period.
The first argument should be a command to execute against the database. Any remaining arguments represent the nodeids of nodes in the cluster that should be acted upon.
Most commands will just fail if the grace database is not present. The exception to this rule is the add command which will create the pool, database and namespace if they do not already exist.
Note that this program does not consult ganesha.conf. If you use non-default values for ceph_conf, userid, grace_oid, namespace or pool in your RADOS_KV config block, then they will need to be passed in via command-line options.
Options
- --cephconf
- Specify the ceph.conf configuration that should be used (default is to use the normal search path to find one)
- --ns
- Set the RADOS namespace to use within the pool (default is NULL)
- --oid
- Set the object id of the grace database RADOS object (default is "grace")
- --pool
- Set the RADOS poolid in which the grace database object resides (default is "nfs-ganesha")
- --userid
- Set the cephx user ID to use when contacting the cluster (default is NULL)
Commands
dump
Dump the current status of the grace period database to stdout. This will show the current and recovery epoch serial numbers, as well as a list of hosts currently in the cluster and what flags they have set in their individual records.
add
Add the specified hosts to the cluster. This must be done before the given hosts can take part in the cluster. Attempts to modify the database by cluster hosts that have not yet been added will generally fail. New hosts are added with the enforcing flag set, as they are unable to hand out new state until their own grace period has been lifted.
start
Start a new grace period. This will begin a new grace period in the cluster if one is not already active and set the record for the listed cluster hosts as both needing a grace period and enforcing the grace period. If a grace period is already active, then this is equivalent to join.
join
Attempt to join an existing grace period. This works like start, but only if there is already an existing grace period in force.
lift
Attempt to lift the current grace period. This will clear the need grace flags for the listed hosts. If there are no more hosts in the cluster that require a grace period, then it will be fully lifted and the cluster will transition to normal operations.
remove
Remove one or more existing hosts from the cluster. This will remove the listed hosts from the grace database, possibly lifting the current grace period if there are no more hosts that need one.
enforce
Set the flag for the given hosts that indicates that they are currently enforcing the grace period; not allowing the acquisition of new state by clients.
noenforce
Clear the enforcing flag for the given hosts, meaning that those hosts are now allowing clients to acquire new state.
member
Test whether the given hosts are members of the cluster. Returns an error if any of the hosts are not present in the grace db omap.
Flags
When the dump command is issued, ganesha-rados-grace will display a list of all of the nodes in the grace database, and any flags they have set. The flags are as follows:
E (Enforcing)
The node is currently enforcing the grace period by rejecting requests from clients to acquire new state.
N (Need Grace)
The node currently requires a grace period. Generally, this means that the node has clients that need to perform recovery.
Nodeid Assignment
Each running ganesha daemon requires a nodeid string that is unique within the cluster. This can be any value as ganesha treats it as an opaque string. By default, the ganesha daemon will use the hostname of the node where it is running.
This may not be suitable when running under certain HA clustering infrastructure, so it's generally recommended to manually assign nodeid values to the hosts in the RADOS_KV config block of ganesha.conf.
Ganesha Configuration
The ganesha daemon will need to be configured with the RecoveryBackend set to rados_cluster. If you use a non-default pool, namespace or oid, nodeid then those values will need to be set accordingly in the RADOS_KV config block as well.
Starting a New Cluster
First, add the given cluster nodes to the grace database. Assuming that the nodes in our cluster will have nodeids ganesha-1 through ganesha-3:
ganesha-rados-grace add ganesha-1 ganesha-2 ganesha-3
Once this is done, you can start the daemons on each host and they will coordinate to start and lift the grace periods as-needed.
Adding Nodes to a Running Cluster
After this point, new nodes can then be added to the cluster as needed using the add command:
ganesha-rados-grace add ganesha-4
After the node has been added, ganesha.nfsd can then be started. It will then request a new grace period as-needed.
Removing a Node from the Cluster
To remove a node from the cluster, first unmount any clients that have that node mounted (possibly moving them to other servers). Then execute the remove command with the nodeids to be removed from the cluster. For example:
ganesha-rados-grace remove ganesha-4
This will remove the ganesha-4's record from the database, and possibly lift the current grace period if one is active and it was the last one to need it. | https://www.mankier.com/8/ganesha-rados-grace | CC-MAIN-2021-04 | refinedweb | 994 | 60.24 |
Numpy, MATLAB and singular matrices
I tried to get this result in Euler, using various methods, including the package ALGLIB, which uses LAPACK routines, and the LAPACK routines of Maxima. Really, no success.
>A=redim(1:9,3,3); b=sum(A);
>A\b
Determinant zero!
Error in :
A\b
^
>alsolve(A,b,fit(A,b))
0
3
0
>svdsolve(A,b)
1
1
1
>&load(lapack);
>&dgesv(@A,@b)
[ 0.0 ]
[ ]
[ 3.0 ]
[ ]
[ 0.0 ]
>M=A|b; M=M/b
0.166667 0.333333 0.5 1
0.266667 0.333333 0.4 1
0.291667 0.333333 0.375 1
>pivotize(M,1,3)
0.333333 0.666667 1 2
0.133333 0.0666667 0 0.2
0.166667 0.0833333 0 0.25
>pivotize(M,2,1)
0 0.5 1 1.5
1 0.5 0 1.5
0 0 0 0
>
Sorry about the formatting. Cannot use ?
Mathematica will give the same solution and a warning, but no condition number.
LinearSolve[N@{{1,2,3}, {4,5,6}, {7,8,9}}, {15,15,15}]
LinearSolve::luc: “Result for LinearSolve of badly conditioned matrix {{1.,2.,3.},{4.,5.,6.},{7.,8.,9.}} may contain significant numerical errors”
{-39., 63., -24.}
R will give no solution, just an error and a condition number.
solve(matrix(1:9, ncol=3, byrow=T), c(15,15,15))
system is computationally singular: reciprocal condition number = 2.59052e-18
I’m not good with Maple and I could only get a symbolic (full) solution, even with inexact numbers.
Julia 0.1.2 gives the same solution, but no warning.
In my opinion, octave gives the best solution:
warning: matrix singular to machine precision, rcond = 3.08471e-18
warning: attempting to find minimum norm solution
warning: dgelsd: rank deficient 3×3 matrix, rank = 2
x =
-7.5000e+00
6.8782e-15
7.5000e+00
It is possible to get this solution by using the pseudo inverse of A
——————————————–
from numpy.linalg import pinv # python
x=dot(pinv(A),b)
——————————————–
x=pinv(A)*b % matlab
——————————————–
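As a cross-check of the pseudoinverse route, NumPy's pinv reproduces the minimum-norm solution that octave reported above (matrix and right-hand side as in the post):

```python
import numpy as np
from numpy.linalg import pinv

A = np.arange(1.0, 10.0).reshape(3, 3)   # [[1,2,3],[4,5,6],[7,8,9]]
b = np.array([15.0, 15.0, 15.0])

x = pinv(A) @ b                  # minimum-norm least-squares solution
print(x)                         # approximately [-7.5, 0, 7.5]
print(np.allclose(A @ x, b))     # True: b lies in the range of A
```

The SVD inside pinv truncates the (numerically) zero singular value, which is exactly what produces the minimum-norm answer rather than NaNs or a huge spurious solution.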
Nice blog
Thanks for all of the comments and evaluations in other systems.
@Szabolcs – As you probably discovered yourself, if you use Mathematica’s linear solve with exact arithmetic you get a completely different answer, which is interesting
In[1]:= LinearSolve[{{1, 2, 3}, {4, 5, 6}, {7, 8, 9}}, {15, 15, 15}]
Out[1]= {-15, 15, 0}
Maple doesn’t give a warning either but does give results compatible with most of the other systems:
with(LinearAlgebra):
M := Matrix([[1.0, 2.0, 3.0],[4.0, 5.0, 6.0],[7, 8, 9]]):
v := Vector([15, 15, 15]):
x:=LinearSolve(M,v):
lprint(x);
gives
Vector[column](3, {1 = HFloat(-38.9999999999999929), 2 = HFloat(62.9999999999999929), 3 = HFloat(-24.)}, datatype = float[8], storage = rectangular, order = Fortran_order, shape = [])
I’m relatively new to Maple myself so I am not yet sure how to just get the floating point numbers of the returned vector in a simple list like (a,b,c)
The Julia team are also looking at this issue now:
An issue was reported on NumPy but no response so far:
Good blog! A related question is how you can tell in this particular case that the matrix must be singular (i.e. must have a vanishing determinant) without explicitly computing its determinant.
Here’s something weird in MATLAB:
A = [1 2 3; 4 5 6; 7 8 9]
B = [1 4 7; 2 5 8; 3 6 9]
Up to now it looks like A’ == B and MATLAB claims it’s so. But now try
A’ \ [15;15;15]
Warning: Matrix is close to singular or badly scaled. Results may
be inaccurate. RCOND = 1.156482e-18.
ans = [-2.5; 0; 2.5]
B \ [15; 15; 15]
Warning: Matrix is singular to working precision.
ans = [NaN; NaN; NaN]
Does MATLAB not really carry out the transposition in this case, just change how it interprets the underlying data?
I realized that I entered the transpose of the matrix into Maple when I tried it. Interestingly, in this case it gives a symbolic solution, even if the matrix had inexact numbers:
Vector(3, {(1) = -5.+_t1[1], (2) = 5.-2*_t1[1], (3) = _t1[1]})
I also tried the transpose in other systems:
In MATLAB I get a warning, as before. (But see my other comment—it might return NaN.)
In Mathematica I get {-2.5, 0, 2.5} but no warning.
R gives no solution and says “Lapack routine dgesv: system is exactly singular: U[3,3] = 0″
Julia gives no result and says “ERROR: SingularException(3)”
What Szabolcs is seeing is the effect of rounding errors.
It just so happens that LU factorization of B = A’ produces an exactly singular
U matrix, and this is what produces the NaNs in the solution.
But rounding errors make the U factor for A nonsingular:
>> B = zeros(3); B(:) = 1:9; A = B’; [LA,UA] = lu(A); [LB,UB] = lu(B); UA, UB
UA =
7.0000e+00 8.0000e+00 9.0000e+00
0 8.5714e-01 1.7143e+00
0 0 1.1102e-16
UB =
3 6 9
0 2 4
0 0 0
Additional background to supplement Nick’s answer to Szabolcs:
The A’\[15;15;15] case goes through an optimized code path that avoids doing an explicit transpose. The different results are then different round-off behaviors from two separate code paths.
@Szabolcs:
In your example, A’\b should be similar to doing: linsolve(A,b,struct(‘TRANSA’,true)). This selects a separate execution path specifically to solve a system of the form A’*x=b, which explains the different results.
Of course mldivide will first perform tests on the matrix A to check for special properties and choose an appropriate solver [1], while linsolve relies on the user to specify the properties of the coefficient matrix.
The underlying LAPACK routine used to solve [2] a general double-precision real matrix is DGESV. The expert version of this driver (DGESVX) let us specify whether to solve for Ax=b or A’x=b (second parameter to the function) [3].
[1]:
[2]:
[3]:
Note that my comment above only holds when using the operator form A’\b not the function form mldivide(A’,b) of the backslash.
My guess is that the MATLAB parser specifically recognizes the first case and optimizes it accordingly at an early stage, but not the second form where the matrix would already be transposed before handing it to mldivide…
Is there a function in IPython or numpy already to get an estimate of the matrix condition number or its reciprocal?
Warren
There’s numpy.linalg.cond but, long term, there’s a better way to find the estimate in numpy’s solve. As it says within the discussion at solve currently uses LAPACK’s [DZ]GESV but if they used [DZ]GESVX instead they’d get the reciprocal condition number along with the solution.
Mike
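To Warren's question above: `numpy.linalg.cond` computes it directly, and on the matrix from the post it makes the near-singularity obvious:

```python
import numpy as np

A = np.arange(1.0, 10.0).reshape(3, 3)   # the singular matrix from the post
kappa = np.linalg.cond(A)                # 2-norm condition number via SVD
print(kappa)        # astronomically large (~1e16 or more)
print(1.0 / kappa)  # reciprocal condition number, effectively zero
```

The reciprocal is the quantity MATLAB reports as RCOND in its warning.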
This is timely. We are about to hit linear and non-linear equations in my “computational methods in physics” course, and I can use this as an example of of being aware of what might go wrong. I was showing them how to get a wrong answer out of quad (using scipy.integrate) today.
Sounds like an interesting topic for a blog post, Jim
@Mike Croucher
Go right ahead. Here is simple code to calculate the same integral twice and get one right and one wrong result. Explaining why was a homework problem.
from scipy.integrate import quad
from scipy import exp
from numpy import inf

def bb(x):
    return x**3/(exp(x)-1)

print quad(bb,0,inf)

print '\n We can fool it.'
print 'Here we do the same calculation with a change of variable'

def bb2(t):
    x=1e5*t
    return 1e5*(x**3/(exp(x)-1))

print quad(bb2,0,inf)
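A note for readers reproducing this: the integral has the closed form pi**4/15 (it is Gamma(4)*zeta(4)), which makes the bad result easy to detect. A Python 3 version of the first calculation (using numpy's exp, which works across versions):

```python
import numpy as np
from scipy.integrate import quad

def bb(x):
    return x**3 / (np.exp(x) - 1)

exact = np.pi**4 / 15            # Gamma(4) * zeta(4)
result, err = quad(bb, 0, np.inf)
print(result, exact)             # both ~6.4939
```

Why the rescaled version fails is left, as in the original post, as an exercise.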
Runs the tests for a project.
This task starts the current application, loads up
test/test_helper.exs and then requires all files matching the
test/**/*_test.exs pattern in parallel.
A list of files and/or directories can be given after the task name in order to select the files to run:
mix test test/some/particular/file_test.exs mix test test/some/particular/dir
Tests in umbrella projects can be run from the root by specifying the full suite path, including
apps/my_app/test, in which case recursive tests for other child apps will be skipped completely:
# To run all tests for my_app from the umbrella root mix test apps/my_app/test # To run a given test file on my_app from the umbrella root mix test apps/my_app/test/some/particular/file_test.exs
--color- enables color in the output
--cover- runs coverage tool. See "Coverage" section below
--exclude- excludes tests that match the filter
--export-coverage- the name of the file to export coverage results to. Only has an effect when used with
--cover
--failed- runs only tests that failed the last time they ran
--force- forces compilation regardless of modification times
--formatter- sets the formatter module that will print the results. Defaults to ExUnit's built-in CLI formatter
--include- includes tests that match the filter
--listen-on-stdin- runs tests, and then listens on stdin. Receiving a newline will result in the tests being run again. Very useful when combined with
--stale and external commands which produce output on stdout upon file system modifications
--max-cases- sets the maximum number of tests running asynchronously. Only tests from different modules run in parallel. Defaults to twice the number of cores
--max-failures- the suite stops evaluating tests when this number of test failures is reached. It runs all tests if omitted
--only- runs only tests that match the filter
--partitions- sets the amount of partitions to split tests in. This option requires the
MIX_TEST_PARTITION environment variable to be set. See the "Operating system process partitioning" section for more information
--preload-modules- preloads all modules defined in applications
--raise- raises if the test suite failed
--seed- seeds the random number generator used to randomize the order of tests;
--seed 0disables randomization
--slowest- prints timing information for the N slowest tests. Automatically sets
--trace and
--preload-modules
--stale- runs only tests which reference modules that changed since the last time tests were run with
--stale. You can read more about this option in the "The --stale option" section below
--timeout- sets the timeout for the tests
--trace- runs tests with detailed reporting. Automatically sets
--max-cases to
1. Note that in trace mode test timeouts will be ignored as timeout is set to
:infinity
These configurations can be set in the
def project section of your
mix.exs:
:test_paths - list of paths containing test files. Defaults to
["test"] if the
test directory exists; otherwise, it defaults to
[]. It is expected that all test paths contain a
test_helper.exs file
:test_pattern - a pattern to load test files. Defaults to
*_test.exs
:warn_test_pattern - a pattern to match potentially misnamed test files and display a warning. Defaults to
*_test.ex
:test_coverage - a set of options to be passed down to the coverage mechanism
ExUnit provides tags and filtering functionality that allow developers to select which tests to run via the --include and --exclude options. For example, to include tests tagged as external:
mix test --include external:true
The example above will run all tests that have the external option set to true. Keep in mind, however, that all tests are included by default, so unless some are excluded first (in the test helper or via the --exclude option), the --include option has no effect.
For this reason, Mix also provides an
--only option that excludes all tests and includes only the given ones:
mix test --only external
Which is similar to:
mix test --include external --exclude test
It differs in that the test suite will fail if no tests are executed when the
--only option is used.
In case a single file is being tested, it is possible to pass one or more specific line numbers to run only those given tests:
mix test test/some/particular/file_test.exs:12
Which is equivalent to:
mix test --exclude test --include line:12 test/some/particular/file_test.exs
Or:
mix test test/some/particular/file_test.exs:12:24
Which is equivalent to:
mix test --exclude test --include line:12 --include line:24 test/some/particular/file_test.exs
If a given line starts a
describe block, that line filter runs all tests in it. Otherwise, it runs the closest test on or before the given line number.
The
:test_coverage configuration accepts the following options:
:output- the output directory for cover results. Defaults to
"cover"
:tool- the coverage tool
:summary- summary output configuration; can be either a boolean or a keyword list. When a keyword list is passed, it can specify a
:threshold, which is a boolean or numeric value that enables coloring of code coverage results in red or green depending on whether the percentage is below or above the specified threshold, respectively. Defaults to
[threshold: 90]
:export- a file name to export results to instead of generating the result on the fly. The
.coverdataextension is automatically added to the given file. This option is automatically set via the
--export-coverageoption or when using process partitioning. See
mix test.coverageto compile a report from multiple exports.
:ignore_modules- modules to ignore from generating reports and in summaries.
While ExUnit supports the ability to run tests concurrently within the same Elixir instance, it is not always possible to run all tests concurrently. For example, some tests may rely on global resources.
For this reason,
mix test supports partitioning the test files across different Elixir instances. This is done by setting the
--partitions option to an integer, with the number of partitions, and setting the
MIX_TEST_PARTITION environment variable to control which test partition that particular instance is running. This can also be useful if you want to distribute testing across multiple machines.
For example, to split a test suite into 4 partitions and run them, you would use the following commands:
MIX_TEST_PARTITION=1 mix test --partitions 4 MIX_TEST_PARTITION=2 mix test --partitions 4 MIX_TEST_PARTITION=3 mix test --partitions 4 MIX_TEST_PARTITION=4 mix test --partitions 4
The test files are sorted upfront in a round-robin fashion. Note the partition itself is given as an environment variable so it can be accessed in config files and test scripts. For example, it can be used to setup a different database instance per partition in
config/test.exs.
If partitioning is enabled and
--cover is used, no cover reports are generated, as they only contain a subset of the coverage data. Instead, the coverage data is exported to files such as
cover/MIX_TEST_PARTITION.coverdata. Once you have the results of all partitions inside
cover/, you can run
mix test.coverage to get the unified report.
The
--stale command line option attempts to run only the test files which reference modules that have changed since the last time you ran this task with --stale.
The
--stale option is extremely useful for software iteration, allowing you to run only the relevant tests as you perform changes to the codebase.
© 2012 Plataformatec
Licensed under the Apache License, Version 2.0. | https://docs.w3cub.com/elixir~1.11/mix.tasks.test | CC-MAIN-2021-10 | refinedweb | 1,144 | 53.31 |