Regexp to check if an IP is valid
I'm wondering if it's possible to compare values in regexps with the regexp system in Python. Matching the pattern of an IP is easy, but each 1-3 digit octet cannot be above 255, and that's where I'm a bit stumped.
You need to check the allowed numbers in each position. An octet starting with 2 must continue with 0-4 and then any digit (200-249), or with 5 and then 0-5 (250-255). An octet starting with 0 or 1, or with fewer than three digits, is always in range.
I found this annotated example:
\b(?:(?:25[0-5]|2[0-4][0-9]|[01]?[0-9][0-9]?)\.){3}(?:25[0-5]|2[0-4][0-9]|[01]?[0-9][0-9]?)\b
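As a sketch of how that pattern might be used in Python (the helper name looks_like_ipv4 is just for illustration), re.fullmatch anchors the whole string, so the \b anchors from the original pattern aren't needed:

```python
import re

# The octet pattern from the example above: 250-255, 200-249, or 0-199.
OCTET = r"(?:25[0-5]|2[0-4][0-9]|[01]?[0-9][0-9]?)"
IPV4_RE = re.compile(OCTET + r"(?:\." + OCTET + r"){3}")

def looks_like_ipv4(text):
    """Return True if the whole string is a dotted quad with octets in range."""
    return IPV4_RE.fullmatch(text) is not None

print(looks_like_ipv4("192.168.0.1"))  # True
print(looks_like_ipv4("256.1.1.1"))    # False
```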
One caveat, noted in Regular Expressions Cookbook: if your regex flavor supports Unicode and you write \d instead of [0-9], the pattern may even match ١٢٣.१२३.೧೨೩.๑๒๓. Whether this is a problem depends on the files or data you intend to apply the regex to.
No need for regular expressions here. Some background:
>>> import socket
>>> socket.inet_aton('255.255.255.255')
'\xff\xff\xff\xff'
>>> socket.inet_aton('255.255.255.256')
Traceback (most recent call last):
  File "<input>", line 1, in <module>
error: illegal IP address string passed to inet_aton
>>> socket.inet_aton('my name is nobody')
Traceback (most recent call last):
  File "<input>", line 1, in <module>
error: illegal IP address string passed to inet_aton
So:
import socket

def ip_address_is_valid(address):
    try:
        socket.inet_aton(address)
    except socket.error:
        return False
    else:
        return True
Note that addresses like '127.1' could be acceptable on your machine (there are systems, including MS Windows and Linux, where missing octets are interpreted as zero, so '127.1' is equivalent to '127.0.0.1', and '10.1.4' is equivalent to '10.1.0.4'). Should you require that there are always 4 octets, change the last line from:
else:
    return True
into:
else:
    return address.count('.') == 3
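Putting both pieces together, the full function reads as follows (a sketch — note that how tolerant socket.inet_aton is of shorthand forms like '127.1' varies by platform):

```python
import socket

def ip_address_is_valid(address):
    """True only for addresses inet_aton accepts AND that have all 4 octets."""
    try:
        socket.inet_aton(address)
    except socket.error:
        return False
    else:
        # Reject shorthand like '127.1' by requiring exactly three dots.
        return address.count('.') == 3
```

On systems where inet_aton accepts '127.1', the octet-count check now rejects it anyway.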
You can check a 4-octet IP address easily without regexes at all. Here's a tested working method:
>>> def valid_ip(ip):
...     parts = ip.split('.')
...     return (
...         len(parts) == 4
...         and all(part.isdigit() for part in parts)
...         and all(0 <= int(part) <= 255 for part in parts)
...     )
...
>>> valid_ip('1.2.3.4')
True
>>> valid_ip('1.2.3.4.5')
False
>>> valid_ip('1.2. 3 .4.5')
False
>>> valid_ip('1.256.3.4.5')
False
>>> valid_ip('1.B.3.4')
False
Regex is for pattern matching, but to check for a valid IP, you need to check for the range (i.e. 0 <= n <= 255).
You may use a regex to check the range, but that would be a bit of overkill. You're better off checking for the basic pattern and then checking the range of each number.
For example, use the following pattern to match an IP:
([0-9]{1,3})\.([0-9]{1,3})\.([0-9]{1,3})\.([0-9]{1,3})
Then check whether each number is within range.
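That two-step approach might be sketched like this in Python (the helper name is_valid_ip is just for illustration):

```python
import re

# Basic shape only: 1-3 digits, four times, separated by dots.
IP_PATTERN = re.compile(r"([0-9]{1,3})\.([0-9]{1,3})\.([0-9]{1,3})\.([0-9]{1,3})")

def is_valid_ip(candidate):
    """Match the basic pattern, then range-check each captured octet."""
    match = IP_PATTERN.fullmatch(candidate)
    if match is None:
        return False
    return all(0 <= int(octet) <= 255 for octet in match.groups())

print(is_valid_ip("10.0.0.1"))    # True
print(is_valid_ip("10.0.0.999"))  # False
```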
As Regular Expressions Cookbook (recipe 7.16, "Matching IPv4 Addresses") points out, a simple regex that accepts any 1-3 digits per octet is too permissive: it would accept 444 in any position. To be accurate, you must restrict each octet to 0-255: 25[0-5] for 250-255, 2[0-4][0-9] for 200-249, and [01]?[0-9][0-9]? for anything up to 199.
The following supports both IPv4 and IPv6, on Python 2.7 and 3.3+:
import socket

def is_valid_ipv4(ip_str):
    """ Check the validity of an IPv4 address """
    try:
        socket.inet_pton(socket.AF_INET, ip_str)
    except AttributeError:  # inet_pton not available on this platform
        try:
            socket.inet_aton(ip_str)
        except socket.error:
            return False
        return ip_str.count('.') == 3
    except socket.error:
        return False
    return True

def is_valid_ipv6(ip_str):
    """ Check the validity of an IPv6 address """
    try:
        socket.inet_pton(socket.AF_INET6, ip_str)
    except socket.error:
        return False
    return True

def is_valid_ip(ip_str):
    """ Check the validity of an IP address """
    return is_valid_ipv4(ip_str) or is_valid_ipv6(ip_str)
Source: http://thetopsites.net/article/54025817.shtml (CC-MAIN-2020-34, refinedweb; 1,082 words; Flesch reading ease 66.54)
Now that we have made sure that the robot is working properly, we can write the final sketch that will receive the commands via Bluetooth. As the sketch shares many similarities with the test sketch, we are only going to see what is added compared to the test sketch. We first need to include more libraries:
#include <SPI.h>
#include "Adafruit_BLE_UART.h"
#include <aREST.h>
We also define which pins the BLE module is connected to:
#define ADAFRUITBLE_REQ 10
#define ADAFRUITBLE_RDY 2  // This should be an interrupt pin; on the Uno that's #2 or #3
#define ADAFRUITBLE_RST 9
We have to create an instance of the BLE module:
Adafruit_BLE_UART BTLEserial = Adafruit_BLE_UART(ADAFRUITBLE_REQ, ADAFRUITBLE_RDY, ADAFRUITBLE_RST);
In the setup() ...
Source: https://www.oreilly.com/library/view/arduino-android-blueprints/9781784390389/ch09s02.html (CC-MAIN-2019-26, refinedweb; 120 words; Flesch reading ease 71.55)
Sugar.graphics.notebook
Class: Notebook(gtk.Notebook)
Sugar has its own notebook class that is built on top of gtk.Notebook. Use the sugar version so that you can access specialized methods as needed.
How do I create a new notebook and populate it with pages?
The simple example below creates three pages, each with different types of containers. You can make anything a page in your notebook as long as it is a valid gtk container.
from sugar.graphics.notebook import Notebook
...
top_canvas = Notebook()

# Create pages for the notebook
first_page = gtk.VBox()
second_page = gtk.VBox()
third_page = gtk.Frame()

# Add the pages to the notebook.
top_canvas.add_page('First Page', first_page)
top_canvas.add_page('Second Page', second_page)
top_canvas.add_page('Third Page', third_page)

# Set the canvas for this activity's UI to be the notebook object just created.
self.set_canvas(top_canvas)
How do I do other standard operations on a sugar Notebook?
Most other work with the sugarized notebook can be done by using the gtk.Notebook interface.
Source: http://wiki.laptop.org/index.php?title=Sugar.graphics.notebook&oldid=197612 (CC-MAIN-2013-48, refinedweb; 168 words; Flesch reading ease 53.58)
Have you ever tried to cast hand shadows on a wall? It is the easiest thing in the world, and yet to do it well requires practice and just the right setup. To cultivate your #cottagecore aesthetic, try going into a completely dark room with just one lit candle, and casting hand shadows on a plain wall. The effect is startlingly dramatic. What fun!
Even a tea light suffices to create a great effect
In 2020, and now into 2021, many folks are reverting back to basics as they look around their houses, reopening dusty corners of attics and basements and remembering the simple crafts that they used to love. Papermaking, anyone? All you need is a few tools and torn up, recycled paper. Pressing flowers? All you need is newspaper, some heavy books, and patience. And hand shadows? Just a candle.
This TikTok creator has thousands of views for their handshadow tutorials
But what's a developer to do when trying to capture that #cottagecore vibe in a web app?
While exploring the art of hand shadows, I wondered whether some of the recent work I had done for body poses might be applicable to hand poses. What if you could tell a story on the web using your hands, and somehow save a video of the show and the narrative behind it, and send it to someone special? In lockdown, what could be more amusing than sharing shadow stories between friends or relatives, all virtually?
Hand shadow casting is a folk art probably originating in China; if you go to tea houses with stage shows, you might be lucky enough to view one like this!
When you start researching hand poses, it's striking how much content there is on the web on the topic. There has been work since at least 2014 on creating fully articulated hands within the research, simulation, and gaming sphere:
MSR throwing hands
There are dozens of handpose libraries already on GitHub.
There are many applications where tracking hands is a useful activity:
• Gaming
• Simulations / Training
• "Hands free" uses for remote interactions with things by moving the body
• Assistive technologies
• TikTok effects :trophy:
• Useful things like Accordion Hands apps
One of the more interesting new libraries, handsfree.js, offers an excellent array of demos in its effort to move to a hands free web experience:
Handsfree.js, a very promising project
As it turns out, hands are pretty complicated things. They each include 21 keypoints (vs. PoseNet's 17 keypoints for an entire body). Building a model to support inference for such a complicated grouping of keypoints has proven challenging.
There are two main libraries available to the web developer when incorporating hand poses into an app: TensorFlow.js's handposes, and MediaPipe's. HandsFree.js uses both, to the extent that they expose APIs. As it turns out, neither TensorFlow.js nor MediaPipe's handposes are perfect for our project. We will have to compromise.
TensorFlow.js's handposes allow access to each hand keypoint and the ability to draw the hand to canvas as desired. HOWEVER, it only currently supports single hand poses, which is not optimal for good hand shadow shows.
MediaPipe's handpose models (which are used by TensorFlow.js) do allow for dual hands BUT its API does not allow for much styling of the keypoints so that drawing shadows using it is not obvious.
One other library, fingerpose, is optimized for finger spelling in a sign language context and is worth a look.
Since it's more important to use the Canvas API to draw custom shadows, we are obliged to use TensorFlow.js, hoping that either it will soon support multiple hands OR handsfree.js helps push the envelope to expose a more styleable hand.
Let's get to work to build this app.
As a Vue.js developer, I always use the Vue CLI to scaffold an app using
vue create my-app and creating a standard app. I set up a basic app with two routes: Home and Show. Since this is going to be deployed as an Azure Static Web App, I follow my standard practice of including my app files in a folder named
app and creating an
api folder to include an Azure function to store a key (more on this in a minute).
In my package.json file, I import the important packages for using TensorFlow.js and the Cognitive Services Speech SDK in this app. Note that TensorFlow.js has divided its imports into individual packages:
"@tensorflow-models/handpose": "^0.0.6",
"@tensorflow/tfjs": "^2.7.0",
"@tensorflow/tfjs-backend-cpu": "^2.7.0",
"@tensorflow/tfjs-backend-webgl": "^2.7.0",
"@tensorflow/tfjs-converter": "^2.7.0",
"@tensorflow/tfjs-core": "^2.7.0",
...
"microsoft-cognitiveservices-speech-sdk": "^1.15.0",
We will draw an image of a hand, as detected by TensorFlow.js, onto a canvas, superimposed onto a video supplied by a webcam. In addition, we will redraw the hand to a second canvas (shadowCanvas), styled like shadows:
<div id="canvas-wrapper" class="column is-half">
  <canvas id="output" ref="output"></canvas>
  <video id="video" ref="video" playsinline></video>
</div>
<div class="column is-half">
  <canvas
    class="has-background-black-bis"
    id="shadowCanvas"
    ref="shadowCanvas"
  ></canvas>
</div>
Working asynchronously, load the Handpose model. Once the backend is set up and the model is loaded, load the video via the webcam, and start watching the video's keyframes for hand poses. It's important at these steps to ensure error handling in case the model fails to load or there's no webcam available.
async mounted() {
  await tf.setBackend(this.backend);
  // async load model, then load video, then pass it to start landmarking
  this.model = await handpose.load();
  this.message = "Model is loaded! Now loading video";
  let webcam;
  try {
    webcam = await this.loadVideo();
  } catch (e) {
    this.message = e.message;
    throw e;
  }
  this.landmarksRealTime(webcam);
},
Still working asynchronously, set up the camera to provide a stream of images:
async setupCamera() {
  if (!navigator.mediaDevices || !navigator.mediaDevices.getUserMedia) {
    throw new Error(
      "Browser API navigator.mediaDevices.getUserMedia not available"
    );
  }
  this.video = this.$refs.video;
  const stream = await navigator.mediaDevices.getUserMedia({
    video: {
      facingMode: "user",
      width: VIDEO_WIDTH,
      height: VIDEO_HEIGHT,
    },
  });
  return new Promise((resolve) => {
    this.video.srcObject = stream;
    this.video.onloadedmetadata = () => {
      resolve(this.video);
    };
  });
},
Now the fun begins, as you can get creative in drawing the hand on top of the video. This landmarking function runs on every keyframe, watching for a hand to be detected and drawing lines onto the canvas - red on top of the video, and black on top of the shadowCanvas. Since the shadowCanvas background is white, the hand is drawn as white as well and the viewer only sees the offset shadow, in fuzzy black with rounded corners. The effect is rather spooky!
async landmarksRealTime(video) {
  // start showing landmarks
  this.videoWidth = video.videoWidth;
  this.videoHeight = video.videoHeight;

  // set up skeleton canvas
  this.canvas = this.$refs.output;
  ...
  // set up shadowCanvas
  this.shadowCanvas = this.$refs.shadowCanvas;
  ...
  this.ctx = this.canvas.getContext("2d");
  this.sctx = this.shadowCanvas.getContext("2d");
  ...
  // paint to main
  this.ctx.clearRect(0, 0, this.videoWidth, this.videoHeight);
  this.ctx.strokeStyle = "red";
  this.ctx.fillStyle = "red";
  this.ctx.translate(this.shadowCanvas.width, 0);
  this.ctx.scale(-1, 1);

  // paint to shadow box
  this.sctx.clearRect(0, 0, this.videoWidth, this.videoHeight);
  this.sctx.shadowColor = "black";
  this.sctx.shadowBlur = 20;
  this.sctx.shadowOffsetX = 150;
  this.sctx.shadowOffsetY = 150;
  this.sctx.lineWidth = 20;
  this.sctx.lineCap = "round";
  this.sctx.fillStyle = "white";
  this.sctx.strokeStyle = "white";
  this.sctx.translate(this.shadowCanvas.width, 0);
  this.sctx.scale(-1, 1);

  // now you've set up the canvases, you can frame the landmarks
  this.frameLandmarks();
},
As the keyframes progress, the model predicts new keypoints for each of the hand's elements, and both canvases are cleared and redrawn:
const predictions = await this.model.estimateHands(this.video);
if (predictions.length > 0) {
  const result = predictions[0].landmarks;
  this.drawKeypoints(
    this.ctx,
    this.sctx,
    result,
    predictions[0].annotations
  );
}
requestAnimationFrame(this.frameLandmarks);
Since TensorFlow.js allows you direct access to the keypoints of the hand and the hand's coordinates, you can manipulate them to draw a more lifelike hand. Thus we can redraw the palm to be a polygon, rather than resembling a garden rake with points culminating in the wrist.
Re-identify the fingers and palm:
fingerLookupIndices: {
  thumb: [0, 1, 2, 3, 4],
  indexFinger: [0, 5, 6, 7, 8],
  middleFinger: [0, 9, 10, 11, 12],
  ringFinger: [0, 13, 14, 15, 16],
  pinky: [0, 17, 18, 19, 20],
},
palmLookupIndices: {
  palm: [0, 1, 5, 9, 13, 17, 0, 1],
},
...and draw them to screen:
const fingers = Object.keys(this.fingerLookupIndices);
for (let i = 0; i < fingers.length; i++) {
  const finger = fingers[i];
  const points = this.fingerLookupIndices[finger].map(
    (idx) => keypoints[idx]
  );
  this.drawPath(ctx, sctx, points, false);
}
const palmArea = Object.keys(this.palmLookupIndices);
for (let i = 0; i < palmArea.length; i++) {
  const palm = palmArea[i];
  const points = this.palmLookupIndices[palm].map(
    (idx) => keypoints[idx]
  );
  this.drawPath(ctx, sctx, points, true);
}
With the models and video loaded, keyframes tracked, and hands and shadows drawn to canvas, we can implement a speech-to-text SDK so that you can narrate and save your shadow story.
To do this, get a key from the Azure portal for Speech Services by creating a Service:
You can connect to this service by importing the sdk:
import * as sdk from "microsoft-cognitiveservices-speech-sdk";
...and start audio transcription after obtaining an API key which is stored in an Azure function in the
/api folder. This function gets the key stored in the Azure portal in the Azure Static Web App where the app is hosted.
async startAudioTranscription() {
  try {
    // get the key
    const response = await axios.get("/api/getKey");
    this.subKey = response.data;
    // sdk
    let speechConfig = sdk.SpeechConfig.fromSubscription(
      this.subKey,
      "eastus"
    );
    let audioConfig = sdk.AudioConfig.fromDefaultMicrophoneInput();
    this.recognizer = new sdk.SpeechRecognizer(speechConfig, audioConfig);
    this.recognizer.recognized = (s, e) => {
      this.text = e.result.text;
      this.story.push(this.text);
    };
    this.recognizer.startContinuousRecognitionAsync();
  } catch (error) {
    this.message = error;
  }
},
In this function, the SpeechRecognizer gathers text in chunks that it recognizes and organizes into sentences. That text is printed into a message string and displayed on the front end.
In this last part, the output cast onto the shadowCanvas is saved as a stream and recorded using the MediaRecorder API:
const stream = this.shadowCanvas.captureStream(60); // 60 FPS recording
this.recorder = new MediaRecorder(stream, {
  mimeType: "video/webm;codecs=vp9",
});
this.recorder.ondataavailable = (e) => {
  this.chunks.push(e.data);
};
this.recorder.start(500);
...and displayed below as a video with the storyline in a new div:
const video = document.createElement("video");
const fullBlob = new Blob(this.chunks);
const downloadUrl = window.URL.createObjectURL(fullBlob);
video.src = downloadUrl;
document.getElementById("story").appendChild(video);
video.autoplay = true;
video.controls = true;
This app can be deployed as an Azure Static Web App using the excellent Azure plugin for Visual Studio Code. And once it's live, you can tell durable shadow stories!
Try Ombromanie here. The codebase is available here
Take a look at Ombromanie in action:
Learn more about AI on Azure
Azure AI Essentials Video covering speech and language
Azure free account sign-up
Source: https://techcommunity.microsoft.com/t5/educator-developer-blog/ombromanie-creating-hand-shadow-stories-with-azure-speech-and/ba-p/1822815 (CC-MAIN-2022-33, refinedweb; 1,860 words; Flesch reading ease 51.04)
#include <nrt/Core/Blackboard/details/ModulePortHelpers.H>
A Posting port is a unique binding of a sent Message type and a returned Message type to a port class.
Module objects can post() on a given Posting port if and only if they derive from the corresponding MessagePoster of the Posting. Typically, Msg and Ret should derive from nrt::MessageBase, with a special case allowed for Ret being void. See MessagePoster for how to use post(). A convenience macro is provided (in details/ModulePortHelpers.H) to easily declare a posting class:
Where PortName is the name of the class that will embody your Posting port, MsgType is the type of posted message (must derive from nrt::MessageBase), RetType is the type of message returned by any subscriber (callback) that will respond to posts on this Posting (must derive from nrt::MessageBase or be void), and Description is a plain C-style string describing your Posting. For example:
The reason for declaring a new Posting type is to allow one to have several postings with identical message and return types, but different descriptions and port classes. Although these deal with identical messages, they may post in different namespaces and on different topics (see the definition of MessagePoster) and thus achieve different functions.
Definition at line 143 of file ModulePortHelpers.H.
Allocate a message and return a unique_ptr to it, to be used by post()
All given args are forwarded to the message constructor.
Source: http://nrtkit.net/documentation/classnrt_1_1MessagePosting.html (CC-MAIN-2020-10, refinedweb; 241 words; Flesch reading ease 56.69)
Classes and objects: the bread-and-butter of many a developer. Object-oriented programming is one of the mainstays of modern programming, so it shouldn't come as a surprise that Python is capable of it.
But if you've done object-oriented programming in any other language before coming to Python, I can almost guarantee you're doing it wrong.
Hang on to your paradigms, folks, it's going to be a bumpy ride.
Class Is In Session
Let's make a class, just to get our feet wet. Most of this won't come as a surprise to anyone.
class Starship(object):

    sound = "Vrrrrrrrrrrrrrrrrrrrrr"

    def __init__(self):
        self.engines = False
        self.engine_speed = 0
        self.shields = True

    def engage(self):
        self.engines = True

    def warp(self, factor):
        self.engine_speed = 2
        self.engine_speed *= factor

    @classmethod
    def make_sound(cls):
        print(cls.sound)
Once that's declared, we can create a new instance, or object, from this class. All of the member functions and variables are accessible using dot notation.
uss_enterprise = Starship()
uss_enterprise.warp(4)
uss_enterprise.engage()

uss_enterprise.engines
>>> True
uss_enterprise.engine_speed
>>> 8
Wow, Jason, I knew this was supposed to be 'dead simple', but I think you just put me to sleep.
No surprises there, right? But look again, not at what is there, but what isn't there.
You can't see it? Okay, let's break this down. See if you can spot the surprises before I get to them.
Declaration
We start with the definition of the class itself:
class Starship(object):
Python might be considered one of the more truly object-oriented languages, on account of its design principle of "everything is an object." All other classes inherit from that
object class.
Of course, most Pythonistas really hate boilerplate, so as of Python 3, we can also just say this and call it good:
class Starship:
Personally, considering The Zen of Python's line about "Explicit is better than implicit," I like the first way. We could debate it until the cows come home, really, so let's just make it clear that both approaches do the same thing in Python 3, and move on.
Legacy Note: If you intend your code to work on Python 2, you must say
(object).
Methods
I'm going to jump down to this line...
def warp(self, factor):
Obviously, that's a member function or method. In Python, we pass
self as the first parameter to every single method. After that, we can have as many parameters as we want, the same as with any other function.
We actually don't have to call that first argument
self; it'll work the same regardless. But, we always use the name
self there anyway, as a matter of style. There exists no valid reason to break that rule.
"But, but...you literally just broke the rule yourself! See that next function?"
@classmethod
def make_sound(cls):
You may remember that in object-oriented programming, a class method is one that is shared between all instances of the class (objects). A class method never touches instance variables or regular methods.
If you haven't already noticed, we always access member variables in a class via the dot operator:
self.. So, to make it extra-super-clear we can't do that in a class method, we call the first argument
cls. In fact, when a class method is called, Python passes the class to that argument, instead of the object.
As before, we can call
cls anything we want, but that doesn't mean we should.
For a class method, we also MUST put the decorator
@classmethod on the line just above our function declaration. This tells the Python language that you're making a class method, and that you didn't just get creative with the name of the
self argument.
Those methods above would get called something like this...
uss_enterprise = Starship()  # Create our object from the starship class
# Note, we aren't passing anything to 'self'. Python does that implicitly.

uss_enterprise.warp(4)

# We can call class functions on the object, or directly on the class.
uss_enterprise.make_sound()
Starship.make_sound()
Those last two lines will both print out "Vrrrrrrrrrrrrrrrrrrrrr" the exact same way. (Note that I referred to
cls.sound in that function.)
...
What?
Come on, you know you made sound effects for your imaginary spaceships when you were a kid. Don't judge me.
Class vs Static Methods
The old adage is true: you don't stop learning until you're dead. Kinyanjui Wangonya pointed out in the comments that one didn't need to pass
cls to "static methods" - the phrase I was using in the first version of this article.
Turns out, he's right, and I was confused!
Unlike many other languages, Python distinguishes between static and class methods. Technically, they work the same way, in that they are both shared among all instances of the object. There's just one critical difference...
A static method doesn't access any of the class members; it doesn't even care that it's part of the class! Because it doesn't need to access any other part of the class, it doesn't need the
cls argument.
Let's contrast a class method with a static method:
@classmethod
def make_sound(cls):
    print(cls.sound)

@staticmethod
def beep():
    print("Beep boop beep")
Because
beep() needs no access to the class, we can make it a static method by using the
@staticmethod decorator. Python won't implicitly pass the class to the first argument, unlike what it does on a class method (
make_sound())
Despite this difference, you call both the same way.
uss_enterprise = Starship()

uss_enterprise.make_sound()
>>> Vrrrrrrrrrrrrrrrrrrrrr
Starship.make_sound()
>>> Vrrrrrrrrrrrrrrrrrrrrr

uss_enterprise.beep()
>>> Beep boop beep
Starship.beep()
>>> Beep boop beep
Initializers and Constructors
Nearly every Python class defines one, and only one,
__init__(self) function. This is called the initializer.
def __init__(self):
    self.engines = False
    self.engine_speed = 0
    self.shields = True
If you really don't need an initializer, it is technically valid to skip defining it, but that's pretty universally considered bad form. At the very least, define an empty one...
def __init__(self):
    pass
While we tend to use it the same way as we would a constructor in C++ and Java,
__init__(self) is not a constructor! The initializer is responsible for initializing the instance variables, which we'll talk more about in a moment.
We rarely actually need to define our own constructor. If you really know what you're doing, you can redefine the
__new__(cls) function...
def __new__(cls):
    return object.__new__(cls)
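To see the division of labor between the two, here's a small sketch (the Probe class is hypothetical, purely for illustration) that logs the order in which the hooks run when an object is created:

```python
calls = []

class Probe:
    def __new__(cls):
        calls.append("__new__")   # construction happens first...
        return object.__new__(cls)

    def __init__(self):
        calls.append("__init__")  # ...then initialization of the new instance

p = Probe()
print(calls)  # ['__new__', '__init__']
```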
By the way, if you're looking for the destructor, that's the
__del__(self) function.
Variables
In Python, our classes can have instance variables, which are unique to our object (instance), and class variables (a.k.a. static variables), which belong to the class, and are shared between all instances.
I have a confession to make: I spent the first few years of Python development doing this absolutely and completely wrong! Coming from other object-oriented languages, I actually thought I was supposed to do this:
class Starship(object):

    engines = False
    engine_speed = 0
    shields = True

    def __init__(self):
        self.engines = False
        self.engine_speed = 0
        self.shields = True

    def engage(self):
        self.engines = True

    def warp(self, factor):
        self.engine_speed = 2
        self.engine_speed *= factor
The code works, so what's wrong with this picture? Read it again, and see if you can figure out what's happening.
Final Jeopardy music plays
Maybe this will make it obvious.
uss_enterprise = Starship()
uss_enterprise.warp(4)

print(uss_enterprise.engine_speed)
>>> 8
print(Starship.engine_speed)
>>> 0
Did you spot it?
Class variables are declared outside of all functions, usually at the top. Instance variables, on the other hand, are declared in the
__init__(self) function: for example,
self.engine_speed = 0.
So, in our little example, we've declared a set of class variables, and a set of instance variables, with the same names. When accessing a variable on the object, the instance variables shadow (hide) the class variables, making it behave as we might expect. However, we can see by printing
Starship.engine_speed that we have a separate class variable sitting in the class, just taking up space. Talk about redundant.
Anyone get that right? Sloan did, and wagered...ten thousand cecropia leaves. Looks like the sloth is in the lead. Amazingly enough.
By the way, you can declare instance variables for the first time from within any instance method, instead of the initializer. However...you guessed it: don't. The convention is to ALWAYS declare all your instance variables in the initializer, just to prevent something weird from happening, like a function attempting to access a variable that doesn't yet exist.
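With that pitfall in mind, here's a sketch of the class done right: only sound, which genuinely should be shared, stays at class level, and everything per-object is declared in the initializer:

```python
class Starship:
    # Shared by every Starship — a true class variable.
    sound = "Vrrrrrrrrrrrrrrrrrrrrr"

    def __init__(self):
        # Unique to each instance — declared ONLY here.
        self.engines = False
        self.engine_speed = 0
        self.shields = True

    def warp(self, factor):
        self.engine_speed = 2
        self.engine_speed *= factor

uss_enterprise = Starship()
uss_enterprise.warp(4)
print(uss_enterprise.engine_speed)        # 8
print(hasattr(Starship, "engine_speed"))  # False — no redundant class variable
```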
Scope: Private and Public
If you come from another object-oriented language, such as Java and C++, you're also probably in the habit of thinking about scope (private, protected, public) and its traditional assumptions: variables should be private, and functions should (usually) be public. Getters and setters rule the day!
I'm also an expert in C++ object-oriented programming, and I have to say that I consider Python's approach to the issue of scope to be vastly superior to the typical object-oriented scope rules. Once you grasp how to design classes in Python, the principles will probably leak into your standard practice in other languages...and I firmly believe that's a good thing.
Ready for this? Your variables don't actually need to be private.
Yes, I just heard the gurgling scream of the Java nerd in the back. "But...but...how will I keep developers from just tampering with any of the object's instance variables?"
Often, that concern is built on three flawed assumptions. Let's set those right first:
The developer using your class almost certainly isn't in the habit of modifying member variables directly, any more than they're in the habit of sticking a fork in a toaster.
If they do stick a fork in the toaster, proverbially speaking, the consequences are on them for being idiots, not on you.
As my Freenode #python friend
grymonce said, "if you know why you aren't supposed to remove stuck toast from the toaster with a metal object, you're allowed to do so."
In other words, the developer who is using your class probably knows better than you do about whether they should twiddle the instance variables or not.
Now, with that out of the way, we approach an important premise in Python: there is no actual 'private' scope. We can't just stick a fancy little keyword in front of a variable to make it private.
What we can do is stick an underscore at the front of the name, like this:
self._engine.
That underscore isn't magical. It's just a warning label to anyone using your class: "I recommend you don't mess with this. I'm doing something special with it."
Now, before you go sticking
_ at the start of all your instance variable names, think about what the variable actually is, and how you use it. Will directly tweaking it really cause problems? In the case of our example class, as it's written right now, no. This actually would be perfectly acceptable:
uss_enterprise.engine_speed = 6
uss_enterprise.engage()
Also, notice something beautiful about that? We didn't write a single getter or setter! In any language, if a getter or setter is functionally identical to modifying the variable directly, it's an absolute waste. That philosophy is one of the reasons Python is such a clean language.
You can also use this naming convention with methods you don't intend to be used outside of the class.
Side Note: Before you run off and go eschew
private and
protected from your Java and C++ code, please understand that there's a time and a place for scope. The underscore convention is a social contract among Python developers, and most languages don't have anything like that. So, if you're in a language with scope, use
private or
protected on any variable you would have put an underscore in front of in Python.
Private...Sort Of
Now, on a very rare occasion, you may have an instance variable which absolutely, positively, never, ever should be directly modified outside of the class. In that case, you may precede the name of the variable with two underscores (
__), instead of one.
This doesn't actually make it private; rather, it performs something called name mangling: it changes the name of the variable, adding a single underscore and the name of the class on the front.
In the case of
class Starship, if we were to change
self.shields to
self.__shields, it would be name mangled to
self._Starship__shields.
So, if you know how that name mangling works, you can still access it...
uss_enterprise = Starship()
uss_enterprise._Starship__shields
>>> True
It's important to note that you also cannot have more than one trailing underscore if this is to work (__foo and __foo_ will be mangled, but __foo__ will not). But then, PEP 8 generally discourages trailing underscores, so it's kind of a moot point.
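A quick sketch shows all three cases (the __warp__ name is invented for illustration):

```python
class Starship:
    def __init__(self):
        self.__shields = True   # mangled to _Starship__shields
        self.__warp__ = 9       # dunder: NOT mangled

uss_enterprise = Starship()
print(uss_enterprise._Starship__shields)  # accessible via the mangled name
print(uss_enterprise.__warp__)            # dunder name survives untouched
# uss_enterprise.__shields would raise AttributeError out here
```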
By the way, the purpose of the double underscore (
__) name mangling actually has nothing to do with private scope; it's all about preventing name conflicts in certain technical scenarios, chiefly name clashes when subclassing. In fact, you'll probably get a few serious frowns from Python ninjas for employing
__ at all, so use it sparingly.
Properties
As I said earlier, getters and setters are usually pointless. On occasion, however, they have a purpose. In Python, we can use properties in this manner, as well as to pull off some pretty nifty tricks!
Properties are defined simply by preceding a method with
@property.
My favorite trick with properties is to make a method look like an instance variable...
class Starship(object):

    def __init__(self):
        self.engines = True
        self.engine_speed = 0
        self.shields = True

    @property
    def engine_strain(self):
        if not self.engines:
            return 0
        elif self.shields:
            # Imagine shields double the engine strain
            return self.engine_speed * 2
        # Otherwise, the engine strain is the same as the speed
        return self.engine_speed
When we're using this class, we can treat
engine_strain as an instance variable of the object.
uss_enterprise = Starship()
uss_enterprise.engine_strain
>>> 0
Beautiful, isn't it?
(Un)fortunately, we cannot modify
engine_strain in the same manner.
uss_enterprise.engine_strain = 10
>>> Traceback (most recent call last):
>>>   File "<stdin>", line 1, in <module>
>>> AttributeError: can't set attribute
In this case, that actually does make sense, but it might not be what you're wanting other times. Just for fun, let's define a setter for our property too; at least one with nicer output than that scary error.
@engine_strain.setter
def engine_strain(self, value):
    print("I'm giving her all she's got, Captain!")
We precede our method with the decorator
@NAME_OF_PROPERTY.setter. We also have to accept a single
value argument (after
self, of course), and positively nothing beyond that. You'll notice we're not actually doing anything with the
value argument in this case, and that's fine for our example.
uss_enterprise.engine_strain = 10
>>> I'm giving her all she's got, Captain!
That's much better.
As I mentioned earlier, we can use these as getters and setters for our instance variables. Here's a quick example of how:
class Starship:

    def __init__(self):
        # snip
        self._captain = "Jean-Luc Picard"

    @property
    def captain(self):
        return self._captain

    @captain.setter
    def captain(self, value):
        print("What do you think this is, " + value + ", the USS Pegasus? Back to work!")
We simply preceded the variable these functions concern with an underscore, to indicate to others that we intend to manage the variable ourselves. The getter is pretty dull and obvious, and is only needed to provide expected behavior. The setter is where things are interesting: we knock down any attempted mutinies. There will be no changing this captain!
uss_enterprise = Starship()
uss_enterprise.captain
>>> 'Jean-Luc Picard'
uss_enterprise.captain = "Wesley"
>>> What do you think this is, Wesley, the USS Pegasus? Back to work!
Technical rabbit trail: if you want to create class properties, that requires some hacking on your part. There are several solutions floating around the net, so if you need this, go research it!
A few of the Python nerds will be on me if I don't point out that there is another way to create a property, without the use of decorators. So, just for the record, this works too...
class Starship:

    def __init__(self):
        # snip
        self._captain = "Jean-Luc Picard"

    def get_captain(self):
        return self._captain

    def set_captain(self, value):
        print("What do you think this is, " + value + ", the USS Pegasus? Back to work!")

    captain = property(get_captain, set_captain)
(Yes, that last line exists outside of any function.)
As usual, the documentation on properties has additional information, and some more nifty tricks with properties.
Inheritance
Finally, we come back to that first line for another look.
class Starship(object):
Remember why that
(object) is there? We're inheriting from Python's
object class. Ahh, inheritance! That's where it belongs.
class USSDiscovery(Starship):

    def __init__(self):
        super().__init__()
        self.spore_drive = True
        self._captain = "Gabriel Lorca"
The only real mystery here is that
super().__init__() line. In short,
super() refers to the class we inherited from (in this case,
Starship), and calls its initializer. We need to call this, so
USSDiscovery has all the same instance variables as
Starship.
Of course, we can define new instance variables (
self.spore_drive), and redefine inherited ones (
self._captain).
We could have actually just called that initializer with
Starship.__init__(self), but then if we wanted to change what we inherit from, we'd have to change that line too. The
super().__init__() approach is ultimately just cleaner and more maintainable.
Legacy Note: By the way, if you're using Python 2, that line is a little uglier:
super(USSDiscovery, self).__init__().
Before you ask: YES, you can do multiple inheritance with
class C(A, B):. It actually works better than in most languages! Regardless, you can count on a side order of headaches, especially when using
super().
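Just so you can see the shape of it, here's a tiny sketch of multiple inheritance and the method resolution order (MRO) behind those headaches; the drive classes are invented for illustration:

```python
class WarpDrive:
    def status(self):
        return "warp drive online"

class SporeDrive:
    def status(self):
        return "spore drive online"

class USSDiscovery(WarpDrive, SporeDrive):
    pass

discovery = USSDiscovery()
# Python resolves status() by searching the MRO left to right:
# USSDiscovery, WarpDrive, SporeDrive, object
print(discovery.status())
print([c.__name__ for c in USSDiscovery.__mro__])
```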
Hold the Classes!
As you can see, Python classes are a little different from other languages, but once you're used to them, they're actually a bit easier to work with.
But if you've coded in class-heavy languages like C++ or Java, and are working on the assumption that you need classes in Python, I have a surprise for you. You really aren't required to use classes at all!
Classes and objects have exactly one purpose in Python: data encapsulation. If you need to keep data and the functions for manipulating it together in a handy unit, classes are the way to go. Otherwise, don't bother! There's absolutely nothing wrong with a module composed entirely of functions.
Review
Whew! You still with me? How many of those surprises about classes in Python did you guess?
Let's review...
- The __init__(self) function is the initializer, and that's where we do all of our variable initialization.
- Methods (member functions) must take self as their first argument.
- Class methods must take cls as their first argument, and have the decorator @classmethod on the line just above the function definition. They can access class variables, but not instance variables.
- Static methods are similar to class methods, except they don't take cls as their first argument, and are preceded by the decorator @staticmethod. They cannot access any class or instance variables or functions. They don't even know they're part of a class.
- Instance variables (member variables) should be declared inside __init__(self) first. We don't declare them outside of the initializer, unlike in most other object-oriented languages.
- Class variables or static variables are declared outside of any function, and are shared between all instances of the class.
- There are no private members in Python! Precede a member variable or a method name with an underscore (_) to tell developers they shouldn't mess with it.
- If you precede a member variable or method name with two underscores (__), Python will change its name using name mangling. This is more for preventing name conflicts than hiding things.
- You can make any method into a property (it looks like a member variable) by putting the decorator @property on the line above its declaration. This can also be used to create getters.
- You can create a setter for a property (e.g. foo) by putting the decorator @foo.setter above a function foo.
- A class (e.g. Dog) can inherit from another class (e.g. Animal) in this manner: class Dog(Animal):. When you do this, you should also start your initializer with the line super().__init__() to call the initializer of the base class.
- Multiple inheritance is possible, but it might give you nightmares. Handle with tongs.
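Since several of those points concern class methods and static methods, here's a compact sketch of the two side by side (the fleet_count bookkeeping is invented for illustration):

```python
class Starship:
    fleet_count = 0  # class variable

    def __init__(self):
        Starship.fleet_count += 1

    @classmethod
    def fleet_size(cls):
        # Class methods receive cls, so they can touch class variables.
        return cls.fleet_count

    @staticmethod
    def warp_time(distance, speed):
        # Static methods get no cls or self; just a namespaced function.
        return distance / speed

Starship()
Starship()
print(Starship.fleet_size())      # 2
print(Starship.warp_time(10, 5))  # 2.0
```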
As usual, I recommend you read the docs for more:
- Python Tutorials: Classes
- Python Reference: Built-In Functions - @classmethod
- Python Reference: Built-In Functions - @staticmethod
- Python Reference: Built-In Functions - @property
Ready to go write some Python classes? Make it so!
Thank you to
deniska,
grym (Freenode IRC
#python), and @wangonya
(Dev) for suggested revisions.
Posted on by:
Jason C. McDonald
Author | Hacker | Speaker | Time Lord
Discussion
I read a really funny analogy for why python’s private-ish variables are good enough and we don’t have stricter access control:
Thanks for the article!
I'd like to add that you should mention that there's pretty much never a good reason to use
staticmethod as opposed to a global method in the module.
Usually, when we write a module, we can simply write a method outside the class, to get the same behaviour as a
staticmethod
I try to avoid such statements, especially in this article series. If it exists, it exists for some reason. ;)
That said, I plan to discuss the ins and outs of this in more depth in the book.
Ah I see! I really appreciate this series. It is very well written and enjoyable to read!
By the way, what is that reason?
Namespacing.
Thanks for the excellent Article.
Some small suggestion: in the
setter example the actual line of code that sets the Instance Member
self.__captain = value is missing ;-)
Resulting in this output, Jean-Luc Picard refuses to go away....
Awesome coverage of classes in Python! Kudos to showing how to set properties using getters/setters. When I first learned python, I did not understand this and would create my own def to modify a property on my instances. Very fun, Next Generation is my favorite Trek series.
Should
cls be passed into a staticmethod? I thought that was only for class methods
I cannot believe it! I actually missed that little detail:
@staticmethod and @classmethod are actually two separate things!
I've just edited the article to correct that, and added a new section to clarify the difference. Thank you so much for correcting that.
Great article!
Nice, properties, getters, and setters all explain some things I had never even seen or thought of before (Python being my first programming language), it definitely expanded my understanding of what a class is and can do.
Great article! Python is neither my primary nor my secondary language, but I've been using it a lot for my grad school work. I realize how naive my own Python code has been after reading your post haha. Do you have any suggestions for more material like this to learn the best practices? I have found that the python docs are not the most captivating. Thank you for this post! Great job.
Unfortunately, I haven't found much! That's precisely why I started this article series.
If you have some extra cash laying around, you could sign up for Udemy: The Complete Python Course by Codestars. He covers good practice pretty well. However, it is unfortunately quite beginner-oriented, so it's slow going at first.
in the subtopic Methods should be
as engage has no parameter factor
Another great catch. Thank you!
In inheritance, what if we do not write super().__init__() in the child class's __init__() method? Does the child class not call it by default?
No, it doesn't, as far as I know. As mentioned in the Zen of Python,
Explicit is better than implicit. It's part of the language philosophy.
We often need to handle passing data to the parent
__init__() in a specific way, and we may need to do some other things first, so Python just assumes we know best how to handle that.
Imagine how maddening it could get if Python sometimes called it for you, but sometimes not! (Some languages actually are confusing like that, but Python aims to be obvious.)
Thanks for clarifying!
It would be very helpful if you also explain MRO (Method Resolution Order) in context of Python 3 in any upcoming series.
This was a very clear and well-written article, probably one of the best that I’ve read regarding Python classes. Thanks!
Nice article Jason. Keep writing more.
Thanks for writing this article, Jason.
Thank you for a great article!
Just one thing: you have a typo in your "engine_strain" @property example. It should be "elif" instead of "else if".
Thanks for catching that!
Not quite. The class variables are shadowed; instance variables eclipse, as they dominate.
Good article, thanks 😎
https://dev.to/codemouse92/dead-simple-python-classes-42f7
Recently I adapted Phil Nash’s Catch C++ (and Objective C) testing framework [Nash] to integrate with Visual Studio (VS). I’ve made a fork of Catch available on Github [Catch], together with some documentation that explains how to use it in that environment [VS]. I thought perhaps that for ACCU it would be more interesting if I wrote up some details about why it does what it does.
For those who are unfamiliar with C++ testing in Visual Studio...
The five minute guide to testing in Visual Studio
First, I should define some terms that I’ll be using. A ‘Managed’ C++ test is one that runs native C++ code (the code we want tested) under a managed C++ wrapper (that creates the test environment). Until VS2012 this was the only kind of C++ unit test that you could write that integrated with the Visual Studio IDE. With VS2012, Microsoft added ‘Native’ C++ tests. These are tests that use a native C++ wrapper to create tests but that can still be run from the Visual Studio IDE.
Figure 1 shows an example of a ‘Managed’ test in VS2012. The IDE has the Test Explorer to the left showing that the test has failed. Clicking the highlight at the top of the stack trace (bottom left) opens the code and positions the cursor at the failing line (I’ve manually highlighted the line to make this clearer...).
This is what I wanted to replicate with Catch...so for those who are unfamiliar with Catch....
The five minute guide to Catch
Catch is a C++ testing framework that is simple to get running (header only, no dependencies) and in the case of failure (or optionally for success) can also provide both the original expression and the values that caused failure. The current version is designed to run from the command line.
To make the command line work, Catch needs a
main() function; my personal convention is to create a main.cpp with this content:
// main.cpp
#define CATCH_CONFIG_MAIN
#include "catch.hpp"
Then I create a file for my tests (this file can be shared with Visual Studio) and write a test (see Listing 1).
The ‘test case’ is defined with free form text for the name, then creates an instance of the object that we want tested. When the tests are run, Catch will loop through the
TEST_CASE for as many
SECTIONs as are defined, so in this example the
TEST_CASE gets run twice. This creates a new, initialised
testObject each time the
TEST_CASE is run; for this reason many Catch tests require no
setup() or
teardown() methods.
The first
SECTION will clearly fail, but Catch carries on and runs the
TEST_CASE again to run the second
SECTION, which works. The output from a test run with default arguments is like Listing 2.
For the failure, the program outputs both the original expression and the values that caused the failure. The final line confirms that the assertion in each
SECTION was executed, with 1 failure. Catch can also be run to show the output from successful
REQUIREments, along with many other options.
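If you want a feel for those re-run semantics without Catch itself, they can be mimicked in a few lines of plain C++. This is only a sketch of the behaviour described above, not Catch's actual implementation; all names are invented:

```cpp
#include <cassert>
#include <string>
#include <vector>

static std::vector<std::string> g_log;

struct TestObject {
    int value = 0;  // freshly initialised on every run
};

// The "TEST_CASE body": constructs a new test object on each pass,
// so no setup() or teardown() is needed.
void test_body(int active_section) {
    TestObject testObject;  // new instance each pass
    if (active_section == 0) {
        testObject.value = 1;
        g_log.push_back("section one saw value 1");
    }
    if (active_section == 1) {
        // Unaffected by section one's modification.
        g_log.push_back(testObject.value == 0 ? "fresh object" : "stale object");
    }
}

// The framework re-runs the whole test case once per section.
int run_all() {
    const int num_sections = 2;
    for (int s = 0; s < num_sections; ++s)
        test_body(s);
    return num_sections;
}
```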
Goals using Catch in Visual Studio
My initial goal was purely selfish; I currently work in an environment where VS is used to implement tests using Microsoft’s test framework. The environment has a couple of serious usability problems [MSTest] and this makes testing a somewhat painful experience.
My involvement started from Visual Studio 2010 and later moved on to VS2012. From a fairly early stage I had written some macros that enabled me to share source code between Catch and MSTest, but this didn’t integrate very well with the IDE. Then I discovered by accident that VS2012 had some ‘Native’ C++ unit test support, and I realised that it should be possible to hook into this directly from Catch. This started a train of thought that made me wonder if I could do the same thing for Managed tests too.
Initially I wanted:
- To be able to write a test with Catch macros and share source code between command line Catch and VS.
- If an assertion failed in the IDE, the test should stop and the IDE should allow me to jump to the location of the problem, just like it does in MSTest.
This last requirement is slightly different from running regular Catch from the command line; normally we expect that Catch will do its best to run as many of the tests as possible, then report all of the problems at the end of the run. In the IDE, I wanted it to stop and that meant that I had to tinker with some of Catch’s internals...
First implementation
So I spent a couple of days doing an experiment and ended up with something that seemed to meet these goals. To make it work, I had to do three things:
- Redefine the
TEST_CASEmacro.
- Rewrite the reporter so that it reported at the end of the test.
- Hook up the assertion to the relevant MSTest mechanism.
The result looks like Figure 2 (VS2010).
Changes to the TEST_CASE macro
The
TEST_CASE macro maps to an underlying
INTERNAL_CATCH_TESTCASE macro that needed a completely new definition, along with the equivalent macros for testing class methods that are very similar.
My first version of these macros implemented an instance of a test class (either a ‘Managed’ test class or a ‘Native’ test class), provided a semi-unique name for a function that would get invoked and then called it (more about the ‘semi’-uniqueness later...). As in regular Catch, the definition of the invoked
TEST_CASE function follows the macro and is written by the user. The Catch ‘test name’ is passed as a property attribute to the test class so that it can be displayed in the IDE. In the first version, the Catch ‘description’ field was discarded.
// TEST_CASE maps to INTERNAL_CATCH_TESTCASE
// First param is test name
// Second param is 'description',
// often used for tags
TEST_CASE( "./failing/exceptions/double", "[.][failing]" )
{
    //...
}
Since Catch is header only, I also had to adapt the code to allow for functions that would be
#included multiple times; regular Catch ‘knows’ which module contains
main() so only includes certain headers in that module. Using an MSTest project in Visual Studio doesn’t require a
main() function, so catch.hpp needs to pull in everything in each module. Consequently, some functions needed to get inlined and a couple of static members needed to get replaced by templates so that the compiler/linker had to work out how to keep just one instance from all modules.
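The single-instance trick relies on a standard C++ rule: inline functions and static data members of class templates may be defined in a header, and the linker folds the duplicate definitions from different translation units into one. A minimal sketch of the idea (not Catch's actual code; names invented):

```cpp
#include <cassert>

// In a header, a plain "static int counter;" at namespace scope would
// give each .cpp its own private copy. A class-template static member
// does not: every translation unit may contain the definition below,
// and the linker keeps exactly one instance.
template <typename Tag = void>
struct SharedState {
    static int counter;
};

template <typename Tag>
int SharedState<Tag>::counter = 0;

// Inline functions get the same one-definition treatment.
inline int bump() { return ++SharedState<>::counter; }
```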
Changes to the reporter
The first reporter looked very similar to the Catch ‘console’ reporter. The main change was to collect together all the information that needed to be reported when the test completed so that it could be sent to the output windows in VS....
...except that Phil had optimised a couple of functions that returned strings using a static
std::string. It turns out that Managed C++ doesn’t like this much (‘This function must be called in the default domain’ in atexit, presumably as a result of trying to release the memory from the
std::string). So (for now) those functions had to return a new string each time. Ho hum...
Assertions
Catch has a number of macros that check for failures (
REQUIRE,
CHECK,
REQUIRE_FALSE, etc) but underlying all of this is an
INTERNAL_CATCH_ACCEPT_EXPR macro that helpfully throws an exception if the test needs to abort. All I had to do to make this work was call the MSTest assertion with the relevant details of the failure instead of throwing. That would give me the context that I wanted so that the IDE could point me to the problem by clicking the link.
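Conceptually, the switch looks something like this sketch. It is nothing like Catch's real macro (which also decomposes and stringifies the expression); the names are invented, and report_to_host merely stands in for the MSTest assertion call:

```cpp
#include <stdexcept>
#include <string>

static bool use_host_assert = false;  // command-line mode by default
static std::string last_failure;

// Stand-in for the MSTest assertion, which records file/line so the
// IDE can jump to the failing expression.
void report_to_host(const char* expr, const char* file, int line) {
    last_failure = std::string(file) + "(" + std::to_string(line) + "): " + expr;
}

// The low-level check either throws (aborting the TEST_CASE, as on the
// command line) or hands the failure to the host framework.
#define MY_REQUIRE(expr)                                    \
    do {                                                    \
        if (!(expr)) {                                      \
            if (use_host_assert)                            \
                report_to_host(#expr, __FILE__, __LINE__);  \
            else                                            \
                throw std::runtime_error(#expr);            \
        }                                                   \
    } while (0)

// Exercise both modes and report what happened.
bool throws_when_standalone() {
    use_host_assert = false;
    try { MY_REQUIRE(1 == 2); } catch (const std::runtime_error&) { return true; }
    return false;
}

bool reports_when_hosted() {
    use_host_assert = true;
    last_failure.clear();
    MY_REQUIRE(1 == 2);
    return last_failure.find("1 == 2") != std::string::npos;
}
```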
Some feature creep
Catch already has an extensive internal self test suite that is slightly complicated because, being a test framework, it needs to check failures as well as tests that pass. I had decided that the best way to check that my code worked would be to build a project that used all of the internal test suite, and one of the first modules I encountered was the VariadicMacrosTests file.
I had already started to prepare a blog post about what I had done, and as part of that I created a new VS test project from scratch, using the VS wizard. However, I noticed that the wizard generated a project that used Unicode by default. I wanted to be able to use multi-byte strings (MBCS) as well. ‘Simple’, you may think, and indeed it was, until I encountered the variadic macros in Catch. The problem with this is that some of the strings needed to be passed as wide strings (e.g. to test class attributes) and some didn’t (e.g. to be used internally by Catch). When it isn’t known how many parameters have been passed to the macro, it can become tricky to know how to convert a possibly non-existent value! Much of the complexity in my implementation of the
TEST_CASE macros is there to deal with this problem.
I also discovered that Native C++ tests didn’t want to play nicely with anonymous namespaces. I tried several possible ways to fool the compiler into allowing a unique class name to be used but in the end I had to accept that each test needed to go into a namespace uniquely named for each file. If I didn’t do this, there was a risk that the semi-unique name generated by the
TEST_CASE macro would cause a name clash between modules.
Sadly, if there is a duplicate name, VS silently ignores one of them and only runs one of the tests, so I had to manually check for a name clash and generate an error if it happened. The workaround for both these problems is pretty simple; since neither Catch nor VS cares what namespace the tests are in,
TEST_CASEs in a module should go into a namespace named after that module, e.g.:
// module1.cpp
namespace module1 {

TEST_CASE("blah")
{
    //...
}

}
Remarkably, most other things just worked.... but I had some trouble with Catch’s ability to register and translate unknown exceptions...
More feature creep
Although I suspected that the feature wasn’t used much, I was sure that it should be possible to implement some code that would allow me to translate unknown exceptions. I realised that to make it work I would have to fix up Catch’s static registration. The existing macros worked like this; first define a translation:
CATCH_TRANSLATE_EXCEPTION( double& ex )
{
    return Catch::toString( ex );
}
then when a test throws an unexpected exception, it should be sent to the output, e.g.:
TEST_CASE( "Unexpected exceptions can be translated", "[.][failing]" )
{
    if( Catch::isTrue( true ) )
        throw double( 3.14 );
}
will send this to the output:
...
c:\projects\catch\projects\selftest\exception tests.cpp(130): FAILED:
due to unexpected exception with message:
  3.14
In common with many other test frameworks, Catch has a global registration object that it uses to register tests, and it also uses this to register reporters and exception translators, so initially I just implemented the exception translators to use a static templated object.
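Self-registration via static objects is a common pattern; here is a stripped-down sketch of the idea (the names are invented, and this is far simpler than Catch's registry):

```cpp
#include <string>
#include <vector>

using TestFn = void (*)();

struct Entry { std::string name; TestFn fn; };

// A function-local static avoids the static-initialisation-order fiasco:
// the registry is constructed on first use, before any Registrar touches it.
std::vector<Entry>& registry() {
    static std::vector<Entry> r;
    return r;
}

// The constructor of a file-scope Registrar runs before main(),
// registering the test as a side effect.
struct Registrar {
    Registrar(const char* name, TestFn fn) { registry().push_back({name, fn}); }
};

// Roughly what a TEST_CASE macro might expand to:
static int run_count = 0;
static void test_blah() { ++run_count; }
static Registrar reg_blah("blah", &test_blah);

// The framework later walks the registry and runs everything.
int run_all_registered() {
    for (auto& e : registry()) e.fn();
    return static_cast<int>(registry().size());
}
```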
Once I’d done that I could run each individual test from the self test suite in the IDE and manually check that the output matched the output from Catch.
A fortunate co-incidence
I had been thinking about one of the other tests in the test suite that I hadn’t managed to implement. The code collected all the registered tests, worked out whether they were expected to ‘pass’ or ‘fail’ and then ran them all in two batches, one for passes and one for failures. Around this time, Phil had written a Python script that verified the expected output from running all tests, so he removed this code. I had a feeling this code might be useful, and so it prompted me to think about how I could perhaps use a similar technique to run all the tests automatically in VS. My idea was to run all the tests in a ‘batch’, then use a similar Python script to generate compatible output from the VS output, instead of having to manually check every time.
However, in that version I didn’t register any tests; my initial implementation of
TEST_CASE just created a test class and ran the method inline, so didn’t need it. After a little tinkering with a few angle brackets, I discovered that I could indeed implement the whole of Catch’s registration mechanism in VS. Then I started to wonder what else I could do with it...
A new goal!
I started with a new macro that didn’t register a catch test case but instead asked the Catch registration object to run all the tests. That worked but I knew that Phil was able to use Catch’s ‘tag’ filtering to selectively run different groups of tests and I wanted to be able to do the same thing. I also realised that if I could somehow call this ‘batch test’ using the VS command line tools then I would be able to easily integrate tests written for Catch into other Continuous Integration environments, such as TeamCity, TFS or Jenkins.
During my first encounter with Native C++ tests I had discovered that VS2012 Native tests could not use MSTest.exe to run them from the command line. Instead it seemed that MSTest.exe had been deprecated in favour of vstest.console.exe, which had the necessary plumbing to understand binaries built for native tests and run them. A brief look at the command line parameters for both tools suggested the options for passing filters into tests were going to be limited to a single textual parameter (‘Category’ for MSTest and, somewhat bizarrely, ‘Owner’ for Native vstest.console.exe).
I did explore the possibility of feeding parameters into the tests as a database but that seemed overly convoluted and it wasn’t clear if it would work in Native C++ tests (I don’t think it does...) so I ended up with a macro whereby I could specify an identifier that would be recognised by the VS tools and that I could ‘map’ to Catch ‘tags’ to provide filtering using the existing Catch code, the snappily named
CATCH_MAP_CATEGORY_TO_TAG:
CATCH_MAP_CATEGORY_TO_TAG(all, "~[vs]");
This runs all the registered tests except those tagged ‘[vs]’, which corresponds to the default run of Catch on the command line. This macro also changes the default behaviour of Catch so that instead of stopping immediately (as we need for the IDE) it runs as many tests as possible. Sadly, this change in behaviour also exposed some shortcomings in my implementation of the reporter and capture mechanisms; in some circumstances expected output would be lost. This required a bit more rework of Catch’s internals, in particular I found that I needed to push the current test state onto the stack so that the reporter could use that information when a failing test was unwound.
Then I developed some new Python scripts that parsed and verified the Catch output and compared that against the output from MSTest.exe/vstest.console.exe (output from these tools can be directed to a .trx file). Aside from some minor presentation differences, this worked well and was good enough to validate that the VS code is doing the right thing, at least for ‘all’ tests.
The final section
There were still two things that I couldn’t validate though; the Catch self test validation script runs a set of tests that shows output from successful tests, and another that aborts after 4 failures. All these things can be easily changed using different parameters from the Catch command line. Could I replicate this somehow? What I wanted was to be able to define test parameters before I used
CATCH_MAP_CATEGORY_TO_TAG. Such changes should apply to the current batch test run only; subsequent test runs should revert to defaults. A few additional macros and some additional configuration classes allowed me to do this too:
CATCH_CONFIG_SHOW_SUCCESS(true)
CATCH_CONFIG_WARN_MISSING_ASSERTIONS(true)
CATCH_MAP_CATEGORY_TO_TAG(allSucceeding, "~[vs]");
This produced all the correct output, but in the wrong order. The order that the tests are run depends on their order of registration with the global registrar, which of course depends on the order the compiler/linker decides to implement static objects. Phil normally uses OSX to develop Catch, and this does things in a different order from VS. The solution is to sort the output by test name in the validation scripts before it can be compared.
The same problem afflicts the test that aborts after 4 failures, but the effect is slightly different. The expected output for OSX was presumably generated using XCode/Clang, and the tests that get executed before reaching 4 failures were different under VS, so I got a completely different set of failures! So finally I had to add two more macros; one that ‘registers’ a test to be run in a specific order and one that runs the ordered list of tests (Listing 3).
Wrap up
I now have a very flexible test environment that I can use to share Catch source code between Visual Studio and command line Catch. If I want to avoid the torture of the Test Explorer, I can run Catch from the command line by simply adding a main.cpp that specifies
CATCH_CONFIG_MAIN. For those times when I need to resort to the debugger, I can easily run individual tests in the IDE. As a bonus, I can also specify a ‘batch run’ that uses the built in VS command line tools, which means that integration with TeamCity (or other CI environments) should be easy. I think my goals have been met; if you are suffering from similar frustrations, please give the fork a try and let me know what works, and what doesn’t.
Finally, I’ve been discussing with Phil the possibility that the code for my fork could be merged back into the mainline; my understanding is that he is keen to do this, although as far as I know he hasn’t had time to take a good look at what I’ve done to his code yet! So I hope that this will happen, or perhaps will have happened by the time you read this...
References
[Catch] My fork of Catch:
[Nash] Phil Nash’s Catch framework:
[VS] Documentation for VS integration:
[MSTest]
Source: https://accu.org/index.php/journals/1851
The Java XML Validation API can be used to validate XML against an XSD in a Java program.
The javax.xml.validation.Validator class is used in this program to validate XML against the XSD.
Validate XML against XSD
Here are the sample XSD and XML files used.
Employee.xsd
<?xml version="1.0" encoding="UTF-8"?>
<schema xmlns="http://www.w3.org/2001/XMLSchema"
        targetNamespace="https://www.journaldev.com/Employee"
        xmlns:empns="https://www.journaldev.com/Employee"
        elementFormDefault="qualified">

    <element name="empRequest" type="empns:empRequest"></element>
    <element name="empResponse" type="empns:empResponse"></element>

    <complexType name="empRequest">
        <sequence>
            <element name="id" type="int"></element>
        </sequence>
    </complexType>

    <complexType name="empResponse">
        <sequence>
            <element name="id" type="int"></element>
            <element name="role" type="string"></element>
            <element name="fullName" type="string"></element>
        </sequence>
    </complexType>
</schema>
Notice that the above XSD contains two root elements and a target namespace. I have created two sample XML files from the XSD using Eclipse.
EmployeeRequest.xml
<?xml version="1.0" encoding="UTF-8"?>
<empns:empRequest xmlns:empns="https://www.journaldev.com/Employee">
    <empns:id>5</empns:id>
</empns:empRequest>
EmployeeResponse.xml
<?xml version="1.0" encoding="UTF-8"?>
<empns:empResponse xmlns:empns="https://www.journaldev.com/Employee">
    <empns:id>1</empns:id>
    <empns:role>Developer</empns:role>
    <empns:fullName>Pankaj Kumar</empns:fullName>
</empns:empResponse>
Here is another XML file that doesn’t conform to Employee.xsd.
employee.xml
<?xml version="1.0"?>
<Employee>
    <name>Pankaj</name>
    <age>29</age>
    <role>Java Developer</role>
    <gender>Male</gender>
</Employee>
Here is the program used to validate all three XML files against the XSD. The validateXMLSchema method takes the XSD and XML file paths as arguments, and returns true if validation succeeds, false otherwise.
XMLValidation.java
package com.journaldev.xml;

import java.io.File;
import java.io.IOException;
import javax.xml.XMLConstants;
import javax.xml.transform.stream.StreamSource;
import javax.xml.validation.Schema;
import javax.xml.validation.SchemaFactory;
import javax.xml.validation.Validator;
import org.xml.sax.SAXException;

public class XMLValidation {

    public static void main(String[] args) {
        System.out.println("EmployeeRequest.xml validates against Employee.xsd? "
                + validateXMLSchema("Employee.xsd", "EmployeeRequest.xml"));
        System.out.println("EmployeeResponse.xml validates against Employee.xsd? "
                + validateXMLSchema("Employee.xsd", "EmployeeResponse.xml"));
        System.out.println("employee.xml validates against Employee.xsd? "
                + validateXMLSchema("Employee.xsd", "employee.xml"));
    }

    public static boolean validateXMLSchema(String xsdPath, String xmlPath) {
        try {
            SchemaFactory factory = SchemaFactory.newInstance(XMLConstants.W3C_XML_SCHEMA_NS_URI);
            Schema schema = factory.newSchema(new File(xsdPath));
            Validator validator = schema.newValidator();
            validator.validate(new StreamSource(new File(xmlPath)));
        } catch (IOException | SAXException e) {
            System.out.println("Exception: " + e.getMessage());
            return false;
        }
        return true;
    }
}
Output of the above program is:
EmployeeRequest.xml validates against Employee.xsd? true
EmployeeResponse.xml validates against Employee.xsd? true
Exception: cvc-elt.1: Cannot find the declaration of element 'Employee'.
employee.xml validates against Employee.xsd? false
The benefit of using the Java XML validation API is that we don’t need to parse the file ourselves, and no third-party APIs are used.
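One limitation of the program above is that the default error handler stops at the first problem. If you want every violation reported in one pass, you can install a custom org.xml.sax.ErrorHandler that records errors instead of throwing. The sketch below is not from the original article; the class name, the tiny sample schema, and the helper method are invented for illustration:

```java
import java.io.IOException;
import java.io.StringReader;
import java.util.ArrayList;
import java.util.List;
import javax.xml.XMLConstants;
import javax.xml.transform.stream.StreamSource;
import javax.xml.validation.Schema;
import javax.xml.validation.SchemaFactory;
import javax.xml.validation.Validator;
import org.xml.sax.ErrorHandler;
import org.xml.sax.SAXException;
import org.xml.sax.SAXParseException;

public class XsdErrorCollector {

    // Tiny illustrative schema: <emp> must contain an int <id> followed by a string <role>.
    public static final String SAMPLE_XSD =
            "<?xml version=\"1.0\"?>"
            + "<schema xmlns=\"http://www.w3.org/2001/XMLSchema\">"
            + "  <element name=\"emp\">"
            + "    <complexType><sequence>"
            + "      <element name=\"id\" type=\"int\"/>"
            + "      <element name=\"role\" type=\"string\"/>"
            + "    </sequence></complexType>"
            + "  </element>"
            + "</schema>";

    /** Validates xml against xsd and returns ALL recoverable errors, not just the first. */
    public static List<String> validate(String xsd, String xml) {
        final List<String> errors = new ArrayList<>();
        try {
            SchemaFactory factory = SchemaFactory.newInstance(XMLConstants.W3C_XML_SCHEMA_NS_URI);
            Schema schema = factory.newSchema(new StreamSource(new StringReader(xsd)));
            Validator validator = schema.newValidator();
            validator.setErrorHandler(new ErrorHandler() {
                public void warning(SAXParseException e) { errors.add("warning: " + e.getMessage()); }
                public void error(SAXParseException e)   { errors.add("error: " + e.getMessage()); }
                public void fatalError(SAXParseException e) throws SAXException {
                    errors.add("fatal: " + e.getMessage());
                    throw e; // well-formedness errors cannot be recovered from
                }
            });
            validator.validate(new StreamSource(new StringReader(xml)));
        } catch (IOException | SAXException e) {
            if (errors.isEmpty()) {
                errors.add("fatal: " + e.getMessage());
            }
        }
        return errors;
    }

    public static void main(String[] args) {
        // Two violations in one document: 'abc' is not an int, and <role> is missing.
        for (String err : validate(SAMPLE_XSD, "<emp><id>abc</id></emp>")) {
            System.out.println(err);
        }
    }
}
```

Because error() records the message and returns instead of throwing, the Validator continues and reports subsequent violations in the same document.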
Goutam Bhattacharya says
Hello Pankaj, This is really a good example, but the problem is the source passed to validator becomes locked. Delete operation on same file fails with exception. Could you please suggest a way out?
Tóth Mihály says
Is there any validation for Datamatrix string content.
I mean is there a standard combined validation with two pass?
In the first pass is the decoding of Datamatrix content
2nd pass: is a schema validation .
Sai says
How can I validate the XML file if the schemaLocation is provided in the file tags.
Thanks in advance.
vinay says
This validation works field by field in a top-down approach, like field1 comes first and field2 next; if both fields are invalid as per the XSD, an error will be thrown only for field1.
I really appreciate if u can share how to validate whole XML against XSD in one shot so that all the errors will be thrown in a single response.
Kannan.S says
what changes to make in the xsd so that the Employee.xml is valid
ss says
Hi all,
Please help me with this following scenario
1 multilevel XSD(parent child) with 1 XML
Please give me example…..
Thanks in Advance
Himanshu says
Hi Pankaj,
I have a situation where I will get some data from a Stream at run time, and I need to convert it into XML by performing some general validations. I do not have any XSD or XSLT.
Can you provide some pointers on the same.
Thanks,
Himanshu
MAI says
Hi,
I’m stuck here:
SchemaFactory factory = SchemaFactory.newInstance(XMLConstants.W3C_XML_SCHEMA_NS_URI);
Exception Ljava/lang/ExceptionInInitializerError; thrown while initializing Lorg/apache/xerces/impl/xs/SchemaGrammar;
Exception Ljava/lang/ExceptionInInitializerError; thrown while initializing Lorg/apache/xerces/impl/xs/XSConstraints;
java.lang.IllegalArgumentException:
Can you help me please?
I’m using Android Studio 1.2.2, jdk 1.8, android 4.4.2
Suresh says
How to create the xsd in the first place. Any tools (or Eclipse menu option)available to create the xsd from xml?
Thank you
Suresh Kumar
Bhavesh says
That’s what I was looking for. Thanks for sharing.
Brandon says
Just what I was looking for, thanks!
Sathish Gollapally says
Very good article and easy to use also……
krishanveni says
Hi, please provide the code for the same as mentioned above, but for multiple XSDs rather than a single one.
So my requirement is: how to validate XML with multiple XSD schemas. It is a very urgent requirement in my project, so please give the code. One XSD is included in another XSD, and so on. So I want to validate this XML file against the multiple schemas.
Regards
Krishanveni
Mintakastar says
Hi, this worked wonderfull.
but i have a little change, what if i have the XML and XSD files, into strings?
lets say a string read from the database, with the xml , xsd information,
String xml =””;
String xsd =””;
then
validateXMLSchema(xsd,xml)
How do i convert the “new File” to accept the String with the data?
or, do convert the “newSchema” and “validator.validate” to accept the String with the data?
Schema schema = factory.newSchema(new File(xsdPath));
validator.validate(new StreamSource(new File(xmlPath)));
thanks!!
Darko says
Short and clear.
Thanks a lot for this good start-up example.
Srinivas says
Simple and sweet. Thanks
Phanendra Akurathi says
Is it possible to identify all the invalid xml tags in one go?
Rohit More says
I hope above link would help you. 🙂
Rajesh Kumar Yuvaraj says
Very good and Simple solution Pankaj. Keep Going
kamal says
With above mentioned link still i am not able to get all exception, It is throwing only single exception, Can you please let me know..It is very urgent.. In my xml i have 3-4 Enums, and if i delete all still it throws only single exception, not for all enums, can any body know the solutions. its very urgent.
Source: https://www.journaldev.com/895/how-to-validate-xml-against-xsd-in-java
By: Charlie Calvert
Abstract: This article explains how to take a default ASP.NET CodeBehind project of the type created in Visual Studio.net and run it under Mono on Linux or Windows.
This article explains how to get a standard CodeBehind ASP.NET program
up and running under Mono on Linux or Windows.
Mono is an open source implementation of Microsoft's .NET technology.
It is very much a work in progress, and so having the latest version is
important. The Mono team is going through the Microsoft .NET standard,
implementing each class and each method, one at a time, as fast as they
can. With each build, many ASP.NET standard functions are added to Mono.
If you are interested in Mono development on Linux, you should read my article
on installing the latest Mono code. Getting the latest build is important
if you want to have the widest possible range of code available to you
when you run your ASP.NET applications on Linux.
When you enter Visual Studio or C#Builder and create a default ASP.NET Web
Application, you end up with four key files: WebForm1.aspx, WebForm1.aspx.cs, Global.asax.cs, and AssemblyInfo.cs.
The first file, WebForm1.aspx, contains standard HTML code slightly
modified to support the ASP.NET standard. If that were the only file
in your project, then you would be in good shape, since either Mono's or
Microsoft's tools would automatically compile it for you. The other files,
however, are CSharp files that need to be compiled manually. It is the
compilation of these files that I believe needlessly discourages some Mono
developers.
The actual compilation of the files can be accomplished at the command
line with a single call to the Mono CSharp compiler
(mcs). The short file found in Listing 1 is a Linux
script that will perform the compilation. The same command will work in a
Windows batch file, but minus the first line which references the Linux
bash script engine. Note that the call to mcs, the Linux compiler, could all be
on one line. The code shown here will compile and run cleanly on Linux as
is, but you might want to place it all on one line in Windows. I have
broken the call to mcs into three lines with the backslash character in
order to make it more readable. You can remove the backslash character
and put the command all on one line.
Listing 1: A simple script for compiling an ASP.NET program created
in Visual Studio.
#! /bin/bash

mcs /t:library /out:WebApplication1.dll \
    -r:System.Web -r:System.Data -r:System.Drawing \
    AssemblyInfo.cs WebForm1.aspx.cs Global.asax.cs
On the first line of the call to mcs, the first argument (/t:library) tells the
compiler that you want to create a library. In Mono, libraries have a DLL
extension whether they are run on Windows or on Linux. The second
argument states that you want to create a library called
WebApplication1.dll. After the compilation is complete,
you should copy this file to a directory called bin,
which should be manually created one level further from the root of the
file system. The end result is that WebForm1.aspx will be one level
closer to the root than WebApplication1.dll:
/home/charlie/src/WebForm1.aspx
/home/charlie/src/bin/WebApplication1.dll
The second line of the call to mcs in Listing 1 specifies links to
the libraries that are needed to complete the compilation. A default
Visual Studio 2003 project depends on System.Web.dll, System.Data.dll
and System.Drawing.dll. By including these references in the script,
the Mono CSharp compiler knows to resolve the calls to these system
libraries.
The third line in the call to mcs lists the three CSharp source files
that are generated by a default Visual Studio project. All three
files will be compiled and the resulting binary code will be stored in
WebApplication1.dll.
After executing this call to mcs, on either Linux or Windows, you
will end up with WebApplication1.dll. On my system, the file is 4608
bytes in size. As explained earlier, I copied this file to a directory
called bin, which is located one level further from the root of the
file system than WebForm1.aspx.
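The build-and-deploy layout described above can be sketched as a short shell session. The paths are illustrative, and the mcs invocation is shown only as a comment since it requires Mono to be installed; placeholder files stand in for the real sources and the compiled output:

```shell
# Illustrative layout: WebForm1.aspx at the web root, the DLL in bin/ beneath it.
root=$(mktemp -d)                      # stands in for /home/charlie/src
mkdir -p "$root/bin"
touch "$root/WebForm1.aspx"

# With Mono installed, the library would be built and placed like this:
#   mcs /t:library /out:WebApplication1.dll \
#       -r:System.Web -r:System.Data -r:System.Drawing \
#       AssemblyInfo.cs WebForm1.aspx.cs Global.asax.cs
#   cp WebApplication1.dll "$root/bin/"
touch "$root/bin/WebApplication1.dll"  # stand-in for the compiled output

ls "$root" "$root/bin"
```

The point of the exercise is the relative position of the two files: the page one level closer to the root, the library in the bin subdirectory beneath it.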
After compiling your ASP.NET project, you will want to run it. On
Linux, this is a fairly straightforward process. I will describe the
necessary steps in the next two sections of this article. The first
section is on Linux, the second on Windows.
Mono comes with a special Web Server called xsp.exe. Starting the
server is very simple. On my system, I simply type mono
/usr/bin/xsp.exe at the shell prompt. The output from the command
looks like this:
[charlie@rohan MonoCodeBehind]$ mono /usr/bin/xsp.exe
Listening on port: 8080
Listening on address: 0.0.0.0
Root directory: /home/charlie/src/csharp/WebApps/MonoCodeBehind
Hit Return to stop the server.
By default, the server will start on port 8080. This means that you
do not have to shut down Apache, if you happen to have it running. Apache is, of course, running on port
80. For now, you want the root directory to be the directory in which
you have stored your source files. In particular, it should be where
WebForm1.aspx is stored. xsp provides options for changing the root
directory or the port. You can see all the options by typing mono
/usr/bin/xsp.exe --help:
xsp.exe [--root rootdir] [--applications APPS] [--virtual virtualdir]
[--port N] [--address addr]
--port N: n is the tcp port to listen on.
Default value: 8080
AppSettings key name: MonoServerPort
--address addr: addr is the ip address to listen on.
Default value: 0.0.0.0
AppSettings key name: MonoServerAddress
--root rootdir: the server changes to this directory before
anything else.
Default value: current directory.
AppSettings key name: MonoServerRootDir
--applications APPS: a comma separated list of virtual directory and
real directory for all the applications we want to manage
with this server. The virtual and real dirs. are separated
by a colon.
Samples: /:.
the virtual / is mapped to the current directory.
/blog:../myblog
the virtual /blog is mapped to ../myblog
/:.,/blog:../myblog
Two applications like the above ones are handled.
Default value: /:.
AppSettings key name: MonoApplications
--nonstop: don't stop the server by pressing enter. Must be used
when the server has no controlling terminal.
--version: displays version information and exits.
You can now start Mozilla and browse to WebForm1.aspx on port 8080. Note that you have effectively
made the directory in which you stored WebForm1.aspx the root
directory for the web server. As a result, the URL is very
straightforward.
At this stage, you should see your ASP.NET program in your browser,
as shown in Figure 1. Please note that I have modified the default
project slightly to include a button and a text field.
Figure 1: A simple ASP.NET program running in the Mozilla browser
on RedHat Linux 9.0 with all the latest updates from the RedHat
Network and the Red Carpet mono channel.
There is no special preparation that you need do to run your
project on Windows. If you have Microsoft's IIS server installed on
your system, then all you need to do is set up a virtual directory,
copy your files into it, and aim your browser at your copy of
WebForm1.aspx.
When setting up your project, remember to place WebApplication1.dll in a subdirectory one level further from the root
of the file system than the directory that holds WebForm1.aspx.
If WebForm1.aspx is in a virtual directory called MyWebForm, then you
should point your web browser at WebForm1.aspx inside that virtual directory. The result should look very much
like what is shown in Figure 1. Note that I modified the default
VS or C#Builder project slightly to contain a text field and a button
so that you can see that the project actually does something.
To set up a virtual directory in IIS, run the snap-in administration tool for IIS (c:\windows\System32\inetsrv\inetmgr.exe). Right click on the Default Web Site and choose New | Virtual Directory to run a wizard. After completing the wizard, select the directory you created, right click to bring up its properties, and use the Application Setting to create an Application Name. The execute permissions for the directory can be left at Scripts only.
This article has been about how to get a default ASP.NET project up and
running in Mono. However, I think it is important to show you the actual source
files, even if I don't explain them in any detail. That way you can check
your source code against the code I created in order to confirm that you
are following all the steps correctly.
As you may have noticed in the screen shot, I did modify the default
project slightly by adding a button and text field, and code to respond to
clicks on the button by placing the word "Sam" in the text field. This
modifies the default project just enough so you can sense something
has been accomplished, without muddying the waters by adding complex code
not germane to the topic of this article.
The files needed in this project are shown in Listings 2 - 5.
Listing 2: The WebForm1.aspx file.
<%@ Page language="c#"
    Codebehind="WebForm1.aspx.cs"
    AutoEventWireup="false"
    Inherits="WebApplication1.WebForm1" %>
<HTML>
  <body>
    <form id="Form1" method="post" runat="server">
      <asp:Button id="Button1"
        style="Z-INDEX: 101;
               LEFT: 144px;
               POSITION: absolute;
               TOP: 64px"
        runat="server"
        Text="Button">
      </asp:Button>
      <asp:TextBox id="TextBox1"
        style="Z-INDEX: 102;
               LEFT: 256px;
               POSITION: absolute;
               TOP: 72px"
        runat="server">
      </asp:TextBox>
    </form>
  </body>
</HTML>
Listing 3: The WebForm1.aspx.cs file.
using System;

namespace WebApplication1
{
    public class WebForm1 : System.Web.UI.Page
    {
        protected System.Web.UI.WebControls.Button Button1;
        protected System.Web.UI.WebControls.TextBox TextBox1;

        override protected void OnInit(EventArgs e)
        {
            // AutoEventWireup is false, so wire the handler manually.
            Button1.Click += new EventHandler(this.Button1_Click);
            base.OnInit(e);
        }

        private void Button1_Click(object sender, EventArgs e)
        {
            TextBox1.Text = "Neither fire nor wind, birth nor death can erase our good deeds.";
        }
    }
}
Listing 4: The Global.asax.cs file.
using System;
using System.Collections;
using System.ComponentModel;
using System.Web;
using System.Web.SessionState;
namespace WebApplication1
{
/// <summary>
/// Summary description for Global.
/// </summary>
public class Global : System.Web.HttpApplication
{
/// <summary>
/// Required designer variable.
/// </summary>
private System.ComponentModel.IContainer components = null;

protected void Application_Start(Object sender, EventArgs e)
{
}
#region Web Form Designer generated code
/// <summary>
/// Required method for Designer support - do not modify
/// the contents of this method with the code editor.
/// </summary>
private void InitializeComponent()
{
this.components = new System.ComponentModel.Container();
}
#endregion
}
}
Listing 5: The AssemblyInfo.cs file.

// The location of the project output
// directory is dependent on whether you are working with a local or web project.
// For local projects, the project output directory is defined as
// <Project Directory>obj<Configuration>. For example, if your KeyFile is
// located in the project directory, you would specify the AssemblyKeyFile
// attribute as [assembly: AssemblyKeyFile("....mykey.snk")]
// For web projects, the project output directory is defined as
// %HOMEPATH%VSWebCache<Machine Name><Project Directory>obj<Configuration>.
// (*) Delay Signing is an advanced option - see the Microsoft .NET Framework
// documentation for more information on this.
//
[assembly: AssemblyDelaySign(false)]
[assembly: AssemblyKeyFile("")]
[assembly: AssemblyKeyName("")]
In this article you have seen how to compile a default Visual Studio or
C#Builder CodeBehind ASP.NET project under Mono. It turns out not to be a
difficult task, so long as you understand the steps involved. The key
points to remember are that you need to use mcs to compile the CSharp
source files in your project, and that you should place the resulting DLL in a
directory called bin that is one level further from the root than the
directory where your source is stored.
Source: http://edn.embarcadero.com/article/32057
Asker
Cookie or Session Variable not updating
- Hello,
I have a page that is running an SQL query and looking for a certain keyword in an items name. If the keyword is found, It then looks for a cookie. If the cookie doesn't exist, it then creates one.
On the next iteration (20 secs) the cookie will be located, and, if the values don't match, then it plays a sound, and then overwrites the cookie with the new values.
On the next iteration, the values match, so the cookie won't be overridden, and no sound is played... or .. that's what I am trying to do.
However, the values do not seem to be updated.
My code is
<td>@item2.name</td>
<td width="30%" style="padding-left:10px; color:red; font-weight:bolder">@item2.time.ToString("HH:mm")</td>
<td width="20%" style="padding-left:10px; color:lawngreen; font-weight:bold">TABLE: @item.table</td>
<td width="10%" style="padding-left:10px; color:red">@item.no</td>
}
</tr>

What I am trying to do is get it to look through the items for any that start with NOW, and if that item's time and number (No) are different (No may be greater or lesser; time should always be greater) from those in the cookie, play a sound and update the cookie with the latest values (so next time around, if no extra item starting with NOW has been added/found, the cookie will not be written to and the sound will not play).
I understand that cookies may not be the way to go here, and Session variable(s) might be a better option, so I have tried this in HomeController.cs
public class HomeController : Controller
{
    public ActionResult Index()
    {
        Session["awayTime"] = "";
        return View();
    }
}

..and then this code snippet in one of the subpages
if (Session["awayTime"] == null)
{
    Session["awayTime"] = @item2.time.ToString("HH:mm");
}
if (Session["awayTime"].ToString() != @item2.time.ToString("HH:mm"))
{
    <audio src="/sounds/alert.wav" autoplay></audio>
    Session["awayTime"] = @item2.time.ToString("HH:mm");
}
So, the Index page loads up first (and sets the session variable to a null value), and the user (me) picks the subpage. Because on the first run the variable is null, it assigns a value; then when the page is reloaded, the variable value doesn't match the existing one, so it plays a sound, then reassigns the value.
However.... neither the cookie option nor the Session variable seems to work. the cookie does not update with the new details, and I am not sure I am using the session variable correctly, as both options play the sound on reload (plus, if the user bookmarks the subpage instead of the index, the session variable solution (as it is) will not work).
Can someone help me, please ?
Thank You
- Edited by G-Oker 07 June 2018 8:49
- Moved by CoolDadTxMVP 07 June 2018 18:26 ASP.NET related
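A likely reason the cookie version never updates (assuming classic ASP.NET semantics; the names below mirror the post, but this is only a sketch): cookies read from Request.Cookies are a snapshot of what the browser sent, and modifying that collection does not emit a Set-Cookie header. The updated value has to be written to Response.Cookies:

```csharp
var current = item2.time.ToString("HH:mm");
var cookie = Request.Cookies["awayTime"];      // what the browser sent; changing it has no effect

if (cookie != null && cookie.Value != current)
{
    // values differ: play the alert sound here
}

// Persisting the new value requires writing to the *response*:
var updated = new HttpCookie("awayTime", current);
updated.Expires = DateTime.Now.AddHours(12);   // assumed lifetime, adjust as needed
Response.Cookies.Add(updated);
```

Writing cookies from inside a Razor view is awkward; moving this logic into the controller action (where Response is naturally available) would also address the Session concern, since the comparison then runs once per request.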
Source: https://social.microsoft.com/Forums/id-ID/c5684e98-cc37-491b-827d-8bf73af1450c/cookie-or-session-variable-not-updating?forum=Offtopic
Hello,
I want to translate SharePoint 2010 into my language and use it in my language.
How can I do it?
I have Windows 2008 Server R2
SharePoint 2010 (64Bit)
Visual Studio 2008(SQL Server Express)
Help me to do this job.
Also, I want to change the data setting. Is it possible to do it?
This month we examine the Microsoft translation Web service and show you how you can incorporate translation services into your own Web application.
Sandor Maurice & Vikram Dendi
MSDN Magazine April 2009
I cannot translate an object after the same object has been translated via a storyboard.
Here is the sample code:
XAML:
<Window x:Class="WpfApplication1.MainWindow"
xmlns=""
xmlns:x=""
Title="MainWindow" Width="400"<
I use simple default route
routes.MapRoute( "Default", "{controller}/{action}/{id}");
The problem is that I want to translate some paths into my language, but the names of the controllers and actions should be left in English. For example, I want to map the URL "/wydarzenia/4/rejestruj" to "/events/4/register".
I tried
routes.MapRoute( "Register", "wydarzenia/{id}/rejestruj", new { controller = "Events", action = "RegisterToEvent" });
But this doesn't work. Any ideas?
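One common cause here (assuming standard ASP.NET MVC routing; this is a sketch, not a confirmed diagnosis of the poster's project) is registration order: routes are matched top-down, so the catch-all Default route must come after the specific Polish route, or it will swallow "wydarzenia/4/rejestruj" first:

```csharp
public static void RegisterRoutes(RouteCollection routes)
{
    // Specific route first: "/wydarzenia/4/rejestruj" -> EventsController.RegisterToEvent(id: 4)
    routes.MapRoute(
        "Register",
        "wydarzenia/{id}/rejestruj",
        new { controller = "Events", action = "RegisterToEvent" });

    // Catch-all last; if it were registered first, it would match every URL
    // (treating "wydarzenia" as a controller name) before "Register" is tried.
    routes.MapRoute("Default", "{controller}/{action}/{id}");
}
```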
The default rule denies all traffic.
I want to allow hotmail.com and translate.google.
I am trying to allow the domains like *.hotmail, *.live.com and *.translate.google.com,
but it does not work for me.
What is the solution? Thanks
I have one page that is written in coldfusion that I would like to translate into asp.net. I have never used coldfusion and don't even know where to start. Is there someone out there that can help?
I have this little function for forcing a download of MP3 files. I have it integrated into my VB app but would like it to be converted from C# to VB so that I can use other functions in my APP_Code directory. If anyone can help I would surely appreciate it. I have tried the automatic translators on the web but they don't seem to be able to handle this little widget. Here is the code:
using System;
using System.Collections.Generic;
using System.Linq;
using System.Web;
/// <summary>
/// Forces Download/Save rather than opening in Browser
/// </summary>
public static class HttpExtensions
{
public static void ForceDownload(this HttpResponse Response, string virtualPath, string fileName)
{
Response.Clear();
Response.AddHeader("content-disposition", "attachment; filename=" + fileName);
Response.WriteFile(virtualPath);
Response.ContentType = "";
Response.End();
}
}
Thanks in advance.
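A direct VB translation of the snippet above (an untested sketch). Note that extension methods require VB 2008 / .NET 3.5 or later via the System.Runtime.CompilerServices.Extension attribute; on earlier versions, drop the attribute and call it as an ordinary shared method:

```vb
Imports System.Web
Imports System.Runtime.CompilerServices

''' <summary>
''' Forces Download/Save rather than opening in Browser
''' </summary>
Public Module HttpExtensions

    <Extension()> _
    Public Sub ForceDownload(ByVal Response As HttpResponse, _
                             ByVal virtualPath As String, _
                             ByVal fileName As String)
        Response.Clear()
        Response.AddHeader("content-disposition", "attachment; filename=" & fileName)
        Response.WriteFile(virtualPath)
        Response.ContentType = ""
        Response.End()
    End Sub

End Module
```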
using vb.net/asp.net 2005.
I am trying to set the border color of cells of my datagrid to an HTML color code: #c1c1c1
I have the following and would like to convert it so that it uses my color code and not the text of the color name, does anyone know the syntax? what I have is:
e.Row.Cells(1).BorderColor = Drawing.Color.Red
thx
MC
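One way to do this: System.Drawing.ColorTranslator parses HTML color strings, so the hex code can be used directly (sketch, not tested against the poster's grid):

```vb
e.Row.Cells(1).BorderColor = Drawing.ColorTranslator.FromHtml("#c1c1c1")
```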
Hi,
I have some Named Sets that I want to translate. I have done it exactly the same way as with the Calculated Members, but the Sets don't use the translations. Any idea why?
Source: http://www.dotnetspark.com/links/42086-translate.aspx
ZF has often been accused of "singletonitis." While we're not sure if we completely agree, we will note that in most cases, the singletons.

Goto is often considered "evil," but is invaluable when creating Finite State Machines (FSM) and parsers; usage of goto will be evaluated on a case-by-case basis. Some examples of components that could potentially benefit from such an implementation include Zend_Search_Lucene (which already implements a FSM), the MVC (more below), Zend_Ical, and Zend_Markup (though not all of these may make use of it).

Ok, but why not get rid of any naming inconsistency now with 2.0? It is not such a big deal to do the renaming from Zend_Foo_Bar_Whatever to Zend\Bar\Foo\Whatever. Every IDE nowadays can do this without much effort. These changes only need to be documented in a migration guide.

If you accept that naming is a critical part of the design of the code, then it would seem to be in our best interest to seize this opportunity to fix as much as possible in pursuit of this goal. The longer bad names stay around, the higher their cost to fix will become. IIRC, I believe that Kent Beck said in the Refactoring book that you should make a point to fix bad names before other issues. Good names may be the single biggest factor affecting the understandability of the code.

Case sensitivity may be how a computer works, but it's a bitch to debug; I can't tell you how many times I've seen posts on the mailing lists or issue tracker where the problem ended up being option case, nor how many back-and-forth messages/comments it takes to determine this. It's relatively easy to determine when the key in question refers to a class name, but other options are less obvious.
Nov 11, 2009
George S.
How about focusing on the extendability of classes? I can't tell you how many times I've run into a situation where I need to modify the behavior of a method, and I've run into either private variables (without getters/setters) or final classes in the ZF. There are definitely cases for their use, but I feel that there are a lot of private variables that could be protected. Also some classes are declared final, but shouldn't be. I can't remember any specific examples off hand, but I think the REST server/client has a couple.
Nov 12, 2009
Roman S. Borschel
Simply making classes non-final or turning private to protected does not make something extensible. Without any further thought this makes them only "hackable". There is a very good rule of thumb for inheritance: design and document for inheritance or else prohibit it. Designing a class for inheritance means that the extension points are well-defined, the consequences of extending a class are documented, etc. That makes the class "robust". By just making classes non-final and members protected instead of private, you indirectly increase your maintenance burden and the chance of breaking backwards compatibility. Every single protected member might now be used by some application that relies on it; even renaming breaks BC. Every single protected method might now be overridden or used in a subclass, causing strange side-effects that were never anticipated.

The worst example I read somewhere, I think it was on a cakephp wiki page, is that "all members should be protected, this is a framework and thus should be extensible". This is just not extensibility, or at least a very unstable and unreliable extensibility.
Maybe you're frustrated because you were run into such situations where you needed to modify the behavior of a class in a way that was not intended. Probably you would've been happier at that point when it would've been possible to just extend that class or override that particular method. But you would be frustrated later when your code broke all of a sudden because the code you relied upon was refactored. If you demand everything to be protected/non-final, you put a huge burden on the framework developers to maintain backwards compatibility.

Generally, framework component developers cannot identify all use cases when authoring a new component. Like all great technologies, the really interesting use cases don't really become apparent until after people have had a chance to play with the code. That said, when we decide to restrict usage of final and private, and static state for that matter, it's because we feel there is both a public API as well as an extension point API that should be available to consumers. This allows us to not lock in (via BC) to a particularly inflexible and restrictive API, and also allows consumers to extend in ways that otherwise would have presented pain points. This also allows us to grow the original use case, and further unit test them as new use cases and possible extensions become available.

I'm not sure where you're looking, but there are exactly four classes marked final within ZF: 2 in the InfoCard component, one in AMF, and Zend_Version. I hardly think this will be an issue, but we will review those cases. (In your specific example of the REST server, it will be going away in 2.0, and we recommend using the new Zend_Rest_Route to build your RESTful applications via the MVC.)

Regarding private variable usage, we actually recommend protected visibility except in those cases where overriding could cause the implementation to break. There may be a few places we need to review the visibility, but I do not expect too many changes in this regard. As Roman notes, visibility is not the key to extensibility; class and component design is. The flex points should be well-defined and documented.
Nov 12, 2009
George S.
I was wrong earlier about making classes final; I forgot my use case... It was actually making methods final.
So here is a situation I ran in to... There is a REST API I wanted to consume. I knew the results of the REST calls I was making wouldn't change very often (maybe once a day), so I naturally wanted to cache the results. The static method getHttpClient in Zend_Service_Abstract is final, so there is no way to use a custom HttpClient... Also, the HttpClient as it exists right now doesn't support caching (and for good reason, this use case is very obscure). However, I couldn't find a good solution to implementing caching in my scenario without wrapping the entire process in a caching statement... (I'd rather cache the calls, not the final Zend_Rest response).

I'm still looking over the Controller 2.0 and Controller_Router 2.0 docs, but I didn't see any mention of Zend_Rest_Route in the classes section of the Router 2.0 wiki page? It will need to be updated along with the other routes, right?

I've read through all the docs on the wiki, and followed the discussion in the mailing list. First of all, let me say that this looks really good. I like the direction ZF is headed, and it certainly seems that we have enough enthusiastic contributors behind the wheel to make the framework go in a direction we're all happy with in the end. This needed to be said, because my notes below might come across as more harsh than they actually are.
<p>Unified constructors. This is simply a bad idea. It cannot be assumed that all classes are "configurable" objects (I mean, who are we, jQuery?). Granted, some of the classes in the framework (I've even written such a component myself :S) benefit from taking $options as a single constructor param, but most don't. Sure, an array is an efficient data structure in PHP, but we lose <em>all the good stuff</em>, namely a self-documenting syntax which is easy to learn, because it is typed and can be automatically parsed by IDEs. Not only IDEs, actually. My favorite way of learning a class is by looking at the constructor, either by opening the file directly or by looking at auto-completion suggestions in the IDE. I expect to find a clear overview of the required params for constructing the class in a valid state. This won't be the case with an $options array, which has to be manually written in a docblock. Sure, IDEs <em>could</em> parse this information, but we all know that won't be a viable feature for several years, and only in a few selected IDEs (probably as an extended "ZF" feature for Zend Studio which won't be contributed back to the original PDT project, and then we have it going with slow bureaucracy at least until PHP 6.0 is released and the whole PDT project will have to be rewritten once again).</p>
<p>But it goes beyond syntax and "pedagogical" reasons. I'd take it as far as to say that if this is a feature we <em>need</em> because constructor params change too frequently, something is completely funked further down the line. A constructor shouldn't need to take a whole load of params to construct the object in a valid state; it should be simple. Perhaps the class has too much responsibility, and should be refactored to separate its behaviour into more logical groupings. Of course this can't always be done, but I'd still argue that in most cases a constructor shouldn't take more than 3 params. This may sound blatantly provocative, so take it with a pinch of salt, but think about it; wouldn't it be nice? Super-easy to get an overview and learn the API.</p>
<p>For classes that absolutely <em>have</em> to take a single $options param, there is still a way to do it which is more easily parsed by both humans and IDEs. Make a FooOptions class which is consumed by the Foo constructor. This is something that is done in several other languages, and from my experience it works well when the constructor needs a bunch of options. For example, let's say we have an <code>Application</code> class that takes 10 params. Instead of putting those params in a naive array, make an <code>ApplicationOptions</code> class with 10 properties (and/or getters/setters), which allows us to do <code>public function __construct(ApplicationOptions $options)</code>. This allows for auto-completion in IDEs, it's easy to understand, OOP-wise it's easy to extend, and way easier to maintain in the long run than an array. And of course, all such classes (and indeed all constructors) should follow the principle of <em>convention over configuration</em>, i.e., the user should have to specify as few options as necessary. A nice side effect of this pattern is that such options classes may contain extra logic if that's something you need.</p>
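<p>A minimal sketch of the options-class pattern described above. All class names here are hypothetical illustrations, not actual ZF API:</p>

```php
<?php
// Hypothetical options class: sensible defaults, fluent setters,
// and IDE-friendly, typed access to each option.
class ApplicationOptions
{
    public $environment = 'production'; // convention over configuration
    public $cacheDir    = null;

    public function setEnvironment($env)
    {
        $this->environment = $env;
        return $this; // fluent interface for readable configuration
    }

    public function setCacheDir($dir)
    {
        $this->cacheDir = $dir;
        return $this;
    }
}

class Application
{
    private $options;

    // The type hint gives IDEs auto-completion and fails fast on bad
    // input, unlike an untyped $options array.
    public function __construct(ApplicationOptions $options)
    {
        $this->options = $options;
    }

    public function getEnvironment()
    {
        return $this->options->environment;
    }
}
```

<p>Usage would then read: create an <code>ApplicationOptions</code>, override only the options you need via the setters, and pass it to the <code>Application</code> constructor.</p>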
<p>I don't buy the argument that this makes (type 3) dependency injection a breeze. Surely, implementation-wise it's a breeze, because you can stuff whatever you like in an array and hope for the best, but dependency injection shouldn't trade off readable code, and you shouldn't have to depend on dependency injection to be able to create a class.</p>
<p>So yeah, I'm somewhat against unified constructors. Moving on...</p>
<p>Exceptions. What's the deal with those marker interfaces and having to wrap stuff to use it? From what I can tell, the argument narrows down to this: it allows us to catch "zend" exceptions, or "zend\somecomponent" exceptions. Let's say I'm writing some code in the zend\navigation namespace, and want to throw an InvalidArgumentException. Do I have to a) make an empty Exception marker interface in the namespace, then b) make an InvalidArgumentException that extends SPL's InvalidArgumentException and implements the marker interface? I'm asking not because I'm trying to bash the sketched solution, but because I find it unclear. It seems <em>extremely</em> cluttered, I have to admit, and there is no apparent use case where you'd benefit from catching exceptions based on component. Furthermore, an InvalidArgumentException is an InvalidArgumentException – its logical meaning doesn't change whether it's used in the zend\navigation namespace or the zend\view namespace. I've never understood why ZF does things so differently with exceptions than other languages/frameworks I know. Nowadays we're just throwing the general "component exception" no matter what; we don't have exception classes that carry any meaning by themselves. This is a shame, because it really is much easier to work with exception classes whose meaning you immediately understand. And to cap it all off, this would mean you wouldn't have to catch "general" exceptions where it doesn't make sense.</p>
<p>Let's use SPL exceptions directly; that's what they're there for. No need to wrap any exception class or make marker interfaces, because it doesn't really solve anything. I know there has been talk about making "translatable" exceptions and whatnot, but it doesn't look like this is happening any time soon, and in case it does, it could be implemented by retrofitting a TranslatableException interface or by introspecting general exception classes (which would make it work outside the ZF world as well).</p>
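<p>The two approaches under discussion can be sketched as follows; the class, interface, and function names are illustrative only, not actual ZF 2.0 names:</p>

```php
<?php
// Approach A: per-component marker interface (the wiki proposal).
// The marker is empty; it only tags exceptions as belonging to a component.
interface NavigationException {}

class NavigationInvalidArgumentException
    extends InvalidArgumentException
    implements NavigationException
{
}

// Approach B: throw the SPL exception directly, as the commenter suggests.
function setPages($pages)
{
    if (!is_array($pages)) {
        throw new InvalidArgumentException('$pages must be an array');
    }
}
```

<p>With approach A, a caller can catch either by component (<code>catch (NavigationException $e)</code>) or by meaning (<code>catch (InvalidArgumentException $e)</code>); with approach B, only the latter is possible, which is exactly the trade-off being debated.</p>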
<p>Now for the finale, our favorite topic of discussion: class/interface naming. I wrote a comment about this in the Controller 2.0 page yesterday, but it seems it got clogged in the tubes or something. I know this has been discussed before, but I'm curious to see the reasoning behind the current <code>Interface</code> suffix scheme, because as I recall it, this has never been anyone's favorite. First off, Interface is a long word, and takes up quite a bit of horizontal space. I'm not trying to be an 80-characters-per-line nazi here, but I generally am in favor of shorter, more descriptive names. Secondly, the</p>
<p>There, fixed it. A lot easier to read, and frankly it just looks way cooler. On a semi-uninteresting sidenote; shouldn't the controller take a more specific Event type, such as MvcEvent, ControllerEvent, or DispatchEvent? Let's not forget the mantra "descriptive names". I doubt it's the intention that a controller should be able to dispatch</p>
<p>The original thought behind the unified constructor is that, because this has become a common pattern throughout the framework, we should make it explicit. That said, you, Stefan Priebsch, and others are providing some excellent arguments against the concept – and I particularly appreciate that you are providing some alternatives to the practice.</p>
<p>The areas where the pattern has become useful can be lumped into two primary categories:</p>
<ul>
<li>Plugin/adapter classes</li>
<li>Service classes</li>
</ul>
<p>The first case encompasses components such as Zend_Form (particularly elements, validators, filters, and decorators) and Zend_Application (particularly resources); the second refers to the various Zend_Service consumables, a number of which have, due to changes in the remote APIs, needed to change constructor signatures. In the case of plugin/adapter classes, typically the objects are being created via a factory, and methods are not being called until after instantiation. In the case of service classes, often the values passed to the constructor are necessary in order to connect to a given web API – but will not be used until a call to the API is made.</p>
<p>I also like your idea of using configuration "containers" to pass to either the constructor or setOptions(). This has benefits particularly for those using IDEs, and provides an OOP paradigm; that said, I think an array and/or Zend_Config object also should be allowed. I'd like to examine this idea more.</p>
<p>You have a good grasp of the English language – but not all non-native speakers do. Coming up with a good name for an interface or abstract class can be difficult, particularly in a language as nuanced as English (e.g.: what do you call a Person interface? Personable? not quite the connotation desired...). Additionally, if you have both an interface and an abstract implementation sitting beside a concrete implementation, naming becomes even more difficult. Consider the case of routing: a good name for an interface might be "Router", but what do you call the concrete implementation? or an abstract implementation?</p>
<p>Finally, while API docs and IDEs may make finding the interfaces and abstract classes easy, what if you don't have the API docs in front of you, or don't use an IDE? How do you distinguish, by glancing through the filesystem, which class is an interface, which an abstract, etc.?</p>
<p>Also, thanks for summarizing the PHP Interop Group thingy. I was given a link on the mailing list, but couldn't really find anything on the topic at the URL. To be honest, I still don't get it, but that's fine. As long as I get to throw SPL exceptions, I'm happy.</p>
<p>A: I wouldn't expect anyone to learn the ins and outs of the framework architecture without ever using some sort of documentation. I'm thinking that if you're really looking for abstract classes or interfaces in a specific namespace, you probably already know what you're looking for (to some extent).</p>
<p>I just want to point out one thing with the naming questions: you came up with some right off the bat, but even in those, there's a ton of variability – does the indicator of abstract (regardless of what it is) come before or after the class/component name? Does "default" or "base" indicate abstractness? etc. The goal of the naming conventions is to take some of the variability out and make it consistent, particularly between different components.</p>
<p>I don't know how late-to-the-game I am with my reply, but I felt compelled to point something out to Robin, and other anti-unified constructionists out there: You can still "document" the constructor even with just an array as a parameter. Take the following code snippet as an example:</p>
<p>The main reason is that you give away the two main advantages of dependency injection through the constructor, explicitness and the ability to use type hints. You'd force developers to look at the source code to find out about required or optional dependencies (I'm just focussing on the dependency injection aspect of the parameter arrays here). Type hinting is possible in the individual setter methods, but not in the general setOptions() method. This will lead to problems, or require people to explicitly code type checks.</p>
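<p>A sketch of what such a docblock-documented array constructor might look like. The class name and option keys here are hypothetical, not ZF API, and the sketch also shows the weakness being criticised: nothing in the docblock is enforced by the engine.</p>

```php
<?php
// Hypothetical service stub with a "unified" array constructor.
class HttpClientStub
{
    public $timeout = 10;
    public $adapter = 'socket';

    /**
     * @param array $options Accepted keys:
     *     - timeout (int)    request timeout in seconds, default 10
     *     - adapter (string) transport adapter name, default 'socket'
     */
    public function __construct(array $options = array())
    {
        foreach ($options as $key => $value) {
            if (property_exists($this, $key)) {
                $this->$key = $value;  // no type check; a typo'd key is silently ignored
            }
        }
    }
}
```

<p>The docblock documents the keys for human readers, but a misspelled key such as <code>'timout'</code> is dropped without warning, which is precisely the debugging hazard the reply above points at.</p>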
<p>Another reason, albeit less important, is that you make it a lot harder to figure out which constructor arguments are optional. Constructor arguments with default values make that very easy.</p>
<p>The Foo implementation shown above has a hard-coded, implicit dependency on Option, which is really, really bad. Number one, you have to look at the Foo source code to learn about the dependency, as Option does not show up in the Foo API. Number two, you cannot easily swap out Option for anything else. Late static binding can obviously help, but if somebody wants to extend Option, they will still have an unnecessarily hard job.</p>
<p>In the above example, Option is basically a registry through the backdoor. Pass $options with "everything" to the Foo constructor, and let Option pick the values that Foo needs by checking which setters there are. I see a high potential for abuse.</p>
<p>In addition, Option makes some assumptions, for example the presence of a public setter method named setSomething(). Underscore-separated names are converted, which implies that you run into all the troubles of named parameters when renaming a setter method: the IDE is not going to help people, they have to figure out that they must fix their named parameters throughout the code. If you choose that route, you have to throw an exception in Option when a setter does not exist - otherwise people have to debug their code just to find out that due to a typo, one option value has not been set.</p>
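<p>The underscore-to-setter conversion and the suggested throw-on-unknown-setter behaviour can be sketched as follows; all names here are hypothetical, not the actual proposed Option class:</p>

```php
<?php
// Hypothetical mapper applying an options array via setters.
class OptionMapper
{
    public static function apply($object, array $options)
    {
        foreach ($options as $key => $value) {
            // 'cache_dir' => 'setCacheDir'
            $setter = 'set' . str_replace(' ', '', ucwords(str_replace('_', ' ', $key)));
            if (!method_exists($object, $setter)) {
                // Without this exception, a typo'd option key would be silently dropped.
                throw new BadMethodCallException("Unknown option '$key'");
            }
            $object->$setter($value);
        }
    }
}

class Consumer
{
    public $cacheDir;
    public function setCacheDir($dir) { $this->cacheDir = $dir; }
}
```

<p>Note how renaming <code>setCacheDir()</code> would break every caller that passes <code>'cache_dir'</code>, with no IDE support for finding them — exactly the named-parameter fragility described above.</p>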
<p>Since there are a lot of dependencies in Zend Framework classes, why not get rid of the Option class and use setter injection instead? Since the actual construction would be performed automatically by an injector/provider/container, there is really no difference. We could even adopt a new standard and name the setters injectCollaborator(), injectOtherCollaborator(), and so on.</p>
<p>Finally, array options: they could (and imho should) be disregarded, and simply be wrapped in a "ConfigureInterface" that handles arrays, for example Zend_Config itself; or maybe the "unified" constructor could handle this for you, which would be a "service" to the user, more than a recommended usage.</p>
<p>Based on your arguments and those others have made, I'm definitely inclined to revisit this decision, and you can see more of my thoughts on this in a reply to Robin Skoglund further up the thread. As I note there, the pattern really makes most sense for those classes being used as plugins/adapters, or for classes where we know that constructor arguments may vary over time (concrete service consumable classes – the Zend_Service tree, basically). Even so, there may be other approaches we can consider – and I'm willing to do so as long as we can approach all such cases in a uniform way. One issue ZF is often taken to task for is the inconsistency of its API, and that's one thing I want to address with 2.0.</p>
Nov 15, 2009
Iván Montes
<p>Since the dispatching process and the "plugin" systems are going to change, I was wondering if it wouldn't make sense to make their design suitable for a C-implemented extension. Those two "components" really need to be as fast as possible, and I assume that a "simple" PHP extension which implements support for them would really help to obtain some very good speed gains.</p>
<p>Some people apparently have thought about doing this already pre-1.0, and more recently we also gave that a shot ('round the release of 1.7). However, because we didn't have time, nor enough C programmers, we froze the project for a while/indefinitely (we'll see which of the two).</p>
<p>Even though I believe that all those improvements discussed here are really good, I think - as of today - ZF is mostly lacking good support on the Model side, and there is great room for improvement there. I feel that part has been under-represented in the discussions so far.</p>
<p>I would love to see the Service Layer being more exposed and more "default" so that developers stop putting business logic in their controllers and - even better - also stop putting it into the business objects. Business logic perfectly fits into the service layer, and ZF could support that by making a service layer "something really normal that is there by default". ORM is another topic that - I believe - is already covered by ideas of integrating Doctrine (2), or at least making usage of Doctrine, Propel etc. easier in ZF. Zend_Db is pretty fine, but Doctrine &amp; Co. really provide persistence in another dimension.</p>
<p>One thing missing also is solid support for transactions. Having something like JTA would really push ZF / PHP heavily towards "enterprise readiness". Applications are going to be more and more distributed, and therefore transaction management is getting even more important. Without solid transaction management, mission-critical enterprise applications are hardly imaginable, to be honest.</p>
<p>Personally, I'm not terribly interested in this. The request object is mutable, and by implementing request parameter => action method parameter mapping, you actually limit the functionality quite a bit, as well as put more of an onus on the end-user developer to define the required parameters. As an example, if, in the future, they require more parameters, they will have to change the method signature.</p>
<p>It could also be worth adding a section to the performance chapter of the Reference Guide that shows an example of how to make some requests go around the front controller but still use ZF components, e.g. a PHP file in the public directory that includes the bootstrap but does not start the MVC layer.</p>
Dec 12, 2009
John Kleijn
<p>I have two major issues with the above proposed implementation of zend\Options. Actually three. The third being that IMO you should get rid of the classes in nodes, ALL of them. Options should have its own package, options. So, zend\options\Options. Pretty pretty please.</p>
<p>I think ZF 2.0 with a minimum requirement of PHP 5.3 is a very large mistake. Take a look at all the web hosts around. Many of the web hosts (especially shared hosts) do not yet offer 5.3. DreamHost, for example, says they can not move up to it until all the 3rd party PHP modules they use are supported under 5.3, they have tested it, and then had a server upgrade.</p>
<p>I can not name a single shared web host that supports PHP 5.3.<br />
Do you expect ZF 2.0 to be used by anyone on shared web hosts?</p>
Dec 16, 2009
David Muir
<p>I don't think 5.3 will be a major issue. The next versions of Symfony and Doctrine will be 5.3 only, and they're both due out (probably) sometime late next year. Besides, the web hosts will be slow to upgrade if there's no demand for it.</p>
Dec 17, 2009
Matthew Weier O'Phinney
<p>ZF 2.0 is not going to happen overnight; we're anticipating being ready for release no earlier than late 2010. By that point, I have no doubts that 5.3.2 will already be released, if not even 5.3.3 – in other words, it will be a fairly mature product.</p>
<p>Finally, we plan to support the last minor release of the 1.X branch for a while following the 2.0 release, as we know that upgrading will not be something everyone can jump on board with immediately – in part due to hosting providers. You do not have to adopt immediately.</p>
Dec 18, 2009
John Kleijn
<p>Really, why so late? The whole host support question is completely unnecessary, exactly for the reason above: branching. If you are going to branch anyway, why not get 2.0 out there ASAP? People (like myself) are impatient for a namespaced ZF. Doctrine is releasing a first beta of their 2.0 in early January. Yes, Symfony is mentioning late 2010, but personally I do not care much about Symfony. But perhaps exactly <strong>because</strong> Symfony is being so slow about it, ZF has an opportunity to jump in that gap.</p>
<p>Overall I simply don't see why it has to take a whole year. Adapting the current codebase to use namespaces really shouldn't take a year. That's just ridiculous. As soon as you establish the guidelines any twat can do it. I realise there are (some) other changes you would like to be included in the 2.0 release, but really, those can be in 2.1 as well, who cares, it is just a number. We have been waiting in anticipation for namespaces for a very long time, and now that they're finally here, and our IDEs finally support it, you're telling me I have to wait another whole year before I will be able to use a namespaced ZF. I beg of you to reconsider. Pretty pretty (pretty) please.</p>
Dec 18, 2009
Matthew Weier O'Phinney
<p>It's more than just namespacing that we're doing. We're re-architecting a number of components in order to make them (a) easier to maintain, and (b) more performant. That work will not happen overnight. At the same time, there are a ton of issues in the tracker that need to be addressed or at least considered (to ensure we don't propagate them to the new release, or to ensure requested features make it to the new release). Additionally, since major releases are the only times we can break backwards compatibility, we want to try and address as many such breaks as possible <em>now</em> so that we won't feel hampered 6-12 months down the road and have to wait another 2 years to change things. Finally, who do you think is going to do the work, exactly? Yes, we have 3 folks at Zend, myself and two others – but we have to work not only on this, but also on maintaining the current branch, reviewing new code contributions, documentation, maintaining the site, and more. The community? Mostly volunteers, who have limited time individually to contribute. As an aggregate, yes, we can make progress – but it takes time.</p>
<p>I'm not saying that won't be a lot of work, nor claiming you have massive human resources; I just said that looking at just namespaces, it should be possible to get that released early next year. I'll admit, though, I hadn't thought of BC and the policy to only break it on major releases. And I won't deny some parts of ZF could use some BC-breaking refactoring. I guess it would have been a good idea to start a little earlier, especially considering how long 5.3 has been in development, but there's nothing to be done about that now. It's indescribably disappointing though.</p>
Dec 18, 2009
Matthew Weier O'Phinney
<p>The 5.3 API did not stabilize until 2-3 months before the stable release. Since we were planning on taking advantage of a number of new language constructs, we needed to wait until they were stable to even begin planning (namespaces, for instance, underwent a number of iterations; additionally, some features like traits looked like they might make it into 5.3, but were dropped relatively late in the 5.3 cycle).</p>
<p>Could we have started planning earlier? Potentially. But with initiatives like Zend_Tool, Zend_Application, and more, we were already overwhelmed with responsibilities.</p>
<p>Frankly, I think it's probably just as well we wait. It's always good to wait until at least the .1 version of any release series to ensure the biggest bugs are worked out, and to get a chance to see how usage patterns pan out. We're much better prepared at this point to adopt 5.3 features than we would have been had we started planning earlier.</p>
<p>What's more important is that you could have started with the BC-breaking refactorings, in general, sooner. You know, those things that you are saying prevent you from putting a namespaced release out there early next year. Not because you've known how namespaces were going to work for a long time, because you didn't, but because you knew there was going to be a PHP release with namespaces. It doesn't require a lot of imagination to have the foresight to see that <strong>a lot</strong> of your userbase can not wait to use this feature.</p>
<p>Are you telling me there was some conscious choice that favoured development of Zend_Tool and Zend_Application over preparing for 2.0? Because then you guys made a major error in judgement. In my humble opinion those additions really don't add any "much needed" functionality, but more importantly, adding them does not break BC, and thus they could have been added <strong>after</strong> 2.0.</p>
Dec 18, 2009
Luke Crouch
<p>Can we quit crying over spilled milk?</p>
Dec 18, 2009
John Kleijn
<p>We can. I wasn't planning on furthering the discussion in this direction; that was just my response to Matthew's defense of the current time frame. I realize, and mentioned, there is nothing to be done about it now (aside from breaking with the BC policy perhaps), but I disagree that this was unavoidable, as one would think by taking that reply word for word.</p>
<p>Perhaps a more interesting discussion would be what can be done to get to 2.0 as soon as possible, as in sooner than the currently projected release date of late 2010.</p>
Dec 18, 2009
Ralf Eggert
<p>I am also really waiting for ZF 2.0 to come, and I would love to get it as soon as possible. But what I don't want is for the requirement "to get it released asap" to get more weight than all the other requirements mentioned in the discussion and on this page. I'd rather wait another 12 months and get a production-ready ZF 2.0 than have to wait maybe even longer for a ZF 2.1 or even 2.2 which fixes all the problems that were left in ZF 2.0 in order to release it asap.</p>
<p>12 months isn't really a long time nowadays.</p>
Dec 19, 2009
John Kleijn
<p>I would agree, except for the "12 months is not a long time" part. 12 months is an agonizingly long time when you are waiting, and I already went through it waiting for PHP 5.3 to get to stable (on a side note, I fail to see how 12 months has gotten any shorter).</p>
<p>Honestly, I think the timing is spot on. All this talk of planning and implementing new features and refactoring for ZF 2.0 far earlier is sort of silly. PHP 5.3 is so new, and we have so many components, that until very recently there would have been a major shortage of developers with 5.3 experience. It's also completely moot - ZF 1.9.x and 1.10 work perfectly well under PHP 5.3 - nobody absolutely must have ZF 2.0 right now.</p>
<p>> The need for refactoring, the BC policy and namespaces did not drop out of the sky though. Anyway, let's drop this. Nothing to be done about it.</p>
<p>Your point? You expected us to cease all development for PHP 5.2, a version which will be supported for years to come and is everywhere, simply because PHP 5.3 might get widely adopted a year or two from now? We've known all this for a long long time - but I'd say most of us are still using PHP 5.2 in production. The will to improve ZF 1.x for our current production needs is still relevant. I won't be using 5.3 in production for a long time yet simply to let a few mini releases roll out with any loose issues resolved.</p>
Dec 19, 2009
John Kleijn
<p>> "PHP 5.3 is so new, and we have so many components, that until very recently there would have been a major shortage of developers with 5.3 experience."</p>
<p>Really, as I've mentioned before, nobody would have needed any 5.3 experience to implement the refactorings that are needed. The two (support for 5.3, namespaces in particular, and BC-breaking refactoring) are completely separate, except in that implementing namespaces in itself is a BC-breaking refactoring. I'll reiterate that I doubt I am the only one waiting for the opportunity to use the 5.3 features in production environments. If the refactorings were further progressed, this would have been a reality many months earlier.</p>
<p>However, I would like to add that I am not looking for scapegoating or anything like that. It is the way it is, and as much as you all can deny error on anyone's part or acknowledge it, it does not effectively change anything right now. The continued defense of the current state of things is more a battle of egos than anything else, IMO.</p>
<p>It's really hard to see how one could take advantage of PHP 5.3 without the necessary experience. Sure, you could hit the low-hanging fruit, but that is mere cosmetics and doesn't bring about significant advantages or improvements to the framework as a whole as the Roadmap is currently hinting at. Switching syntax a new major version does not make. The relevant refactorings depend on your view - there are dozens of components that could use improvement outside the MVC core. Without more resources, hitting all of them is very difficult.</p>
<p>>No, of course not, that is ridiculous. I would have expected to start the BC breaking refactoring for 2.0 sooner. In a separate branch.</p>
<p>I was hinting at the resource issue. Take myself - I could have a) worked on ZF 2.0 during 2009 or b) written Zend_Feed_Reader, Zend_Feed_Writer and Zend_Feed_Pubsubhubbub which I really wanted yesterday on my production platform which will run PHP 5.2 for at least another six months. I could not do both. The needs of the now will always be more pressing. We're only now hitting the point where those needs are starting to coincide with PHP 5.3.</p>
<p>It is something of an ego battle. People have worked their asses off for every single ZF minor version this year, with some really amazing results applicable right now to improving their applications. Questioning why nobody was working on ZF 2.0 misses a huge point - everything we've done IS an improvement for ZF 2.0. You've missed the indirect benefits of that. Zend_Application? Zend_Tool? Zend_Feed_XXX? Tons more. Throw in Matthew's MVC improvements (which he even has some code for) and you have a card of features that could already justify a ZF 2.0 release even without the PHP 5.3 improvements. Frankly, I'm amazed at some of the stuff we've crammed into mere minor releases. You should look at how the lines of code have increased month on month since the 1.0 release. That ZF 2.0 is an opportunity to allow PHP 5.3 as a dependency and break compatibility where needed shouldn't de-emphasize the fantastic work of the past year in making the framework better than ever with new features. I've always thought we passed the watermark for a notional ZF 2.0 months ago and we're really working towards ZF 3.0.</p>
<p>> Maybe you can explain to me how not being an absolute requirement makes it moot?</p>
<p>You could run ZF 1.10 on PHP 5.3 in production today. PHP 5.3 isn't the reason for ZF 2.0; it's merely another dependency we can leverage off. People overestimate the importance of PHP 5.3 to improving anything. If you have good code now, you'll still have good code on PHP 5.3. The reverse also holds true. An awful lot of the improvements that can be made have nothing to do with the PHP version. This is the unfortunate consequence of having ZF 2.0 and PHP 5.3 so closely linked.</p>
<p>I guess now that PHP 5.3 exists at all, anything written against PHP 5.2 is just an abomination, right? Really, I mean, those who have reviewed the code behind 1.9+ (I do this for any code I'm going to use on large projects), don't you think it's pretty good even without 5.3?</p>
<p>We'll agree to disagree. I don't use Zend_Feed nor Zend_Tool, and didn't really need Zend_Application. In contrast, I am, and I am guessing a lot of users if not most are, very eager to start using namespaces in ZF projects. From this perspective your priorities seem off. Maybe you're right though, and your userbase is more appreciative of the aforementioned components than it would have been of a quick namespaced release. In which case I stand alone and acknowledge that this discussion is moot. But I seriously doubt it.</p>
Dec 19, 2009
Pádraic Brady
<p>A quick namespaced release would never be ZF 2.0, since it's mere cosmetics if nothing deeper - if folk want namespaces that badly, there's nothing stopping them from contributing a namespaced 1.10 release. I'm sure all of us would appreciate the help. I'll agree to disagree though. I don't view namespaces as sufficient justification alone for a major release.</p>
Jan 04, 2010
Brad Gushurst
<p>All the talk on the features and specs for ZF 2.0 is great, but one thing I think was kind of forgotten about with the 1.0 release was the documentation. Now don't get me wrong, I think we have great component documentation, but one thing I think we could use more of is actual use-case tutorials. Say someone wants to get Auth/Acl/Navigation set up on their website; why should we rely on them searching through a whole lot of (outdated) blogs which are possibly using bad practices? I think before we even start to approach the release date of ZF 2.0 we need to have more Getting Started and use-case documentation written; and actually it's something I would like to help out with too.</p>
<p>I would like to see Doctrine-like functionality within Zend Framework 2.0, and possibly a Doctrine adapter. I'm not sure if the others are suggesting the inclusion of the actual Doctrine library within ZF2, but I would think that would be off-limits - just as we do not expect to see MySQL or memcached bundled in.</p>
Dec 19, 2010
Martin Keckeis
<p>Please discuss here, this page is "old"<br />
<a href="">ZF 2.0 requirements</a></p>
http://framework.zend.com/wiki/display/ZFDEV2/Zend+Framework+2.0+Roadmap?focusedCommentId=18579917
|
CC-MAIN-2014-42
|
refinedweb
| 7,049
| 63.09
|
# v-easy-components

A component and directive library based on Vue 2.0.

- Simple: every feature is delivered as a module
- Open source: anyone can push code on GitHub, as long as the code is useful
- Rich: v-easy-components provides not only components but also directives that save developers time
# Installation
```shell
yarn add v-easy-components -D
```
Please refer to the Installation chapter for more details.
# Use
```js
import Vue from 'vue';
import VEasy from 'v-easy-components';
import 'v-easy-components/lib/theme-chalk/index.css';

Vue.use(VEasy);
```
Please refer to the Quick Start section for more details.
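If you would rather not register the whole library at once, Vue 2 also lets you register a single component. The sketch below is hypothetical: the `EButton` named export is an assumption for illustration, not a documented export of v-easy-components.

```js
import Vue from 'vue';
// Hypothetical named export; check the library's build output
// for the real component names.
import { EButton } from 'v-easy-components';
import 'v-easy-components/lib/theme-chalk/index.css';

// Register only this one component globally.
Vue.component(EButton.name, EButton);
```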
If you would also like to contribute code, please go to GitHub or contact me at linkorgs@163.com.
WARNING
v-easy-components is a component and directive library based on Vue 2.0
Odoo Help
Create/Write on click one2many
Hello, I have a bit of a problem with editable one2many fields on OpenERP 6.1.
I need the system to automatically save a line to the database when the save button on that editable tree line is clicked, much like previous versions of OpenERP did.
Hi,
Due to the way the web client works (it creates a virtual id for an edited line, which is replaced by a real id only when the form is saved) and the limits of what an onchange method can return (it can't return a view or an action), this is not possible; logically, lines are saved only when the form is saved. The best workaround I've found for 6.1 is to add an Update button on the line that returns an ir.actions.act_window action, so a single click does the job (rather than two clicks, one to save the form and one to edit, when you want to save multiple lines).
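A rough sketch of that workaround on the line's model, for OpenERP 6.1; the model name and field below are hypothetical, chosen only for illustration.

```python
# Illustrative sketch for OpenERP 6.1; the model name 'my.line'
# and its field are hypothetical.
from osv import osv, fields

class my_line(osv.osv):
    _name = 'my.line'
    _columns = {
        'name': fields.char('Name', size=64),
    }

    def action_update(self, cr, uid, ids, context=None):
        # Return an act_window so one click opens this line in its own
        # form view; saving that form writes the line to the database
        # immediately, without saving the whole parent form first.
        return {
            'type': 'ir.actions.act_window',
            'res_model': 'my.line',
            'res_id': ids[0],
            'view_type': 'form',
            'view_mode': 'form',
            'target': 'new',
        }

my_line()
```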
Creating Enemy, part 4 of 10 of the Survival Shooter tutorial, in which you will create the enemy behaviour, including animation, state machines, nav-mesh and code.
Creating Enemy #1
The next phase we're going to go in to
- 00:02 - 00:06
is to create our first enemy.
- 00:06 - 00:08
We need our enemies in the game to
- 00:08 - 00:11
damage the player, we need them to follow the player,
- 00:11 - 00:13
but in this particular one we're just going to create
- 00:13 - 00:16
the first enemy and make him follow you around.
- 00:17 - 00:19
So let's get on with that.
- 00:19 - 00:21
As before with the player
- 00:21 - 00:24
in the Models folder in Characters you will
- 00:24 - 00:26
find there are some models.
- 00:27 - 00:29
One of those models
- 00:29 - 00:32
is this lovingly created zombunny,
- 00:32 - 00:35
which looks something like this.
- 00:37 - 00:39
And what we're going to do with that is to
- 00:39 - 00:41
place him directly in to the scene and then
- 00:41 - 00:43
start working with him in a similar way
- 00:43 - 00:45
that we did with the player.
- 00:45 - 00:47
As usual I'm just going to quickly hit Save
- 00:47 - 00:49
just so I don't lose any work if we get a crash
- 00:49 - 00:52
and then I'm going to
- 00:52 - 00:54
drag my scene view around
- 00:54 - 00:56
and this bit really doesn't matter where we place
- 00:56 - 00:58
him because we're going to spawn him later on
- 00:58 - 01:00
I'm going to use dragging
- 01:00 - 01:02
from the project window and you can see that it
- 01:02 - 01:04
snaps to the environment or whatever I happened
- 01:04 - 01:05
to place him on to.
- 01:05 - 01:06
I'm just going to drop him near the player
- 01:06 - 01:08
somewhere like that.
- 01:08 - 01:10
Then when we shoot this
- 01:10 - 01:12
bunny we're going to have
- 01:12 - 01:14
stuff kind of fly out of him.
- 01:14 - 01:18
We've created for you a particle system to do just that.
- 01:19 - 01:23
In the Prefabs folder you will find
- 01:23 - 01:26
a HitParticles prefab.
- 01:26 - 01:28
So like we said before prefabs are a way of storing
- 01:28 - 01:30
something that you've already setup and we did
- 01:30 - 01:32
exactly the same thing, we've created this
- 01:32 - 01:34
HitParticles, and what we need to do
- 01:34 - 01:38
is apply this to the Zombunny.
- 01:38 - 01:40
In Unity all game objects that you put in
- 01:40 - 01:42
can have a hierarchy.
- 01:43 - 01:46
The Zombunny has a child object which is its
- 01:46 - 01:48
mesh, its kind of outline,
- 01:48 - 01:50
but it also has a parent and we want to
- 01:50 - 01:52
drag drop this particle system
- 01:52 - 01:55
on so that it's attached to the parent.
- 01:55 - 01:57
So what I'm going to do is select my Zombunny
- 01:57 - 02:01
and I'm going to drag and drop HitParticles
- 02:01 - 02:03
on to the parent Zombunny
- 02:03 - 02:05
so that when we expand it we've got
- 02:05 - 02:09
Zombunny, the actual object itself and we've got HitParticles.
- 02:09 - 02:11
And basically all this is doing, if I select
- 02:11 - 02:13
my HitParticles now,
- 02:13 - 02:15
is creating this little puff of stuffing.
- 02:15 - 02:17
I can see that because I've got
- 02:17 - 02:19
that game object selected the scene view shows
- 02:19 - 02:21
me the particle effect overlay
- 02:21 - 02:23
and you can see it's creating a puff of
- 02:23 - 02:25
fluff that's coming out of him.
- 02:25 - 02:27
This particle system, we didn't want to get bogged
- 02:27 - 02:29
down too much with these various
- 02:29 - 02:31
settings but we're basically using
- 02:31 - 02:33
a texture which is applied to
- 02:33 - 02:37
this to just fire out a little emission of those.
- 02:37 - 02:39
Again we want to
- 02:39 - 02:41
detect whether this is something that we
- 02:41 - 02:43
can shoot at, so similar to how
- 02:43 - 02:47
we detected whether we can turn to face
- 02:47 - 02:49
a particular way by isolating
- 02:49 - 02:51
the floor on to a floor layer
- 02:51 - 02:55
this Zombunny is going to be on a shootable layer.
- 02:55 - 02:57
So with the Zombunny parent selected
- 02:57 - 02:59
make sure you reselect the parent,
- 02:59 - 03:01
go to Layer at the top of the inspector
- 03:01 - 03:03
and choose Shootable.
- 03:03 - 03:05
Set that to shootable and then
- 03:05 - 03:07
it's going to ask you if you would like
- 03:07 - 03:10
to change the children.
- 03:11 - 03:13
Basically all this is saying is do you want them all to be on
- 03:13 - 03:18
the same layer, this child objects, say yes, change the children.
- 03:18 - 03:20
A quick note, the environment
- 03:20 - 03:22
that you all dragged in to the scene at the start
- 03:22 - 03:24
is also on the shootable layer, so that
- 03:24 - 03:27
means that when we later give the
- 03:27 - 03:30
player a working gun you can hit the environment as well.
- 03:31 - 03:33
We've done that, and then we basically
- 03:33 - 03:35
want to setup the Zombunny in a similar way
- 03:35 - 03:37
to the player, so we want him to have physics,
- 03:37 - 03:40
we want him to have a physical presence in the world
- 03:40 - 03:42
so we're going to do some similar
- 03:42 - 03:44
things that we did before.
- 03:44 - 03:46
The first one of those we're going to do is to
- 03:46 - 03:48
add a rigidbody component.
- 03:48 - 03:50
Just to give you a different way of doing things
- 03:50 - 03:52
we've previously used the menu but
- 03:52 - 03:54
you'll notice that this is actually a search field.
- 03:54 - 03:56
So I'm just going to type in Rig and I go
- 03:56 - 03:58
straight to rigidbody, I can hit
- 03:58 - 04:00
return to add that.
- 04:00 - 04:02
I'm just going to make that expanded
- 04:02 - 04:04
so that I can apply some settings.
- 04:04 - 04:07
Again, drag an angular drag
- 04:07 - 04:09
should be set to infinity, so I'm going to
- 04:09 - 04:11
type in INF, hit return,
- 04:11 - 04:14
and it should change to capitalised Infinity.
- 04:14 - 04:16
And the constraints are exactly the same.
- 04:16 - 04:19
We want them to be able to rotate
- 04:19 - 04:22
only in the Y axis, we want them to only
- 04:22 - 04:24
be moving in X and Z
- 04:24 - 04:26
and frozen position Y.
- 04:26 - 04:28
Your constraints should look like this.
- 04:28 - 04:33
Freeze position Y, freeze rotation X and Z.
- 04:34 - 04:36
In order to actually give this physics
- 04:36 - 04:38
object a boundary we're going to put in
- 04:38 - 04:40
a capsule collider,
- 04:40 - 04:43
the outline of its physical shape.
- 04:43 - 04:46
I'm hitting Add Component one more time
- 04:46 - 04:50
and then I'm going to type in Cap for capsule collider
- 04:50 - 04:52
and the settings for it are
- 04:53 - 04:58
Centre value of Y 0.8,
- 04:58 - 05:01
and a height of 1.5.
- 05:03 - 05:06
So it should look like this.
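The tutorial applies all of these values in the Inspector; purely as an illustration of what the settings mean, the same setup could be expressed in code roughly like this (the component name is hypothetical).

```csharp
using UnityEngine;

// Illustrative sketch only: the tutorial configures these
// components in the Inspector, not in code.
public class ZombunnySetupExample : MonoBehaviour
{
    void Awake ()
    {
        Rigidbody body = gameObject.AddComponent <Rigidbody> ();
        body.drag = Mathf.Infinity;
        body.angularDrag = Mathf.Infinity;
        // Freeze position Y; freeze rotation X and Z.
        body.constraints = RigidbodyConstraints.FreezePositionY
                         | RigidbodyConstraints.FreezeRotationX
                         | RigidbodyConstraints.FreezeRotationZ;

        CapsuleCollider capsule = gameObject.AddComponent <CapsuleCollider> ();
        capsule.center = new Vector3 (0f, 0.8f, 0f);
        capsule.height = 1.5f;
    }
}
```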
- 05:07 - 05:10
We need this character to attack the player
- 05:10 - 05:12
so this bit's going to be slightly different
- 05:12 - 05:14
to how the player was setup.
- 05:14 - 05:16
The capsule collider is there to give
- 05:16 - 05:18
the character a physical presence but what
- 05:18 - 05:20
we want to do is to give him the
- 05:20 - 05:22
ability to detect the player
- 05:22 - 05:24
We're going to use one more collider to do that
- 05:24 - 05:28
and we're going to use a trigger collider for this purpose.
- 05:28 - 05:30
So we're going to Add Component one more time
- 05:30 - 05:33
and we're going to say SPH, sphere collider.
- 05:33 - 05:34
Hit return.
- 05:34 - 05:37
You can also go to Add Component - Physics
- 05:37 - 05:40
Sphere Collider, but that's just a short way of doing that.
- 05:42 - 05:45
A trigger is something that doesn't have a physical presence,
- 05:45 - 05:47
it's a collider, which means you can check
- 05:47 - 05:49
when something's intersecting it, but it doesn't have
- 05:49 - 05:52
a physical presence so we can just walk straight through it.
- 05:52 - 05:55
So we check the Is Trigger box
- 05:55 - 05:57
and then if we just have a look
- 05:57 - 06:01
back at our bunny very briefly at the same window
- 06:01 - 06:03
you can see that by default it's set at the
- 06:03 - 06:05
origin of this, which is at its feet.
- 06:05 - 06:09
So we're going to set the centre Y to 0.8
- 06:09 - 06:14
and the radius also to 0.8.
- 06:14 - 06:16
What you'll see is that if I just expand my
- 06:16 - 06:18
capsule collider again
- 06:18 - 06:23
it's slightly further out than the
- 06:23 - 06:25
capsule collider is, the reason being that
- 06:25 - 06:27
we want to detect the player within
- 06:27 - 06:29
the bunny's reach.
- 06:29 - 06:31
So we want it to bump in to stuff but we want
- 06:31 - 06:33
its reach to be able to harm the player
- 06:33 - 06:35
so we make that sphere collider slightly bigger,
- 06:35 - 06:37
we make it a trigger so that it's not actually
- 06:37 - 06:39
going to bump in to anything,
- 06:39 - 06:41
that's how we're going to detect the player.
- 06:41 - 06:43
Again with triggers
- 06:43 - 06:45
the actual interactions within a
- 06:45 - 06:47
scene are not what we use a trigger for.
- 06:47 - 06:49
More importantly a trigger, when anything collides
- 06:49 - 06:51
with a trigger or goes in to a trigger
- 06:51 - 06:53
a function is called when you have a script attached
- 06:53 - 06:55
that says 'hey, something touched that trigger'.
- 06:55 - 06:57
And that's when we write our
- 06:57 - 06:59
individualised code that'll be like
- 06:59 - 07:01
'okay, something touched that trigger'.
- 07:01 - 07:03
Its interactions in the scene view are not what
- 07:03 - 07:05
we're interested in, we're actually interested in the
- 07:05 - 07:07
behind the scenes interaction we get to write
- 07:07 - 07:09
with our own code.
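The callback being described is Unity's OnTriggerEnter; a minimal hypothetical sketch (the component and the reaction are illustrative, not part of the tutorial's scripts):

```csharp
using UnityEngine;

// Illustrative sketch of reacting to the sphere trigger.
public class TriggerExample : MonoBehaviour
{
    // Unity calls this when another collider enters this trigger collider.
    void OnTriggerEnter (Collider other)
    {
        if (other.CompareTag ("Player"))
        {
            // The player is within the Zombunny's reach: react here,
            // e.g. start dealing damage.
            Debug.Log ("Player entered the trigger");
        }
    }
}
```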
- 07:09 - 07:11
Finally the last thing we're going to do
- 07:11 - 07:13
to complete the Zombunny is just add an audio
- 07:13 - 07:15
source so I'm going to type in
- 07:15 - 07:17
Audio, I'm going to scroll down,
- 07:17 - 07:19
I can use my arrow keys in this Add Component menu
- 07:19 - 07:22
and I'm going to use return to choose Audio Source.
- 07:22 - 07:24
This thing is just to allow him
- 07:24 - 07:26
to make a sound when he gets hurt
- 07:26 - 07:30
and then we're going to make that the default sound clip as well.
- 07:30 - 07:32
As before, using circle select I can
- 07:32 - 07:35
just click and choose Zombunny Hurt,
- 07:35 - 07:37
it's the very bottom one from the list.
- 07:37 - 07:39
I can double click it
- 07:39 - 07:41
to assign and close the window.
- 07:41 - 07:44
As before we don't want it to make
- 07:44 - 07:47
the sound so we're going to uncheck play on awake.
- 07:47 - 07:49
We obviously don't want that to loop or anything
- 07:49 - 07:50
so we're going to leave that there.
- 07:50 - 07:52
Next we need to make the guy actually follow
- 07:52 - 07:54
the player so in Unity there's a
- 07:54 - 07:57
system called Nav Mesh.
- 07:57 - 08:01
In Unity Window - Navigation is what you need.
- 08:02 - 08:04
Window - Navigation.
- 08:06 - 08:08
That should automatically dock itself
- 08:08 - 08:10
next to the inspector. If it doesn't that's a great place
- 08:10 - 08:12
to put it, so drag and drop that tab
- 08:12 - 08:15
next to the inspector on the right of your interface.
- 08:15 - 08:17
We're going to reselect the Zombunny and go
- 08:17 - 08:18
back to the inspector.
- 08:18 - 08:20
The navigation panel is
- 08:20 - 08:22
there for us to setup our level.
- 08:22 - 08:24
We're going to setup our Zombunny first and then
- 08:24 - 08:26
go back and setup the level.
- 08:26 - 08:28
With our Zombunny selected
- 08:28 - 08:30
I'm just going to close
- 08:30 - 08:32
some of these by collapsing them up to give us
- 08:32 - 08:34
more space to look at.
- 08:34 - 08:38
I'm going to add something called a Nav Mesh Agent.
- 08:38 - 08:40
Like I said there's a system called Nav Mesh
- 08:40 - 08:43
which is used for a simple AI in Unity
- 08:43 - 08:45
and what it means is you can
- 08:45 - 08:47
do a process called baking, which is to
- 08:47 - 08:50
specify which parts of the level are navigable.
- 08:50 - 08:52
And then you have something called an Agent
- 08:52 - 08:54
which is something that's going to traverse that, or move
- 08:54 - 08:56
over that environment.
- 08:56 - 08:58
The Zombunny is something that's going to do that.
- 08:58 - 09:00
We are going to give it a
- 09:00 - 09:02
Nav Mesh Agent Component.
- 09:03 - 09:05
You can see that it has an outline here
- 09:05 - 09:07
similar to a collider
- 09:07 - 09:11
and we're going to set a more appropriate default of 0.3.
- 09:11 - 09:13
0.3 in radius.
- 09:13 - 09:15
We're going to set the speed to 3 by default
- 09:15 - 09:20
and stopping distance to 1.3.
- 09:21 - 09:23
We're going to set the height, which is slightly
- 09:23 - 09:26
lower down to 1.1.
- 09:27 - 09:29
Because it's easier to see these settings
- 09:29 - 09:32
on the powerpoint I'm just going to switch over to that.
- 09:32 - 09:34
So we've added a nav mesh component.
- 09:34 - 09:38
The radius is 0.3, the speed is 3,
- 09:38 - 09:41
stopping distance is 1.3
- 09:41 - 09:44
and height, a few properties down is 1.1.
- 09:44 - 09:47
It should look like what we have in Unity.
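Again these values are set in the Inspector; as a rough code equivalent, for illustration only (the component name is hypothetical):

```csharp
using UnityEngine;

// Illustrative only: the tutorial sets these values in the Inspector.
public class AgentSetupExample : MonoBehaviour
{
    void Awake ()
    {
        // In newer Unity versions NavMeshAgent lives in UnityEngine.AI.
        NavMeshAgent agent = GetComponent <NavMeshAgent> ();
        agent.radius = 0.3f;
        agent.speed = 3f;
        agent.stoppingDistance = 1.3f;
        agent.height = 1.1f;
    }
}
```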
- 09:49 - 09:51
If you recall for the player we're moving
- 09:51 - 09:53
the player using physics,
- 09:53 - 09:55
but we're not doing that with the bunnies themselves.
- 09:55 - 09:57
The nav mesh agent is actually
- 09:57 - 09:59
what is moving.
- 09:59 - 10:01
Just keep that in mind.
- 10:01 - 10:03
We're not actually going to be using forces with the bunnies,
- 10:03 - 10:05
instead the nav mesh agent is going to
- 10:05 - 10:07
pick and follow the targets
- 10:07 - 10:09
automatically while also
- 10:09 - 10:11
avoiding obstacles and things like that,
- 10:11 - 10:13
and that's what Will meant when he was talking about AI
- 10:13 - 10:14
it's AI pathfinding.
- 10:14 - 10:16
What I wanted to point out as well
- 10:16 - 10:19
and you guys don't need to do this so just watch,
- 10:19 - 10:23
the environment is already prefabbed.
- 10:23 - 10:25
We put colliders on to everything so you can see
- 10:25 - 10:28
that the player could bump in to all of these objects.
- 10:28 - 10:30
We used primitive colliders for performance reasons
- 10:30 - 10:32
to make it all work nice and neatly.
- 10:32 - 10:34
What you should also note about that is there's a
- 10:34 - 10:38
checkbox up at the top here called Static.
- 10:38 - 10:41
That's checked and that means that
- 10:41 - 10:43
Navigation Static is also checked.
- 10:43 - 10:45
So what that means is that all of the
- 10:45 - 10:47
things we've specified in here
- 10:47 - 10:49
as being a navigable space
- 10:49 - 10:51
are going to be included
- 10:51 - 10:53
when we bake the nav mesh, when we create
- 10:53 - 10:55
this navigable area.
- 10:55 - 10:57
What I want you to do now is to go to that
- 10:57 - 10:59
navigation window again and we're going to
- 10:59 - 11:02
switch over to the Bake tab,
- 11:02 - 11:04
the middle one of the three, and there we have
- 11:04 - 11:06
some settings that we're going to put in there.
- 11:06 - 11:10
The radius of the nav mesh is
- 11:10 - 11:14
approximately how near to a wall an agent can move
- 11:14 - 11:16
so we're going to have multiple types of enemy
- 11:16 - 11:19
in this so we've got some Zombunnys,
- 11:19 - 11:21
Zombears and some Helephants.
- 11:21 - 11:23
Now Helephants are a lot bigger
- 11:23 - 11:25
so we need to compromise with the
- 11:25 - 11:28
radius of the nav mesh.
- 11:28 - 11:30
Normally you'd want to have nav mesh agents
- 11:30 - 11:33
and nav meshes with the same radius
- 11:33 - 11:35
but since we've got different size agents we're going to
- 11:35 - 11:38
compromise and make the radius for it 0.75.
- 11:38 - 11:42
Likewise this height
- 11:42 - 11:44
is a compromise, we have Zombunnys
- 11:44 - 11:46
which we made 1.1 in height.
- 11:46 - 11:48
The Helephants are going to be a lot bigger
- 11:48 - 11:50
but we're going to make our height
- 11:50 - 11:51
a compromise between the two,
- 11:51 - 11:53
we're going to give it 1.2.
- 11:53 - 11:56
But normally if you've got one type of agent
- 11:56 - 11:59
and they're all the same give the radius
- 11:59 - 12:01
the same as the nav mesh agent and the height
- 12:01 - 12:03
the same as the nav mesh agent.
- 12:04 - 12:06
The next thing we're going to
- 12:06 - 12:08
change is the Step Height.
- 12:08 - 12:11
Your nav mesh agent can go up steps and
- 12:11 - 12:13
because our floor is uneven,
- 12:13 - 12:15
it's made up of floorboards that are
- 12:15 - 12:17
sticking up a bit like that, we need the nav mesh
- 12:17 - 12:19
agents to not go like 'oh I can't get over that
- 12:19 - 12:21
floorboard, I'd better go around it' so we need a
- 12:21 - 12:23
small step height to make sure that it can get up
- 12:23 - 12:25
over those bumps.
- 12:25 - 12:27
Then we're expanding the Advanced area
- 12:27 - 12:30
and we're setting the Width Inaccuracy
- 12:30 - 12:35
to just 1%, so drag that Width Inaccuracy down to 1.
- 12:35 - 12:39
The Width Inaccuracy affects how
- 12:39 - 12:41
carefully the nav mesh is baked.
- 12:41 - 12:43
A nav mesh can be baked very quickly
- 12:43 - 12:45
if you give it high inaccuracies
- 12:45 - 12:47
and it'll look quite angular and
- 12:47 - 12:49
it'll be quite approximate to your
- 12:49 - 12:51
environment but we want
- 12:51 - 12:54
it to be a very accurate nav mesh.
- 12:54 - 12:56
This is going to take a little while to bake but
- 12:56 - 12:58
we'll end up with a very nice nav mesh afterwards.
- 12:58 - 13:00
Once that's done you are hitting Bake
- 13:00 - 13:02
at the bottom of the window.
- 13:02 - 13:04
What you'll see in the scene view
- 13:04 - 13:06
once it's completed that process
- 13:06 - 13:08
is that you will see a blue overlay
- 13:08 - 13:10
on the entire level.
- 13:10 - 13:12
You will get a progress bar at the bottom,
- 13:12 - 13:14
it says 'exporting tiles' like this.
- 13:15 - 13:17
Once it's done that process you will
- 13:17 - 13:19
see an overlay in the scene view
- 13:19 - 13:21
as long as you are on the navigation window.
- 13:21 - 13:23
And it should look something like this.
- 13:23 - 13:25
So what's happening here is
- 13:25 - 13:28
nav agents and pathfinding and all that
- 13:28 - 13:29
will work with complex meshes,
- 13:29 - 13:31
however it is incredibly inefficient
- 13:31 - 13:33
and you're going to have all sorts of issues
- 13:33 - 13:35
if there's bumps and cracks and things like that
- 13:35 - 13:37
basically saying that while you can
- 13:37 - 13:39
traverse these unusually sized
- 13:39 - 13:41
and shaped meshes it's really not
- 13:41 - 13:42
worth it, right?
- 13:42 - 13:44
Generally you want your movement to be smooth
- 13:44 - 13:46
and you don't want it to be that inefficient.
- 13:46 - 13:48
When we bake a nav mesh what it's doing is
- 13:48 - 13:50
it's finding all of the static meshes
- 13:50 - 13:52
in our scene, things that are marked as
- 13:52 - 13:54
navigationally static and saying
- 13:54 - 13:56
'these aren't going to move, these are going to be
- 13:56 - 13:57
things that we can walk on'
- 13:57 - 14:00
and it's going to calculate a very simple
- 14:00 - 14:04
flat mesh and say 'it's not exact but it's
- 14:04 - 14:06
close enough and no one will ever know'
- 14:06 - 14:07
so it's much more efficient and it actually makes it
- 14:07 - 14:09
much more accurate as far as actually moving
- 14:09 - 14:12
around how you would expect things to move around.
- 14:12 - 14:14
That's what we're doing, we're on the fly
- 14:14 - 14:16
Unity is generating a new mesh
- 14:16 - 14:18
which is just a flat plane,
- 14:18 - 14:20
possibly with a slope to it,
- 14:20 - 14:22
but it's very simplified and it makes
- 14:22 - 14:26
the actual AI process of navigating very simple.
- 14:26 - 14:28
Just to make this very clear,
- 14:28 - 14:32
these things in our scene are marked as static
- 14:32 - 14:34
so if we wanted to move our blocks,
- 14:34 - 14:36
at some point we changed the design of the game
- 14:36 - 14:39
we would move them and we would rebake the nav mesh
- 14:39 - 14:42
to recalculate that traversable area.
- 14:42 - 14:45
Okay, so we need this character to
- 14:45 - 14:47
run around and chase the player
- 14:47 - 14:49
so we're going to apply animation to this.
- 14:49 - 14:51
What we want to do is select the Animation
- 14:51 - 14:53
folder in the project panel.
- 14:53 - 14:56
I'm going to click the Create button and I'm going to choose
- 14:56 - 14:58
Animator Controller.
- 14:59 - 15:01
So select the Animation folder and choose
- 15:01 - 15:03
Create - Animator Controller.
- 15:03 - 15:06
This one is going to be EnemyAC.
- 15:06 - 15:08
AC is short for Animator Controller and
- 15:08 - 15:11
then we're going to assign this to our Zombunny.
- 15:13 - 15:16
The Zombunny, if we drag and drop that asset
- 15:16 - 15:18
on to it it will assign it for us,
- 15:18 - 15:20
but just to be clear what we're actually doing
- 15:20 - 15:22
is assigning this to the controller
- 15:22 - 15:24
property of the animator.
- 15:24 - 15:26
Either drag and drop to
- 15:26 - 15:28
that component or simply drag and drop
- 15:28 - 15:30
straight on to that object to
- 15:30 - 15:32
automatically assign it for you.
- 15:32 - 15:34
So when you select the Zombunny you should have
- 15:34 - 15:37
Animator - EnemyAC as the asset.
- 15:38 - 15:42
So that is an animator controller, a state machine,
- 15:42 - 15:44
so I'm going to double click EnemyAC
- 15:44 - 15:45
to open it in the animator window.
- 15:45 - 15:47
We should have an empty new state machine
- 15:47 - 15:49
to add animations to
- 15:49 - 15:51
just like we did with the player.
- 15:51 - 15:53
As before with the player the
- 15:53 - 15:55
animation for the Zombunny is
- 15:55 - 15:57
stored within its model so I'm going to
- 15:57 - 16:00
expand the model and go to the Characters
- 16:00 - 16:01
folder in the project.
- 16:01 - 16:04
In the project panel open up Models
- 16:04 - 16:06
and select Character and then
- 16:06 - 16:09
expand the Zombunny and you will see
- 16:09 - 16:11
that and its animations.
- 16:11 - 16:14
The Zombunny has Move, Idle and Death
- 16:14 - 16:16
and they are clips that are on a
- 16:16 - 16:18
particular point on the timeline.
- 16:18 - 16:19
You can see as I go between these clips
- 16:20 - 16:22
they are all in a different part of the timeline
- 16:22 - 16:23
as you can see from here.
- 16:23 - 16:25
The way that you can do that when you bring in an FBX
- 16:25 - 16:27
model that you've animated is you can just
- 16:27 - 16:29
create new clips and tell it what
- 16:29 - 16:31
frame range you animated them on
- 16:31 - 16:33
and Unity will import those for you
- 16:33 - 16:34
as different clips.
- 16:34 - 16:36
Just like the player they appear
- 16:36 - 16:38
underneath that in the hierarchy
- 16:38 - 16:40
so you have Death, Idle and Move.
- 16:40 - 16:42
Just as before I'm going to drag them in.
- 16:42 - 16:44
I want the default state to be Moving
- 16:44 - 16:46
so I'm going to drag that one in first because
- 16:46 - 16:48
that gets made default,
- 16:48 - 16:49
default signified by orange.
- 16:49 - 16:51
Then I'm going to drag in the other two,
- 16:52 - 16:54
Idle and Death even.
- 16:54 - 16:56
I'm going to put Death next to the Any state,
- 16:56 - 16:59
and I'm going to put Idle and Move together.
- 17:00 - 17:02
Again the position of these is kind of irrelevant but
- 17:02 - 17:05
I like to have things nice and neat.
- 17:05 - 17:07
Once those are in there we've positioned
- 17:07 - 17:09
them nicely we're going to
- 17:09 - 17:11
create parameters for them.
- 17:11 - 17:13
So making sure that Move is the default,
- 17:13 - 17:16
so Set As Default with right click
- 17:16 - 17:19
if you haven't already, we're going to create parameters.
- 17:19 - 17:21
We have just two parameters for this one
- 17:22 - 17:24
and as before we're going to click the + icon
- 17:24 - 17:26
and we're going to select the type
- 17:26 - 17:28
of parameter that we want.
- 17:28 - 17:30
We want
- 17:30 - 17:33
a trigger parameter called PlayerDead.
- 17:33 - 17:35
These enemy characters are going to walk,
- 17:35 - 17:37
they're going to have their moving animation
- 17:37 - 17:39
until the player dies and they are going to go
- 17:39 - 17:40
in to their Idle animation.
- 17:40 - 17:43
Then we're going to create another trigger
- 17:43 - 17:46
which says that they themselves are dead,
- 17:46 - 17:48
which is just called Dead.
- 17:48 - 17:50
So two trigger parameters, PlayerDead
- 17:50 - 17:52
so that they can go to Idle when you get killed
- 17:52 - 17:54
and Dead themselves so that they can animate
- 17:54 - 17:56
in to their Death animation.
- 17:56 - 17:58
Then of course we need to actually control the
- 17:58 - 18:00
logic of the state machine.
- 18:01 - 18:03
We want our move to transition to Idle
- 18:03 - 18:06
so I'm going to right click Make Transition
- 18:06 - 18:08
and again I get the transition handle.
- 18:08 - 18:09
I'm going to click on Idle to assign it.
- 18:09 - 18:11
I will select
- 18:11 - 18:13
the transition by clicking on it.
- 18:13 - 18:15
Once that transition is highlighted in blue
- 18:16 - 18:18
I should have my list of transitions.
- 18:18 - 18:20
So in order to go from Move to Idle
- 18:20 - 18:22
the condition is that
- 18:22 - 18:24
the player is dead.
- 18:26 - 18:28
Then very simply in order to go from
- 18:28 - 18:31
the Any state I will make a transition,
- 18:31 - 18:32
click on Death,
- 18:33 - 18:35
select that transition.
- 18:37 - 18:39
The condition for that is that the
- 18:39 - 18:41
Dead trigger has occurred.
- 18:42 - 18:44
So if I just scrub through you see we go from
- 18:44 - 18:46
Moving in to Death.
- 18:48 - 18:51
Also notice that you can't go from Idle back to Move.
- 18:51 - 18:53
Because at that point the player is dead.
- 18:53 - 18:56
So we have no need to create a transition back.
- 18:56 - 18:58
These state machines don't work by themselves
- 18:58 - 19:00
The connection between the state machine and
- 19:00 - 19:03
animator and the actual functionality
- 19:03 - 19:05
is that the scripting needs to tell those
- 19:05 - 19:07
parameters what their value is.
- 19:07 - 19:10
We need a script to say that the
- 19:10 - 19:13
animator SetTrigger Player Dead will actually
- 19:13 - 19:17
happen otherwise the animator does nothing by itself.
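As a sketch of that connection, a script holding a reference to the Animator fires the parameters like this. The helper below is hypothetical, not one of the tutorial's scripts; only the two parameter names match the ones created above.

```csharp
using UnityEngine;

// Hypothetical helper: shows how code drives the two trigger
// parameters created on EnemyAC above.
public class EnemyAnimationExample : MonoBehaviour
{
    Animator anim;

    void Awake ()
    {
        anim = GetComponent <Animator> ();
    }

    // Fired when the player dies: the Move -> Idle transition plays.
    public void OnPlayerDeath ()
    {
        anim.SetTrigger ("PlayerDead");
    }

    // Fired when this enemy dies: the Any State -> Death transition plays.
    public void OnEnemyDeath ()
    {
        anim.SetTrigger ("Dead");
    }
}
```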
- 19:17 - 19:20
In the Scripts folder in your project
- 19:20 - 19:23
you will see a folder called Enemy.
- 19:23 - 19:26
Go to Scripts folder and open the folder Enemy.
- 19:26 - 19:28
In that you will see a script called
- 19:28 - 19:29
EnemyMovement.
- 19:29 - 19:31
We are going to drag and drop that on to
- 19:31 - 19:34
Zombunny in the hierarchy.
- 19:36 - 19:38
And then once you've assigned it either
- 19:38 - 19:41
double click EnemyMovement's icon
- 19:41 - 19:43
or go to the cog icon
- 19:43 - 19:45
and go to Edit Script, but basically we're opening
- 19:45 - 19:47
that for editing.
- 19:47 - 19:50
and when you open that for editing it should look like this.
- 19:51 - 19:53
What you'll notice here is we have a script that's
- 19:53 - 19:54
been written for you already.
- 19:54 - 19:56
Some of the functionality has been disabled,
- 19:56 - 19:58
you'll notice the double forward slash
- 19:58 - 20:00
symbols at the start of some of these lines.
- 20:00 - 20:02
That's commenting, so you can write notes
- 20:02 - 20:04
- 20:04 - 20:06
to yourself or to coworkers, whatever,
- 20:06 - 20:08
but you can also use that to turn
- 20:08 - 20:10
functional lines of code in to deactivated
- 20:10 - 20:12
lines of code, so that's what we've done here
- 20:12 - 20:14
and we're going to come back to this script at some point
- 20:14 - 20:16
and enable some of that functionality.
- 20:16 - 20:20
But for now we're just going to explain how this script functions.
- 20:20 - 20:22
This is our EnemyMovement script and at the very
- 20:22 - 20:24
beginning we have this Transform player,
- 20:24 - 20:26
and the Transform player is going to be
- 20:26 - 20:29
what the enemy is going to move towards.
- 20:29 - 20:32
You will notice the Transform player is not public.
- 20:32 - 20:34
With the camera we had that public transform
- 20:34 - 20:36
target so we could click and drag
- 20:36 - 20:38
in the editor and the camera knows what it's following.
- 20:38 - 20:40
However our enemies are
- 20:40 - 20:43
not going to be in the game when the game starts.
- 20:43 - 20:45
Our enemies are actually going to be what's called
- 20:45 - 20:47
instantiated or spawned
- 20:47 - 20:49
later so whole hordes of them can appear.
- 20:49 - 20:51
As such we can't just click and drag
- 20:51 - 20:53
the player on to them.
- 20:53 - 20:55
So what's going to happen is they're going to have to find
- 20:55 - 20:57
the player on their own, so for now
- 20:57 - 20:58
that's just Transform player, not public.
- 20:58 - 21:01
We also have a reference to a nav mesh agent,
- 21:01 - 21:03
which is the component we put
- 21:03 - 21:06
on the Zombunny previously, this is just our
- 21:06 - 21:08
reference to it which we've called Nav.
- 21:08 - 21:10
Now down in the Awake method here
- 21:10 - 21:13
we are finding the player.
- 21:13 - 21:16
So we have Player, which is that Transform before,
- 21:16 - 21:23
player = GameObject.FindGameObjectWithTag.
- 21:23 - 21:25
So if you remember previously when we were
- 21:25 - 21:27
setting up the player we
- 21:27 - 21:29
gave the player the tag Player.
- 21:29 - 21:31
So now that we've set the player with the tag
- 21:31 - 21:33
Player, we can find
- 21:33 - 21:34
that player using this function.
- 21:34 - 21:36
So basically what it's going to do is go through all of the game
- 21:36 - 21:40
objects in our scene, it's going to say 'hey do you have that tag?'
- 21:40 - 21:43
until it finds the one and it gives it back.
- 21:43 - 21:45
And then what we're going to do is say
- 21:45 - 21:47
.Transform, which is going to give us
- 21:47 - 21:49
a reference to the transform,
- 21:49 - 21:51
basically where the player is.
- 21:51 - 21:53
So we're storing that in our variable named player.
- 21:53 - 21:55
Next we have a variable named nav
- 21:55 - 21:57
and we're just setting that equal to GetComponent
- 21:57 - 21:59
which is basically going to
- 21:59 - 22:03
pull a reference to the component we have in the editor.
- 22:03 - 22:05
At this point we're just setting up.
- 22:05 - 22:07
Then down in our update function
- 22:09 - 22:12
you'll notice we're using Update, not FixedUpdate.
- 22:12 - 22:14
As this is a nav mesh agent it's not
- 22:14 - 22:16
keeping in time with physics
- 22:16 - 22:18
so we're just going to use the regular update
- 22:18 - 22:20
and inside there we are going to say
- 22:20 - 22:23
nav.SetDestination (player.position); and that's it.
- 22:23 - 22:25
So nav being the nav mesh agent we're saying
- 22:25 - 22:28
'hey, that's where I want to go'.
- 22:28 - 22:29
'I want to go towards the player'
- 22:29 - 22:32
and that's it, the nav mesh agent goes
- 22:32 - 22:33
'cool, I'm going to head towards the player'.
- 22:33 - 22:35
And the nice thing about doing it this way is that
- 22:35 - 22:37
the enemies aren't going to bump in to each other,
- 22:37 - 22:39
they're not going to crop in and out of each other,
- 22:39 - 22:41
they're going to move around all the creepy
- 22:41 - 22:43
baby arms and Legos and
- 22:43 - 22:45
weirdness that we have in the scene there and
- 22:45 - 22:47
they're going to viciously and unerringly
- 22:47 - 22:50
attack you with extreme prejudice and violence.
- 22:50 - 22:52
The nav mesh agent makes it very easy to do these things.
- 22:52 - 22:54
And that's the whole script.
- 22:54 - 22:56
We'll revisit this script later when we want to
- 22:56 - 22:58
uncomment it and re-add that
- 22:58 - 23:01
functionality that you see is currently greyed out.
- 23:01 - 23:03
We could switch back to the Unity editor
- 23:03 - 23:05
right now, we don't need to save anything because
- 23:05 - 23:07
we haven't actually edited it,
- 23:07 - 23:10
but I do want to make sure you have applied it to the Zombunny.
- 23:10 - 23:12
So the Zombunny in the hierarchy should
- 23:12 - 23:15
have the enemy movement script attached to it.
- 23:17 - 23:19
I'm going to save my scene
- 23:19 - 23:21
and I'm going to press Play to test
- 23:21 - 23:23
my game. When you press Play it now
- 23:23 - 23:25
follows the player.
- 23:25 - 23:27
We don't have any coding for attacking
- 23:27 - 23:29
yet or anything like that
- 23:29 - 23:31
so it's not going to do any harm to you,
- 23:31 - 23:34
you can walk around and push him around.
- 23:40 - 23:43
That is the end of phase 4.
EnemyMovement
Code snippet
using UnityEngine;
using System.Collections;

public class EnemyMovement : MonoBehaviour
{
    Transform player;               // Reference to the player's position.
    PlayerHealth playerHealth;      // Reference to the player's health.
    EnemyHealth enemyHealth;        // Reference to this enemy's health.
    NavMeshAgent nav;               // Reference to the nav mesh agent.

    void Awake ()
    {
        // Set up the references.
        player = GameObject.FindGameObjectWithTag ("Player").transform;
        playerHealth = player.GetComponent <PlayerHealth> ();
        enemyHealth = GetComponent <EnemyHealth> ();
        nav = GetComponent <NavMeshAgent> ();
    }

    void Update ()
    {
        // If the enemy and the player have health left...
        if(enemyHealth.currentHealth > 0 && playerHealth.currentHealth > 0)
        {
            // ... set the destination of the nav mesh agent to the player.
            nav.SetDestination (player.position);
        }
        // Otherwise...
        else
        {
            // ... disable the nav mesh agent.
            nav.enabled = false;
        }
    }
}
private var player : Transform;          // Reference to the player's position.
private var playerHealth : PlayerHealth; // Reference to the player's health.
private var enemyHealth : EnemyHealth;   // Reference to this enemy's health.
private var nav : NavMeshAgent;          // Reference to the nav mesh agent.

function Awake ()
{
    // Set up the references.
    player = GameObject.FindGameObjectWithTag ("Player").transform;
    playerHealth = player.GetComponent (PlayerHealth);
    enemyHealth = GetComponent (EnemyHealth);
    nav = GetComponent (NavMeshAgent);
}

function Update ()
{
    // If the enemy and the player have health left...
    if(enemyHealth.currentHealth > 0 && playerHealth.currentHealth > 0)
    {
        // ... set the destination of the nav mesh agent to the player.
        nav.SetDestination (player.position);
    }
    // Otherwise...
    else
    {
        // ... disable the nav mesh agent.
        nav.enabled = false;
    }
}
Related Tutorials
- Nav Meshes (Lesson)
- Navigation Overview (Lesson)
- NavMesh Baking (Lesson)
- NavMesh Obstacles (Lesson)
- The NavMesh Agent (Lesson)
Source: https://unity3d.com/kr/learn/tutorials/projects/survival-shooter/enemy-one?playlist=17144
fread
Binary stream input/output
Description
The function fread reads nmemb objects, each size bytes long, from the stream pointed to by stream, storing them at the location given by ptr. The function fwrite writes nmemb objects, each size bytes long, to the stream pointed to by stream, obtaining them from the location given by ptr.
The following code attempts to read 10 bytes at once from the text file "fred.txt", stores them into buffer, and finally displays this string.
Example - Binary stream input/output
#include <stdio.h>

int main()
{
    FILE *f;
    char buffer[11];
    if ((f = fopen("fred.txt", "rt")) != NULL)
    {
        fread(buffer, 1, 10, f);
        buffer[10] = 0;
        fclose(f);
        printf("first 10 characters of the file:\n%s\n", buffer);
    }
    return 0;
}

Returns
The functions fread and fwrite return the number of items successfully read or written (not the number of characters). If an error occurs, or the end of file is reached, the return value is a short item count (or zero). fread does not distinguish between end-of-file and error; the caller may use feof and ferror to determine which occurred. The function fwrite returns a value less than nmemb only if a write error has occurred.
Source: http://www.codecogs.com/library/computing/c/stdio.h/fread.php?alias=fread
Whether you're working locally or on the cloud, many machine learning engineers don't have experience actually deploying their models so that they can be used on a global scale. In this tutorial we'll see how you can take your work and give it an audience by deploying your projects on the web. We'll start by creating a simple model which recognizes handwritten digits. Then we'll see step-by-step how to create an interface for deploying it on the web using Flask, a micro web framework written in Python.
Quickly Building a Model: CNN with MNIST
Before we dive into deploying models to production, let's begin by creating a simple model which we can save and deploy. If you've already built your own model, feel free to skip below to Saving Trained Models with h5py or Creating a Flask App for Serving the Model. For our purposes we'll start with a simple use case of creating a deep learning model using the MNIST dataset to recognize handwritten digits. This will give us a glance at how to define network architectures from scratch, then train, evaluate, and save them for deployment.
A convolutional neural network (CNN) is used for the task of handwriting recognition, as well as most image recognition tasks. The image is first sent through different convolutional layers, where the features are extracted and identified by the neurons. Whenever the network encounters a pattern in the test set which has features similar to the ones it learned in training, it will classify that image to the corresponding output label.
Let's now implement the algorithm using the Keras deep learning framework in 8 simple steps.
Step 1: Importing Necessary Modules and Layers
We always begin by importing all the modules and functions we'll use. This neural network is implemented in Keras (this comes pre-installed on Paperspace, but if you're running this locally you can always install Keras from your command line with
pip install Keras). Next, we import the model and layers which we will use for building the neural network architecture, which in this case is a CNN.
# imports
import keras
from keras.datasets import mnist
from keras.models import Sequential
from keras.layers import Dense, Dropout, Flatten
from keras.layers import Conv2D, MaxPooling2D
from keras import backend as K
Step 2: Defining Hyperparameters
Choosing the hyperparameters for your network can be a challenging task. Without going into too much theory or testing many different values, here we use standard values for the batch size (which defines the number of training samples to work through before updating the model weights) and number of epochs (full presentations of the data in the training set for learning). There are 10 classes since we're considering the digits 0-9.
# Hyperparameters
num_classes = 10
batch_size = 128
epochs = 12
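As a quick sanity check on these values: the MNIST training set contains 60,000 images, so a batch size of 128 means roughly 469 weight updates per epoch. The snippet below is purely illustrative arithmetic (the sample count is a known property of MNIST, not something computed by Keras here):

```python
import math

num_train_samples = 60000   # size of the MNIST training set
batch_size = 128
epochs = 12

# Each epoch walks the whole training set once, in batches of 128.
updates_per_epoch = math.ceil(num_train_samples / batch_size)
total_updates = updates_per_epoch * epochs

print(updates_per_epoch)  # 469 batches per epoch
print(total_updates)      # 5628 weight updates over 12 epochs
```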
Step 3: Loading the Images
The next step is to load our data set and set constant image sizes for our training process. The image sizes are fixed at (28 x 28), as the network input parameters are always constant (you can't train your network with different dimensions). We simply load our MNIST dataset with the load_data method on the mnist module which was imported in Step 1.
# Image Resolution
img_rows, img_cols = 28, 28

# Loading the data.
(x_train, y_train), (x_test, y_test) = mnist.load_data()
Step 4: Data Pre-Processing
In this step we need to make sure that the training data is pre-processed into a uniform shape; if your inputs are of different sizes, the performance of your network will suffer. We use a simple reshape method on every image and iterate it over the complete data set. Next, we assign the respective label to every image for the training process; in this case, we use the to_categorical method to assign a label to every image.
# Preparing the data
if K.image_data_format() == 'channels_first':
    x_train = x_train.reshape(x_train.shape[0], 1, img_rows, img_cols)
    x_test = x_test.reshape(x_test.shape[0], 1, img_rows, img_cols)
    input_shape = (1, img_rows, img_cols)
else:
    x_train = x_train.reshape(x_train.shape[0], img_rows, img_cols, 1)
    x_test = x_test.reshape(x_test.shape[0], img_rows, img_cols, 1)
    input_shape = (img_rows, img_cols, 1)

x_train = x_train.astype('float32')
x_test = x_test.astype('float32')
x_train /= 255
x_test /= 255
print('x_train shape:', x_train.shape)
print(x_train.shape[0], 'train samples')
print(x_test.shape[0], 'test samples')

y_train = keras.utils.to_categorical(y_train, num_classes)
y_test = keras.utils.to_categorical(y_test, num_classes)
Step 5: Defining the Architecture
With the Keras framework we can easily declare a model by sequentially adding the layers. We use the
add() method for this.
# Creating the Model
model = Sequential()
model.add(Conv2D(32, kernel_size=(3, 3), activation='relu', input_shape=input_shape))
model.add(Conv2D(64, (3, 3), activation='relu'))
model.add(MaxPooling2D(pool_size=(2, 2)))
model.add(Dropout(0.25))
model.add(Flatten())
model.add(Dense(128, activation='relu'))
model.add(Dropout(0.5))
model.add(Dense(num_classes, activation='softmax'))

model.compile(loss=keras.losses.categorical_crossentropy,
              optimizer=keras.optimizers.Adadelta(),
              metrics=['accuracy'])
Step 6: The Training Loop
Next we fit the model with the declared hyperparameters and initiate the training process. This can be simply done by using the
model.fit() method and passing the parameters.
# Training the Model
model.fit(x_train, y_train,
          batch_size=batch_size,
          epochs=epochs,
          verbose=1,
          validation_data=(x_test, y_test))
Step 7: Evaluating the Model
# Evaluating the Predictions on the Model
score = model.evaluate(x_test, y_test, verbose=0)
print('Test loss:', score[0])
print('Test accuracy:', score[1])
Step 8: Saving the Model
# Saving the model for Future Inferences
model_json = model.to_json()
with open("model.json", "w") as json_file:
    json_file.write(model_json)

# serialize weights to HDF5
model.save_weights("model.h5")
Upon running this program and successful training, you will find two files in the same directory:
- model.json
- model.h5
The model.h5 file is a binary file which holds the weights. The file model.json is the architecture of the model that you just built.
Saving Trained Models With h5py
The HDF5 library lets users store huge amounts of numerical data, and easily manipulate that data with NumPy. For example, you can slice into multi-terabyte data sets stored on disk as if they were real NumPy arrays. Thousands of data sets can be stored in a single file, categorized and tagged however you want.
The
save_weights method is added above in order to save the weights learned by the network using h5py. The h5py package is a Pythonic interface to the HDF5 binary data format.
Now that we have saved our model in HDF5 format we can load the weights whenever we want, and apply it to future tasks. To load the weights we'll also need to have the corresponding model architecture defined. Let's do this from a JSON file we previously used. Once the model is prepared with the trained weights, we're ready to use it for inference.
# imports
from keras.models import model_from_json

# open the file and store its contents in a variable
json_file = open('model.json', 'r')
loaded_model_json = json_file.read()
json_file.close()

# use Keras model_from_json to make a loaded model
loaded_model = model_from_json(loaded_model_json)

# load weights into new model
loaded_model.load_weights("model.h5")
print("Loaded Model from disk")

# compile and evaluate loaded model
loaded_model.compile(loss='categorical_crossentropy', optimizer='adam', metrics=['accuracy'])
Now that we have the model saved along with the weights learned from training, we can use them to do inference on new data. This is how we make our trained models reusable.
Creating a Flask App for Serving the Model
To serve the saved model we'll use Flask, a micro web framework written in Python (it's referred to as a "micro" framework because it doesn't require particular tools or libraries).
To create our web app that recognizes different handwritten digits, we need two routes in our Flask app:
- An index page route for the users drawing the image
- A predict route to make inferences from our saved model
These are defined below.
from flask import Flask, render_template, request

app = Flask(__name__)

@app.route('/')
def index_view():
    return render_template('index.html')

@app.route('/predict/', methods=['GET', 'POST'])
def predict():
    response = "For ML Prediction"
    return response

if __name__ == '__main__':
    app.run(debug=True, port=8000)
Now, let's go ahead and implement our complete app.py. The predict function should take an image drawn by users and send it to the model. In our case, the image is a NumPy array containing the pixel intensities.
from flask import Flask, render_template, request
from scipy.misc import imsave, imread, imresize
import numpy as np
import keras.models
import re
import sys
import os
import base64

sys.path.append(os.path.abspath("./model"))
from load import *

global graph, model
model, graph = init()

app = Flask(__name__)

@app.route('/')
def index_view():
    return render_template('index.html')

def convertImage(imgData1):
    imgstr = re.search(b'base64,(.*)', imgData1).group(1)
    with open('output.png', 'wb') as output:
        output.write(base64.b64decode(imgstr))

@app.route('/predict/', methods=['GET', 'POST'])
def predict():
    imgData = request.get_data()
    convertImage(imgData)
    x = imread('output.png', mode='L')
    x = np.invert(x)
    x = imresize(x, (28, 28))
    x = x.reshape(1, 28, 28, 1)
    with graph.as_default():
        out = model.predict(x)
        print(out)
        print(np.argmax(out, axis=1))
        response = np.array_str(np.argmax(out, axis=1))
        return response

if __name__ == '__main__':
    app.run(debug=True, port=8000)
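The convertImage helper pulls the base64 payload out of the data URL that the canvas posts. In isolation, that decoding step works as shown below (the payload bytes here are made up for illustration; in the real app they are a PNG byte stream):

```python
import base64
import re

# A made-up data URL of the kind the canvas sends via an AJAX POST.
img_data = b"data:image/png;base64,aGVsbG8="

# Same regex as convertImage: grab everything after "base64,".
imgstr = re.search(b"base64,(.*)", img_data).group(1)
decoded = base64.b64decode(imgstr)

print(decoded)  # b'hello' -- in the real app, these bytes are written to output.png
```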
Here we have the loader function, load.py:
import numpy as np
import keras.models
from keras.models import model_from_json
from scipy.misc import imread, imresize, imshow
import tensorflow as tf

def init():
    json_file = open('model.json', 'r')
    loaded_model_json = json_file.read()
    json_file.close()
    loaded_model = model_from_json(loaded_model_json)
    # load weights into new model
    loaded_model.load_weights("model.h5")
    print("Loaded Model from disk")
    # compile and evaluate loaded model
    loaded_model.compile(loss='categorical_crossentropy', optimizer='adam', metrics=['accuracy'])
    #loss, accuracy = model.evaluate(X_test, y_test)
    #print('loss:', loss)
    #print('accuracy:', accuracy)
    graph = tf.get_default_graph()
    return loaded_model, graph
Before we dive into the last step of deploying into the cloud, let's create an interface which enables users to draw images from the browser. We'll use JavaScript and render a canvas on the HTML page. Below is the JavaScript snippet for rendering a Canvas for drawing.
(function() {
    var canvas = document.querySelector("#canvas");
    var context = canvas.getContext("2d");
    canvas.width = 280;
    canvas.height = 280;

    var Mouse = { x: 0, y: 0 };
    var lastMouse = { x: 0, y: 0 };

    context.fillStyle = "white";
    context.fillRect(0, 0, canvas.width, canvas.height);
    context.color = "black";
    context.lineWidth = 6;
    context.lineJoin = context.lineCap = 'round';

    debug();

    canvas.addEventListener("mousemove", function(e) {
        lastMouse.x = Mouse.x;
        lastMouse.y = Mouse.y;
        Mouse.x = e.pageX - this.offsetLeft;
        Mouse.y = e.pageY - this.offsetTop;
    }, false);

    canvas.addEventListener("mousedown", function(e) {
        canvas.addEventListener("mousemove", onPaint, false);
    }, false);

    canvas.addEventListener("mouseup", function() {
        canvas.removeEventListener("mousemove", onPaint, false);
    }, false);

    var onPaint = function() {
        context.lineWidth = context.lineWidth;
        context.lineJoin = "round";
        context.lineCap = "round";
        context.strokeStyle = context.color;

        context.beginPath();
        context.moveTo(lastMouse.x, lastMouse.y);
        context.lineTo(Mouse.x, Mouse.y);
        context.closePath();
        context.stroke();
    };

    function debug() {
        /* CLEAR BUTTON */
        var clearButton = $("#clearButton");
        clearButton.on("click", function() {
            context.clearRect(0, 0, 280, 280);
            context.fillStyle = "white";
            context.fillRect(0, 0, canvas.width, canvas.height);
        });

        $("#colors").change(function() {
            var color = $("#colors").val();
            context.color = color;
        });

        $("#lineWidth").change(function() {
            context.lineWidth = $(this).val();
        });
    }
}());
Once you're done using this snippet in your HTML, by the end of this tutorial your directory structure should look like this:
ml-in-prod/
├── app.py
├── Procfile
├── requirements.txt
├── runtime.txt
├── model/
│ ├── model.json
│ ├── model.h5
│ └── load.py
├── templates/
│ ├── index.html
│ └── draw.html
└── static/
├── index.js
└── style.css
There you go! Your application is up and running. In the next tutorial we'll see how to deploy it on Paperspace cloud GPUs to make the app more powerful, reliable and accessible.
Source: https://blog.paperspace.com/deploying-deep-learning-models-flask-web-python/
Web development for storing data
Chapter 1 Introduction
XML is important to know, especially in web development, for storing and transporting data. Many people think that XML and HTML are the same, but they are not: XML is for storing and transporting data, while HTML is used to display data. In this study, I will do research on XML and provide some useful information about it.
In this seminar, I will first explain the usage of XML. Then I will briefly explain the history of XML, for example the sources and versions of XML. Besides that, I will also cover the key concepts and components of XML, such as tags, elements, attributes and the XML declaration. After that, I will explain how XML works, how to develop XML, XML schemas and some error handling in XML. In this part, I will provide some example code for better understanding.
Besides that, I will also compare different XML technologies, for example LINQ to XML against other XML technologies. I will also explain the advantages and disadvantages of XML, and do research on the importance of XML, for example XML for web services, XML as data, and the future usage of XML.
Chapter 2 Definition
XML stands for Extensible Markup Language, used for the description and delivery of marked-up electronic text over the web. XML was designed to transport and store data. XML is not a replacement for HTML, because the two were designed for different purposes. XML is designed to carry and store data; it focuses on what the data is, not on how to display it. HTML is designed to display data; it focuses on how the data looks. So the main difference between them is that XML is about carrying data and HTML is about displaying data.
The history of Extensible Markup Language (XML) begins with the development of Standard Generalized Markup Language (SGML) by Charles Goldfarb, along with Ed Mosher and Ray Lorie, in the 1970s while working at IBM. SGML was used to create vocabularies which could mark up documents with structural tags. Hyper Text Markup Language (HTML) remains one of the most widely used applications of SGML. However, HTML is a presentation technology and is unsuitable for carrying and storing data, so XML was created to support platform- and architecture-independent data interchange.
2.1 History of XML
XML was compiled by an eleven-member working group and a 150-member interest group. "A record of design decisions and their rationales was compiled by Michael Sperberg-McQueen on December 4, 1997." (Viewed on October 2009, Available from). Design work on XML continued in 1997, and the W3C recommended XML 1.0 on February 10, 1998.
Actually, XML comes from SGML. The differences are that the document character set is Unicode, and XML has a fixed delimiter set. "Other sources include the Text Encoding Initiative (TEI), HTML, and Extended Reference Concrete Syntax (ERCS). The first version of XML was defined in 1998." (Viewed on October 2009, Available from)
2.2 XML Specification
I did some research and found official XML specifications provided by Stylus Studio that can help us gain more knowledge about XML. These are some examples of XML specifications that can help in further study of XML. (Viewed on October 2009, Available from).
- XML
- XML Namespaces
- XSLT
- XPath
XML is a simple, very flexible text format derived from SGML. XML is also playing an increasingly important role in the exchange of a wide variety of data on the Web and elsewhere. XML 1.1 updates XML so that it no longer depends on the specific Unicode version. It also adds checking of normalization, and follows the Unicode line ending rules more closely.
XML namespaces provide a way to distinguish between elements that use the same local name but are in fact different, and so avoid name clashes. Tag names within a namespace must be unique.
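As a quick illustration (using Python's standard xml.etree.ElementTree module; the namespace URIs are made-up examples), two elements with the same local name table can coexist because each lives in a different namespace:

```python
import xml.etree.ElementTree as ET

doc = """<root xmlns:h="http://example.org/html"
              xmlns:f="http://example.org/furniture">
  <h:table/>
  <f:table/>
</root>"""

root = ET.fromstring(doc)
tags = [child.tag for child in root]

# ElementTree expands each prefix to its namespace URI, so the two
# <table> elements do not clash:
print(tags)
# ['{http://example.org/html}table', '{http://example.org/furniture}table']
```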
XSLT is used to transform XML documents into other XML documents, for example into HTML for display purposes. Besides that, XSL includes an XML vocabulary for specifying the format of data. The transformation is achieved by associating patterns with templates.
XPath is a syntax for addressing parts of an XML document. The main purpose of XPath is to use path expressions to find specific pieces of information in XML documents. It is a major element of XSLT.
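For instance, Python's standard ElementTree module supports a small subset of XPath; here a path expression picks out every title in a made-up document (the book titles are illustrative only):

```python
import xml.etree.ElementTree as ET

doc = """<library>
  <book><title>XML in a Nutshell</title></book>
  <book><title>Learning XSLT</title></book>
</library>"""

root = ET.fromstring(doc)

# The path expression ./book/title selects each title element under a book.
titles = [t.text for t in root.findall("./book/title")]
print(titles)  # ['XML in a Nutshell', 'Learning XSLT']
```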
Chapter 3 XML key concepts and components
- (Unicode) Character
- Processor
- Application
- Markup and Content
- Tag
- Element
- Attribute
- Declaration
XML is able to accept most legal Unicode characters. Almost all XML documents are created from character strings.
The XML processor takes the XML document and the DTD and processes the information so that it can be used by an application requesting it. The processor is a software module that reads the XML document to find out its structure and content; the structure and content can be derived by the processor because XML documents contain self-explanatory data. The processor is a bridge between the XML document and the application we are using, and usually works in the service of the application.
XML applications are software that use XML technologies, for example XPath, XSLT, XML Schema, etc., to process and modify data. These are some examples of XML applications:
a. Web Collections
Web Collections are a meta-data syntax, usually used for scheduling, content labeling, distributed authoring, and so on.
b. E-business applications
XML is also very useful in e-business applications; for example, XML can be used for information interchange, business-to-consumer transactions, and business-to-business transactions.
c. Simple Object Access Protocol (SOAP)
A protocol that is object based and used for information exchange in a decentralized and distributed environment.
XML documents are created from characters that divide into two types, markup and content. All markup either begins with the character "<" and ends with ">", or begins with the character "&" and ends with ";". Text that is not markup is content.
An XML tag marks the beginning or ending of an element. A tag is a markup construct that opens with "<" and closes with ">".
Actually, elements are used to describe the data in an XML document so that the data becomes easy to understand. An element holds data, other elements, or nothing at all. The characters between the opening and closing tags are the element's content, which can also contain markup, including other elements, which are named child elements.
Attributes are used to specify additional information about an element. An attribute of an element usually appears within the opening tag.
XML software uses the declaration to determine how to deal with the subsequent XML content. XML documents may begin with a prologue containing the declaration and some information about themselves.
The first and second lines are the XML declaration. There are four elements in this example, which are book, img, title and author. Besides that, img have two attributes which are src and alt. The root element is book. (Viewed on October 2009, Available from)
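A document matching that description might look like the sketch below (the title, author and attribute values are assumed for illustration; only the structure is taken from the text). Parsing it with Python's standard xml.etree.ElementTree module shows the elements and attributes:

```python
import xml.etree.ElementTree as ET

# Reconstructed sketch: the declaration spans the first two lines,
# book is the root element, and img carries the src and alt attributes.
doc = b"""<?xml version="1.0"
encoding="UTF-8"?>
<book>
  <title>An Example Book</title>
  <author>Jane Doe</author>
  <img src="cover.png" alt="Cover image"/>
</book>"""

root = ET.fromstring(doc)
print(root.tag)                       # book
print([child.tag for child in root])  # ['title', 'author', 'img']
print(root.find("img").attrib)        # {'src': 'cover.png', 'alt': 'Cover image'}
```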
Chapter 4 Uses and how XML works
The uses and design goals of XML are:
- XML was designed to be usable over the Internet.
- XML was designed to structure, transport and store data.(exchange information between computers)
- XML shall support a wide variety of applications.
- XML documents should be easy to understand and reasonably clear.
- The design of XML shall be formal and concise.
- XML documents shall be easy to create.
4.1 How XML works
XML is very useful in web development; it makes tasks like sharing and storing data easier to perform. (Viewed on October 2009, Available from)
- Separates the data from HTML
- Makes data easy for sharing
- Makes data easy for transport
- Can easily support any platform
- Makes data more available
If we want to display dynamic data in HTML, we would have to modify the HTML each time the data changes. XML can store the data in separate XML files, so we don't need to change the HTML, and the HTML can concentrate on its part, which is the layout and display.
XML provides a software- and hardware-independent way of storing data because the data is stored in plain text format. So XML data is easy to create and can be used and shared easily by different applications.
With XML, data can be easily exchanged between two different systems. XML data can be read by many different incompatible applications, so XML really make the data easier to exchange between two different systems.
Upgrading our systems can be very time-consuming, because large amounts of data need to be converted and data might be lost. XML data is in plain text format, so this makes it easier to move to new environments such as new operating systems, new applications, new browsers, and so on.
XML data can be more available and useful because it is application-independent. XML allows different applications to access the data, and XML data is available to all kinds of reading machines, for example computers, voice machines, cell phones, and so on.
4.2 Schemas and Validation
Technically, a schema is an abstract collection of metadata, consisting of a set of schema components: chiefly element and attribute declarations and complex and simple type definitions.
XML processors are classified as validating or non-validating depending on whether they check XML documents for validity. We can check an XML document to see whether it is valid or not. A valid XML document must first be well-formed. The well-formedness requirements are simple; making an XML document leap from well-formed to valid is considerably harder.
To be valid, the XML document must be validated. A document cannot be validated unless a Document Type Definition (DTD), internal or external, is referenced for the XML processor. For an XML document to be valid, it must follow all the rules in the DTD. (Viewed on November 2009, Available from)
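Well-formedness can be checked with any XML parser. Python's standard library, for example, rejects a document whose tags do not nest properly (note that checking validity against a DTD would need a validating parser, which the standard library does not provide):

```python
import xml.etree.ElementTree as ET

def is_well_formed(text):
    """Return True if the document parses, False otherwise."""
    try:
        ET.fromstring(text)
        return True
    except ET.ParseError:
        return False

print(is_well_formed("<a><b>ok</b></a>"))  # True
print(is_well_formed("<a><b>broken</a>"))  # False: <b> is never closed
```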
DTD (Document Type Definition)
DTD is an example of a schema; it is the oldest schema language for XML. DTDs do not support the newer features of XML and lack expressiveness and readability; for example, DTDs only support simple datatypes. Although DTD is the oldest schema language, it is still used in many applications.
XML Schema
XML Schema is one of several XML schema languages. It was the first separate schema language for XML to achieve Recommendation status from the W3C. XML Schema is a newer schema language and the successor of DTD; XML Schema Definitions (XSDs) can describe an XML language better than DTDs.
4.3 XML syntax and error handling
- The syntax rules of XML are easy to learn because they are logical and simple.
Chapter 5 Comparisons of different XML
Many people think that XML documents and HTML documents are the same, but they are not. Although XML is a markup language much like HTML, XML is not a replacement for HTML. The two were designed for different purposes, so we have to know the differences between them.
5.1 LINQ to XML
XML has been widely used to design data in many contexts. For example, XML is used on the web, in configuration files, in Microsoft Office Word files, and in relational databases.
LINQ to XML provides an in-memory XML programming interface that lets users work with XML using the .NET Framework programming languages. LINQ to XML uses the latest .NET Framework language capabilities and is similar to an updated Document Object Model (DOM) XML programming interface. The LINQ family of technologies provides a consistent query experience for objects (LINQ to Objects), relational databases (LINQ to SQL), and XML (LINQ to XML).
It is almost the same as the Document Object Model (DOM) in that it brings the XML document into memory. We can modify the document and save it to a file, or serialize it and send it over the Internet. Actually, LINQ to XML differs from the DOM in that it provides a new object model that is easier to work with and takes advantage of language improvements. (Viewed on November 2009, Available from)
The advantage of LINQ to XML:
- Its integration with Language-Integrated Query (LINQ) enables us to write queries on the XML document to retrieve collections such as elements and attributes. (most important)
- The ability to use queries as parameters to XElement and XAttribute object constructors enables functional construction of XML trees. Besides that, it also helps developers transform XML trees from one shape to another more easily.
5.2 Comparisons of LINQ to XML with other XML Technologies
In this part, I will compare LINQ to XML with other XML technologies such as XSLT and XmlReader.
5.3 Advantages of Disadvantages of XML
Advantages of XML:
- It is human-readable and machine-readable format.
- XML is an extensible language, which means we can define our own tags, or use tags created by other people.
- Searching the data in XML document is easy and efficient.
- It is a self-documenting format; it can describe structure and field names as well as specific values.
- XML can work on any platform.
- XML is system independent so when the data is being exchanged between two systems by using XML, data will not be lost.
- XML is fully compatible with applications such as Java, and it can be combined with other applications.
(Viewed on November 2009, Available from)
XML is good but there also will have some drawbacks.
Disadvantages of XML:
- The biggest drawback of XML is the lack of application processing. There are no browsers yet which can support XML, so if it needs to be read by a browser, XML still has to depend on HTML; XML has to be converted to HTML before being deployed.
- XML is a verbose language, and its quality depends entirely on who is writing it. This can bring problems to other users.
- XML namespaces are problematic to use and namespace support can be difficult to correctly implement in an XML parser.
- Expressing overlapping (non-hierarchical) node relationships requires extra effort.
(Viewed on November 2009, Available from)
Chapter 6 The Importance of XML
One of the problems is that HTML, the markup language used on the Web, is good at displaying information in different fonts and colors but poor at describing the structure of information. XML, on the other hand, is very good at describing the structure of information, storing data, and exchanging data, so XML is needed in this case.
Nowadays, relational database systems no longer work in isolation when processing data. For example, most traditional databases cannot handle video, audio, or other complex data, but XML databases can support these kinds of data without any problem.
Another reason XML is important is its simplicity: XML documents are readable by both humans and machines. Because of this, developers and programmers can easily understand and create their XML documents. XML parsers are also simple to build.
The next reason is extensibility. XML allows developers to create their own DTDs, effectively creating 'extensible' tag sets that can be used for multiple applications. XML itself is also being extended with several additional standards that add styling, linking, and referencing abilities to the core XML set of capabilities.
Other than that, XML is also interoperable, because it can be used on a wide variety of platforms and interpreted with a wide variety of tools. XML supports multilingual documents and the Unicode standard, which is important for electronic business applications. (Viewed on November 2009, Available from)
Chapter 7 Critical Evaluation
Evaluation 7.1 XML and HTML
Material: XML is not a replacement for HTML.
From: [Accessed: 11/11/2009]
Based on the website that I researched, I found a statement that "XML is not a replacement for HTML". In my opinion, I agree with this statement, because XML was created for a different purpose than HTML, so XML cannot replace HTML. HTML is designed for displaying information, while XML is designed for storing, describing, and exchanging data. XML is content-driven and its markup tags are self-defined, while HTML is format-driven and its markup symbols are predefined; their structures are different, so XML is not a replacement for HTML. Nowadays, XHTML combines XML and HTML; it was developed to make HTML more extensible and to increase interoperability with other data formats.
Evaluation 7.2 Future of XML
Material: The future of XML is still unclear because of conflicting views of XML users.
From: [Accessed 11/11/2009]
Based on my research, I found a website about the future of XML with the statement that "The future of XML is still unclear because of conflicting views of XML users." In my opinion, XML has been very useful in recent years. XML really helps with storing information; for example, XML makes it possible to manage large quantities of information that cannot fit in relational database tables. Besides that, XML can describe data very well and simplifies information exchange across language barriers.
Nowadays, the specification is growing and becoming more complex, so XML is no longer simple, and it takes longer to develop XML tools. So, in my opinion, XML should be upgraded to a new version so that its quality can improve even as the specifications grow. Besides that, web applications continue to grow and be upgraded, and since XML underpins web applications, it will grow as well.
You can't use a language effectively if it doesn't enable you to easily communicate with the outside world, whether through a display, a file system, or a network. Furthermore, a language's I/O facilities should be robust, cohesive, and easy to use. In 1979 I was on a project that used a proprietary language called PL/S III. It had a small keyword list, was nicely block-structured, and was easy to learn but it had no I/O capability whatsoever! I had been out of school only one year and was thoroughly dumbfounded. An I/O-less programming language? But...! What...? How...? Needless to say, I was full of questions. The answer: our local BAL [1] expert wrote assembler language routines to give us the I/O features we needed.
Today's Internet culture will not tolerate such impediments to rapid development. When it comes to I/O power, Java has everything most developers could possibly wish for. In the unlikely case you don't find what you need, there are (as always) classes to extend and interfaces to implement, and as fast as you can define int MyInputClass.read(){...}, you've solved your problem.
Abandon printf, All Ye Who Enter Here
If you're looking to save keystrokes, though, look somewhere else. Certainly you've discovered by now that Java is not known for its economy of expression, and nowhere does verbosity rule and reign so mightily as in Java's I/O classes. As in other areas of the Java library, flexibility is the name of the game. A quick glance at the java.io package reveals over forty classes spread throughout four hierarchies, with a few extra classes on the side (see Figures 1, 2, 3, 4, and 5). There are even more classes in these hierarchies if you open the lid on the packages java.util.zip and java.security, but I'll leave those for another time. It is indeed a daunting maze at first glance, but there is good design lurking beneath.
The first thing to understand about these hierarchies is that the first two, InputStream and OutputStream, deal with streams of bytes, while the Readers and Writers traffic in characters, which are 16-bit Unicode quantities. If you're working with binary data, therefore, you'll probably want to use byte streams, whereas character streams are suitable for text processing.
The second thing to note is that these classes fall into two broad categories, roughly referred to as low-level and high-level streams. Low-level streams are objects you define independently of any other stream, like a stream connected to files or arrays in memory, or pipes between processes. They include the following:
ByteArrayInputStream & ByteArrayOutputStream
FileInputStream & FileOutputStream
PipedInputStream & PipedOutputStream
CharArrayReader & CharArrayWriter
FileReader & FileWriter
PipedReader & PipedWriter
StringReader & StringWriter
The rest are either abstract classes or high-level streams that act as wrappers to existing streams to add functionality thereto. For example, if you have a FileOutputStream you can make it a BufferedOutputStream for increased efficiency like this:
// First define a file stream:
FileOutputStream fs = new FileOutputStream("myfile.out");

// Wrap in a buffered stream:
BufferedOutputStream bfs = new BufferedOutputStream(fs);
Or you can do it all at once like this:
BufferedOutputStream bfs =
    new BufferedOutputStream(
        new FileOutputStream("myfile.out"));
The wrapper buffers the output for efficiency when processing large files, and when you call bfs.close the underlying file stream also closes. This design obviates the need for a combinatorial explosion of classes such as BufferedFileOutputStream, BufferedFileInputStream, BufferedByteArrayInputStream, etc. [2].
Overwhelmed yet? I warned you about all the typing! If you read my article in the July 2000 issue about text formatting, you saw a whole family of classes just for formatting different kinds of numbers. The separation of formatting from I/O is good design, just like the separation among low-level and high-level streams, but it keeps your fingers busy and your source files full. Alas, there is no concise printf-like operation in Java Land!
Byte Streams
The basic contracts for all byte streams are defined in the abstract classes InputStream and OutputStream and consist of methods to read or write one or more bytes. The following command-line filter uses the pre-defined streams System.in and System.out to copy standard input to standard output.
import java.io.*;

class CopyInput {
    public static void main(String[] args) throws IOException {
        int b;
        while ((b = System.in.read()) != -1)
            System.out.write(b);
    }
}
Most of the methods found in the java.io package may throw an exception derived from java.io.IOException. In real programs, you'll want to catch and process these exceptions, but for convenience and clarity, in this article I just include a throws clause in the specification of main. As always, consult the online Java API documentation for more detail.
InputStream.read extracts the next byte from its underlying source and returns it as an int for the same reason that getc returns an int in C: so -1, the end-of-file indicator, will be distinct from any byte. OutputStream.write pushes a byte onto its underlying sink. It's important not to use System.out.print here. System.out is actually an instance of PrintStream, whose print methods do some minimal formatting, which in this case would output the character representation of the byte code value of its argument. For example, in the ASCII character set an opening brace would print as "123" instead of "{".
To copy files with this program, you have to use redirection on the command line, of course, as in:
C:> java CopyInput <infile >outfile
To read command-line arguments as file names explicitly, you can use main's string array parameter, as follows:
import java.io.*;

class CopyFile {
    public static void main(String[] args) throws IOException {
        // Copy files explicitly:
        FileInputStream fin = new FileInputStream(args[0]);
        FileOutputStream fout = new FileOutputStream(args[1]);
        int b;
        while ((b = fin.read()) != -1)
            fout.write(b);
        fin.close();
        fout.close();
    }
}
Each constructor opens file streams for input/output automatically and throws an exception if the input file doesn't exist or if some other error occurs. You always need to close any top-level stream that you create. (Remember, Java doesn't have destructors like C++!) The program above will of course throw an ArrayIndexOutOfBoundsException if you don't provide two filenames on the command line. The program in Listing 1 combines the flexibility of the CopyInput.java and CopyFile.java above by defaulting to standard input or standard output if you omit any filenames.
The program in Listing 2 shows how easy it is to define your own streams. In the case of an input stream, all you have to do is extend InputStream and override the read method, which returns the next byte from your stream. Here I just return a random byte value. The other read methods that extract an array of bytes are implemented in terms of InputStream.read, so I get those for free. After reading the first byte, I save the next three bytes in a byte array. I then wrap a ByteArrayInputStream around that array and read it again. The method InputStream.read(byte[] arr) returns the number of bytes read, up to arr.length. The InputStream.available method returns the number of bytes that can be read without blocking, which in this case is the entire array. InputStream.mark is like ftell in C: it stores a file position that you can return to with the reset method, provided you don't read more than the number of bytes that you passed to mark initially. The skip method is an efficient way of ignoring bytes in a stream. The casts to byte are significant in this example. Remember that ints are actually returned. If I didn't do the cast, then negative numbers would print as their positive two's-complements.
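Listing 2 itself isn't reproduced here, so here is a compressed, self-contained sketch of the same technique (class and variable names are mine): extend InputStream, override read, and then re-read the captured bytes through a ByteArrayInputStream wrapper.

```java
import java.io.ByteArrayInputStream;
import java.io.IOException;
import java.io.InputStream;
import java.util.Random;

// A toy input stream: extend InputStream and override read().
class RandomByteStream extends InputStream {
    private final Random rand = new Random(42); // fixed seed for repeatability
    @Override
    public int read() {
        return rand.nextInt(256); // one unsigned byte value; never signals EOF
    }
}

class CustomStreamDemo {
    public static void main(String[] args) {
        try {
            RandomByteStream rbs = new RandomByteStream();
            byte[] buf = new byte[3];
            rbs.read(buf); // the bulk read() comes free from InputStream

            // Re-read the same bytes through a ByteArrayInputStream wrapper.
            ByteArrayInputStream bais = new ByteArrayInputStream(buf);
            for (byte b : buf) {
                if (bais.read() != (b & 0xFF)) // mask: read() returns 0-255
                    throw new AssertionError("wrapped stream disagrees");
            }
        } catch (IOException e) {
            throw new RuntimeException(e);
        }
    }
}
```

The only method the subclass must supply is the single-byte read; everything else is inherited behavior.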
High-Level Streams
The program in Listing 3 illustrates SequenceInputStream, a high-level input stream that wraps an arbitrary number of existing streams so that you can process them in sequence as a single stream. In this example, I treat the three log files depicted in Listings 4, 5, and 6 as a single log file. First I have to open each stream separately and place their stream references into any collection that can yield an Enumeration [3]. That enumeration then becomes the argument to the constructor for my SequenceInputStream, which I process as a single entity. SequenceInputStream.close automatically closes the underlying files' streams.
It is often convenient to write program values out to files so that you can read them back later. You don't really need to know how it's done, nor do you ever plan on reading the intermediate file(s) you just want to reconstitute objects at some future time. This is a well known technique called serialization and is supported by two sets of classes in Java. The DataOutputStream class serializes Java's primitive types, as well as String objects, to an existing OutputStream with methods like writeBoolean, writeInt, writeFloat, etc.; a DataInputStream object reads those serialized bytes and reconstructs the corresponding objects, as shown in Listing 7.
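Listing 7 isn't shown here, so the following is a self-contained sketch of the same round trip (names are mine), using an in-memory byte array in place of the data.dat file:

```java
import java.io.*;

// Round-trip primitives through Data streams, with a byte array
// standing in for the file used in Listing 7.
class DataStreamDemo {
    static int roundTripInt(int value) {
        try {
            ByteArrayOutputStream bytes = new ByteArrayOutputStream();
            DataOutputStream out = new DataOutputStream(bytes);
            out.writeBoolean(true);
            out.writeInt(value);
            out.writeUTF("hello");
            out.close();

            DataInputStream in = new DataInputStream(
                new ByteArrayInputStream(bytes.toByteArray()));
            boolean flag = in.readBoolean(); // must read back in the same order
            int n = in.readInt();
            String s = in.readUTF();
            in.close();
            if (!flag || !s.equals("hello"))
                throw new AssertionError("bad round trip");
            return n;
        } catch (IOException e) {
            throw new RuntimeException(e);
        }
    }
    public static void main(String[] args) {
        System.out.println(roundTripInt(42)); // prints 42
    }
}
```

Note that the reader must call the read methods in exactly the order the writer called the write methods; the stream carries no field names, only bytes.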
It's instructive to look at the intermediate file, data.dat. Here it is in hexadecimal format (with linefeeds added for clarity):
/* Contents of data.dat in hex:
   00 41 00 41
   00 05 68 65 6C 6C 6F
   3F 80 00 00
   00 00 00 01
*/
As you can see, Booleans are stored as single bytes with the value 0 for false (and non-zero for true), and a char is stored in two-byte Unicode format. The writeUTF method stores a string in UTF-8 format, which is a standard and efficient way of serializing Unicode strings. The first two bytes (00 05) constitute a short integer representing the number of bytes in the serialized representation of the string, which bytes then follow. Traditional ASCII characters require only one byte, the next 1,920 Unicode code points require two bytes, and the rest require three, so UTF-8 is good for ASCII, but wasteful for Asian characters.
These data streams only work for primitives and strings. You can serialize arbitrary objects, including arrays, however, with ObjectOutputStream and ObjectInputStream. These are very intelligent classes. You can have objects within objects extending other objects, serialize them to a byte stream, and when you deserialize them all the relationships are intact. The program in Listing 8 defines a class Person, which extends class CarbonUnit and contains an instance of class Name and class Date. To serialize a Person to a byte stream, all classes involved must implement the marker interface Serializable, otherwise you get a java.io.NotSerializableException (Person is implicitly serializable since it extends CarbonUnit, which implements the Serializable interface). All non-static, non-transient fields are serialized; if you want to ignore a field during serialization, qualify it with the transient keyword. This typically applies to computed fields or values or references that are cached. When an object is reconstituted, its transient fields are zero initialized.
When I'm ready to serialize a Person object, I wrap a FileOutputStream with an ObjectOutputStream and call writeObject. That's it! To deserialize, I wrap the FileInputStream in an ObjectInputStream and call readObject. Easy enough. Since readObject returns an Object, I have to cast to a Person.
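As a compact stand-in for Listing 8, here is a sketch of the same serialize/deserialize cycle (the Point class is my own invention, not the article's Person), again using an in-memory stream instead of a file:

```java
import java.io.*;

// A serializable class: implementing the marker interface is all it takes.
class Point implements Serializable {
    int x, y;
    transient int cached; // transient fields are skipped, restored as 0
    Point(int x, int y) { this.x = x; this.y = y; this.cached = x + y; }
}

class SerializeDemo {
    static Point roundTrip(Point p) {
        try {
            ByteArrayOutputStream bytes = new ByteArrayOutputStream();
            ObjectOutputStream out = new ObjectOutputStream(bytes);
            out.writeObject(p);
            out.close();

            ObjectInputStream in = new ObjectInputStream(
                new ByteArrayInputStream(bytes.toByteArray()));
            Point copy = (Point) in.readObject(); // readObject returns Object
            in.close();
            return copy;
        } catch (IOException | ClassNotFoundException e) {
            throw new RuntimeException(e);
        }
    }
    public static void main(String[] args) {
        Point copy = roundTrip(new Point(3, 4));
        if (copy.x != 3 || copy.y != 4 || copy.cached != 0)
            throw new AssertionError("unexpected restored state");
    }
}
```

The transient field comes back zero-initialized, exactly as described above, while the non-transient fields survive the round trip intact.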
How are these methods so smart? First of all, ObjectOutputStream.writeObject determines the actual type information of its argument via reflection, an object introspection capability in Java that I'll discuss in a future article. Also, object references are replaced with local serial numbers that have meaning within each serialization stream, so when objects are serialized, the restored references point to the right objects.
You might want to experiment with this example in a couple of ways. First, remove the "implements Serializable" qualifier from the CarbonUnit class. You'll get no errors, but the id numbers will be wrong in the reconstituted objects (3 and 4, respectively, instead of 0 and 1). Why is that so [4]? And if you try removing the same qualifier from Name, what will happen [5]?
Character Streams
With only a few exceptions, all byte streams have a character-based Reader/Writer counterpart. A character stream version of CopyFile.java above, for example, looks like this:
import java.io.*;

class CopyChars {
    public static void main(String[] args) throws IOException {
        // Copy files character-by-character:
        FileReader fin = new FileReader(args[0]);
        FileWriter fout = new FileWriter(args[1]);
        int c;
        while ((c = fin.read()) != -1)
            fout.write(c);
        fin.close();
        fout.close();
    }
}
This version doesn't add any functionality over the byte stream version and is of little use. Character streams do allow you to read a line at a time, however, which can be useful in text processing applications. To read lines you need a BufferedReader. To behave like a command-line filter, the utility in Listing 9 wraps System.in in a BufferedReader and calls BufferedReader.readLine repeatedly. InputStreamReader is a bridge from byte streams to character streams: it takes a byte stream and wraps it in a Reader that returns characters [6]. There is also an OutputStreamWriter for converting output byte streams to Writers. Since keeping track of lines is a common task, java.io provides the LineNumberReader class that keeps count for you (see Listing 10).
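As a stand-in for Listing 9, here is a minimal sketch of line-at-a-time reading (names are mine); a StringReader substitutes for the wrapped System.in so the example is self-contained:

```java
import java.io.*;

// Line-at-a-time reading with BufferedReader; a StringReader stands in
// for the InputStreamReader-wrapped System.in described above.
class LineDemo {
    static int countLines(String text) {
        try {
            BufferedReader in = new BufferedReader(new StringReader(text));
            int lines = 0;
            while (in.readLine() != null) // null signals end of input
                lines++;
            in.close();
            return lines;
        } catch (IOException e) {
            throw new RuntimeException(e);
        }
    }
    public static void main(String[] args) {
        System.out.println(countLines("one\ntwo\nthree")); // prints 3
    }
}
```

A real filter would loop the same way, processing each returned line instead of merely counting.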
Token Parsing
A good deal of text processing consists of fishing through input files for data surrounded by delimiters, which are ignored on input. A common situation in data processing requires reading comma-delimited files, such as when exporting data from one database to import into another. With the StreamTokenizer class you can define which characters in a character stream make up tokens and which don't (similar to strtok in C). The program in Listing 11 reads files of employee tokens that come in groups of three: a name, a number, and a title. The first and third fields are strings and can contain spaces. By default, StreamTokenizer recognizes white space as an ordinary (i.e., non-token) character, so to preserve spaces in a token it should be surrounded by quotes, which are subsequently discarded on input. To tag the comma character as a non-token character, call the ordinaryChar method. The parseNumbers method tells the tokenizer to recognize numbers and not just strings in general. A nice feature for compiler writers is the ability to ignore C and C++-style comments (both of which are valid in Java, of course see the calls to slashSlashComments and slashStarComments). The tokenizing loop is driven by a call to nextToken, which returns StreamTokenizer.TT_EOF when input is exhausted. To retrieve a token as a string, simply access the public sval field. For numbers, use nval. Whenever a non-token character is extracted, it is stored in the ttype field.
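Listing 11 isn't reproduced here, so the following sketch (names and input are mine) exercises the StreamTokenizer calls described above on an in-memory record:

```java
import java.io.*;

// StreamTokenizer over an in-memory string, configured along the lines
// of the employee-record parser described for Listing 11.
class TokenDemo {
    static String tokens(String input) {
        try {
            StreamTokenizer st = new StreamTokenizer(new StringReader(input));
            st.ordinaryChar(','); // commas are delimiters, not tokens
            StringBuilder out = new StringBuilder();
            while (st.nextToken() != StreamTokenizer.TT_EOF) {
                if (st.ttype == StreamTokenizer.TT_WORD || st.ttype == '"')
                    out.append(st.sval).append(' '); // word or quoted string
                else if (st.ttype == StreamTokenizer.TT_NUMBER)
                    out.append((int) st.nval).append(' '); // nval is a double
            }
            return out.toString().trim();
        } catch (IOException e) {
            throw new RuntimeException(e);
        }
    }
    public static void main(String[] args) {
        // The quotes preserve the space inside the name, then are discarded.
        System.out.println(tokens("\"Jane Doe\", 1001"));
    }
}
```

Quoted strings arrive with ttype set to the quote character and the text (minus quotes) in sval, while numbers arrive in nval, just as the article describes.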
With fixed-sized records like this, you may prefer to read a line at a time and parse each string by searching for commas. If you want to allow empty fields, then you'll need to use the String class search methods to find commas and then calculate substrings. If no empty fields are allowed, then the StringTokenizer class provides a simpler solution. When I create a StringTokenizer object for each input line in Listing 12, I pass it the characters I'm not interested in (a comma and linefeed in this case; I suppose I should have included other punctuation as well). It collects tokens consisting of all other characters and returns them via the nextToken method. One drawback to this approach is that it does not ignore comments, so I had to remove the leading comment line from the input.
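A quick sketch of the StringTokenizer approach (names are mine), including the empty-field drawback mentioned above:

```java
import java.util.StringTokenizer;

// StringTokenizer splits a line on the delimiter characters you pass in;
// note that empty fields are silently collapsed.
class SplitDemo {
    static int countFields(String line) {
        StringTokenizer st = new StringTokenizer(line, ",\n");
        int n = 0;
        while (st.hasMoreTokens()) {
            st.nextToken();
            n++;
        }
        return n;
    }
    public static void main(String[] args) {
        System.out.println(countFields("Jane Doe,1001,Manager")); // prints 3
        System.out.println(countFields("Jane Doe,,Manager"));     // prints 2: empty field lost
    }
}
```

The second call shows why StringTokenizer only fits data with no empty fields: adjacent delimiters are treated as one.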
Quite often, though, lines don't matter, like when parsing a Java program. The program in Listing 13 shows how easy it is to extract tokens from a free-form text file. It looks for the class keyword and then reads the next word, interpreting it as a class name. This time the loop inspects the ttype field, which returns TT_WORD if a word was found, TT_NUMBER if a number was found instead (which doesn't apply in this case), TT_EOL if it found the end of a line (which only works if you have previously called eolIsSignificant), and TT_EOF if input was exhausted; otherwise it found a non-token character and returns it.
Summary
Even if you have to type almost to the point of suffering from carpal tunnel syndrome, I'm sure you'll agree that the design of the Java I/O system is Good Work. You have facilities to create a stream that talks to just about anything, with common filtering features such as buffering, counting lines, or literally interpreting program objects. The high-level classes come nicely equipped to function as decorators for the low-level ones, so the number of classes needed is much smaller than it would have been otherwise. And in many of the cases where you have to define your own stream, you can get away with just overriding a single read or write method. In January's article, I'll talk more about files and working with your local file system from Java programs.
Notes
[1] If you entered the programming biz after 1990, you probably don't know that this is an acronym for Basic Assembler Language, which was the lingua franca of IBM mainframes.
[3] I chose ArrayList instead of Vector, which is my wont, because the former is one of the new collection classes. If you need a refresher on collections and enumerations, see my previous article in the September 2000 issue of this magazine.
[4] When a superclass is not serializable, its fields are not written to the stream; instead, deserialization invokes its no-argument constructor, which assigns fresh id numbers.
[5] Since Name is a non-transient, non-static field of Person, writeObject attempts to serialize it, which results in a NotSerializableException. The standard Date class is also serializable.
[6] How many bytes make up a character depends on the encoding you choose when you create the reader. (There is an optional constructor that takes a second argument for specifying the encoding.)
An Introduction to Building Shopify Themes
More and more web designers are turning their skills to new and clever uses. Whether it be icons, themes, fonts, t-shirts, or prints, subsidizing client work by selling products and services online is on the increase—especially in the web community.
A few years ago opening up an online store was a scary and daunting task. Thankfully, the process today is a lot simpler; for some platforms it’s as easy as creating a theme using your existing HTML, CSS, and JavaScript knowledge. Even if you don’t want to open up your own store, having the skills to offer ecommerce to your new and existing clients is a big plus.
Evaluating Ecommerce Platforms
There are plenty of options when it comes to setting up shop online. These range from the humble “Buy Now” PayPal button to rolling your own custom web app with plenty in between. I have tried most offerings over the years, but when it came to choosing a solution for my own company, I decided to use Shopify. Here’s why:
- It’s a fully hosted solution, which means zero server set up and maintenance—a big bonus
- The monthly cost is not prohibitive. Our fees will grow in line with our needs and revenue
- It’s theme based and very designer friendly. We can use our existing markup and CSS
- It allows us to use our merchant account for payments. This was a big plus as we don’t want to use PayPal
- We can use our own domain with Shopify, e.g. shop.useourowndomain.com
- The “app” ecosystem is strong. We’ll need a digital delivery system and FetchApp fits the bill perfectly
- It’s popular. Over 25,000 stores use Shopify, including a lot of well-known web brands such as A Book Apart, United Pixelworkers, Tattly, and Hard Graft
- The company is strong, the product is constantly being developed, and above all they have a great logo!
Shopify is a designer friendly ecommerce platform for designers
If you have built a website before then you are amazingly well equipped to build a theme for Shopify. During this tutorial I’ll be showing you the basics of how a Shopify theme is constructed, dissect a key template, and share a few hints and tips that I have picked up along the way.
Before We Begin
The Shopify test shop interface
In order to start developing your theme you will need a Shopify account. The simplest way to do this is to sign up to the partner program. This will enable you to create and work with a “test shop.” As the name suggests, it’s not a “live” shop—think of it more as a playground. When you are ready to open up your store “proper” you will need to register for a full account.
Once you have registered for the partner program click the “Test Shops” tab, and follow the instructions to set up your new test store. Once it has been created, following the link will take you directly into the Shopify admin. This is the heart of your store, but for our purposes we will be concentrating on the themes tab.
What’s a Theme?
A theme comprises a number of specially named templates as well as the normal kind of assets you would expect for any web project including images, CSS, and JavaScript.
Radiance is a free Shopify theme
There’s one final component that you might not have come across before and that’s the templating language called Liquid. It’s similar in style to other templating tools like Twig and Smarty, and whilst it might feel a little alien at first, I am confident you’ll soon get the hang of it. If you have ever built a theme for WordPress, Tumblr, or Expression Engine, then many of the concepts will be familiar to you. If you haven’t, don’t worry, we’ll go through the main points to get you up and running.
Basic Theme Structure
At the time of writing, test stores are created using the free “Radiance” theme. You’ll find all the files that encompass this theme by entering the “Template Editor” section under the themes tab.
Before we carry on, let’s download the theme so we can look at it more easily. We aren’t going to go into the specifics of this theme, but we’ll use it to give you a taste of how things work and fit together.
To download a ZIP archive of the Radiance theme, follow the “Themes” tab and then the “Manage Themes” link. From here, you can request to “Export Theme.” You will receive an email once it’s available to download.
The online Shopify template editor
Once you have unpacked your ZIP archive, open up the folder. You will notice that there are 5 subfolders within it. These are:
- Assets
- Config
- Layouts
- Snippets
- Templates
Let’s have a more in-depth look at their contents.
Assets
The assets folder is home to all your images, JavaScript, and CSS files. If you are like me, you would normally separate your files into subfolders based on type, that is, one for CSS and another for JavaScript. Be aware that you can’t do this for a Shopify theme; all assets must be in this folder.
Config
Whilst creating Shopify themes is relatively easy, they do offer us the potential to customize them heavily. Config files give us the ability to change the way a site looks by selecting different colors or fonts from the admin area. It’s a little outside the scope of this article, but something worth considering if you ever produce a theme for sale in the Shopify Themes Store.
Layouts
The layout folder usually contains one file called “layout.liquid,” although it can contain more. A layout file enables you to define the outer skin of a page. This approach saves you having to include the same files in each and every template. I liken it to the outer skin of an onion. Our template files (the inner part of the onion) are then wrapped with this layout file (the outer skin) to give us the completed page.
Typically, the “layout.liquid” file will include your opening and closing HTML declarations, links to CSS and JavaScript files, principal navigation, and footer. It will also include a special Liquid tag, {{ content_for_layout }}, which Shopify uses to add in the relevant subtemplate code and logic.
Don’t be put off by the amount of code in the Radiance theme “layout.liquid” file as there’s a lot going on. The Blankify theme, available on GitHub, features a much more basic “layout.liquid,” which is much easier to digest.
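To give you a feel for the shape of a layout, here is a bare-bones sketch of a “layout.liquid” (not the Radiance file itself; the stylesheet name is hypothetical):

```liquid
<!DOCTYPE html>
<html>
<head>
  <title>{{ page_title }}</title>
  {{ 'style.css' | asset_url | stylesheet_tag }}
</head>
<body>
  <!-- shared header and navigation go here -->
  {{ content_for_layout }}
  <!-- shared footer goes here -->
</body>
</html>
```

Every template you build is rendered in place of {{ content_for_layout }}, so the markup around it appears on every page of the store.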
Snippets
Snippets are files containing chunks of reusable code, these also have the “.liquid” file extension. If you plan on having the same piece of code in more than one template (but not every template as you could arguably use your layout file for this) then a snippet is a good option. The Radiance theme contains a lot of snippets, including social links and pagination.
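A snippet is pulled into a template or layout with the include tag. For a snippet saved as “social-links.liquid” (a hypothetical name) you would write:

```liquid
{% include 'social-links' %}
```

Note that you reference the snippet by name only, without the “.liquid” extension.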
Templates
These files are generally the ones you will spend your time crafting, and represent the inner part of the onion.
In order for your theme to work, you must follow some simple naming conventions, for example, the template that is rendered when you are viewing a product must be called “product.liquid.”
The Radiance theme in Mac OS X Finder
Here’s a list of the required templates that make up your theme:
- 404.liquid—Used when a page is not found
- article.liquid—The article page for the blog function
- blog.liquid—The archive listing page for a blog
- cart.liquid—A list page of all the items in your cart
- collection.liquid—This is used for displaying collections of your products (collections can also be thought of “product categories”)
- index.liquid—The “home page” that you can customize to display one or more products, blog items, and collections
- page.liquid—A basic page for content, ideal for terms and conditions, about pages, etc.
- product.liquid—Used to display the individual products to the customer
- search.liquid—The template to display search results
You will also notice from the image above that the Radiance theme contains a subfolder called customers. Customer templates allow you to customize your customer area, that is, log in pages, order history, and account details such as addresses for delivery.
You can find out more about these templates on the Shopify Wiki. If you don’t have this folder defined your shop will simply use the generic Shopify templates.
Intro to Liquid
You will have noticed that a number of our files have the “.liquid” extension. Liquid files are simply HTML files with embedded code in them. This embedded code is created using curly braces like {{ }} and {% %} so it’s easy to spot.
Liquid does two things really well. Firstly, it allows us to manipulate our output and, secondly, it allows us to include logic in our templates without having to know any back-end code.
Firstly, let’s look at output. Here’s a really simple example. If we want our <h2> element to display our product title then we can make use of the product data available to us, and do the following in our template:
<h2>{{ product.title }}</h2>
When Shopify renders a product page our Liquid code will be replaced by the title of our product. Notice how liquid uses the “dot” syntax. Our product also has a price attached to it, this would be displayed in a similar way using:
{{ product.price }}
The “dot” allows us to access the different properties of the product and display that data.
We could go further and say that we want to turn our product title into uppercase, that’s fairly easy too:
<h2>{{ product.title | upcase }}</h2>
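Shopify’s Liquid also includes commerce-specific filters. For example, product.price is stored as a raw number, and the money filter formats it according to your shop’s currency settings:

```liquid
{{ product.price | money }}
```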
Liquid can also help us generate elements quickly. For example, an image tag:
{{ 'logo.png' | img_tag }}
When rendered this will produce the following HTML:
<img src="logo.png" alt="" />
Of course this isn’t much use as this has resulted in a relative path to the file, which just won’t work. By adding in an extra filter Shopify will prepend the full path to the file. We can also add in our
alt text for good measure:
{{ 'logo.png' | asset_url | img_tag: 'Site Logo' }}
This will produce the following HTML:
<img src="/files/shops/your_shop_number/assets/logo.png" alt="Site Logo" />
As well as allowing us to use data from our shop (product titles, prices, and so on), Liquid enables us to control the flow of the page; this is the logic I referred to earlier.
Let’s say a product is sold out, great news for us, but not so good for our potential customer. By using Liquid we can show a special message if the product is no longer available. Liquid logic is very readable so should hopefully be easy to follow.
Here’s how that could work in your template:
{% if product.available %}
  Show Add to cart button here
{% else %}
  Display message about when the product will be next available
{% endif %}
There's an easy way to remember the different tag structures: two curly braces equal output, and one curly brace followed by a % equals logic.
Mark Dunkley of Shopify curates a very up-to-date cheat sheet. It’s a must bookmark for anyone working with Shopify themes. As well as being a great reference it’s a quick way to see what variables are available on each template.
Mark Dunkley’s essential Shopify Cheat Sheet
Dissecting a Template
As I mentioned earlier the Radiance theme isn’t the simplest for first time theme designers. I encourage you, however, to have a look at the different files and consider how they all fit together.
When you are ready to build your full theme, have a read of a great entry on the Shopify wiki called “Theme from scratch.”
TextMate users can take advantage of Meeech’s Shopify bundle. Of course, you can always ZIP up your theme and upload it to your test store yourself.
product.liquid
As mentioned earlier, the “product.liquid” template is used when a customer is viewing the product detail page. If you are familiar with WordPress this is very similar in nature to the “single.php” template.
If you have a look at Mark’s cheat sheet, you can see we have a number of product related variables available to us, many of which we will use in our example.
Let’s break it down into manageable sections. First up, let’s output a basic title and description. As discussed earlier, these are rendered using the dot syntax:
<h2>{{ product.title }}</h2> <p>{{ product.description }}</p>
Next, we run a quick check to see if our product is available, in other words, is it “in stock”? If it is, the template will show a form that allows the customer to add the product to their cart. Additionally, it will render a select list displaying the different variants of that product.
Variants are one way of organizing your products in Shopify. Let’s say you have a children’s bike that comes in red, white, and blue, and costs £295. One approach here would be to have three variants of the bike product to handle the different colors. There are other approaches, but this works well.
If the product is out of stock the template will render the message, “Sorry, the product is not available.”
Reconstructed, that logic looks roughly like this (the form markup and wording below are illustrative; the exact Radiance template differs):

{% if product.available %}
  <form action="/cart/add" method="post">
    <select name="id">
      {% for variant in product.variants %}
        <option value="{{ variant.id }}">{{ variant.title }} - {{ variant.price | money }}</option>
      {% endfor %}
    </select>
    <input type="submit" value="Add to cart" />
  </form>
{% else %}
  <p>Sorry, the product is not available.</p>
{% endif %}
A product is nothing without images. The next part of the template deals with displaying the images we have added in the admin. This is also a good introduction to another Liquid concept: the “forloop“.
The first line can be confusing at first, but it’s nothing to worry about. Essentially, what it means is:
“For every image associated with our product, do the following.”
Every Liquid loop must end with an "endfor"; you'll notice this further down. Without it, your site can get very messy, so remember to include it. It's like a closing tag in HTML.
Inside our “for loop”, we run a little check to see if we are on the very first loop using
{% if forloop.first %}. If we are, we output a large version of the image. If we are further along, we output a small version of our image that is linked to our full size image. Reconstructed, that part of the loop looks roughly like this (the exact markup and image filters in Radiance may differ):

{% for image in product.images %}
  {% if forloop.first %}
    <img src="{{ image | product_img_url: 'large' }}" alt="{{ product.title }}" />
  {% else %}
    <a href="{{ image | product_img_url: 'original' }}">
      <img src="{{ image | product_img_url: 'small' }}" alt="{{ product.title }}" />
    </a>
  {% endif %}
{% endfor %}
And that is the “product.liquid” template. You can, of course, include other elements, perhaps links to other collections or similar products.
Learning From Others
Liquid, whilst very powerful, is rather straightforward. Once you have worked out how to use the dot syntax, the different template variables, and mastered loops, there really isn’t too much else to learn.
Each template works in a very similar way; for example, the collections template gives you access to data relating to the currently viewed collection of products. Likewise, the cart page will give you access to variables relevant to displaying the customer’s cart contents.
I learned a lot by working out how others have constructed their templates, for example, the collection.liquid template in Blankify—there’s really not much to it.
Blankify is an old but useful Shopify starter theme
Next Steps
I hope this tutorial has given you a good introduction to the elements involved in building a Shopify theme. I have included some great resources below that you should definitely check out as you progress and go deeper into your theme building.
If you have any questions please drop a comment below.
https://www.sitepoint.com/an-introduction-to-building-shopify-themes/
- Type: Bug
- Status: Resolved
- Priority: Minor
- Resolution: Duplicate
- Component/s: workflow-api-plugin
Desired behaviour
- Duration should be 20s for the whole pipeline.
- ed1 node duration should be 20s
- ed2 node duration should be 10s
Undesired behaviour
- Duration should be 20s for the whole pipeline.
- ed1 node duration should be 20s
- ed2 node duration should be 10s but is reported as 20s
Example
def map = [:]
map['ed1'] = { node() { sleep 20 } }
map['ed2'] = { node() { sleep 10 } }
stage('Stage 1') {
    parallel map
}
Original request
Hi,
the execution time of a stage containing two parallel jobs shows, after the first job has finished, only the execution time of the first job, as long as the second job is running.
After the second job has finished, the displayed time is updated to the right runtime.
It would be nice if the time could be right during the execution of the job as well.
- blocks
JENKINS-39158 A completed parallel branch isn't showing up as completed in karaoke mode
- Closed
- duplicates
JENKINS-51208 Execution time of parallel steps displayed as max time among all steps in block
- Open
- is blocked by
JENKINS-41685 incorrect event received with workflow-api 2.8
- Closed
- is duplicated by
JENKINS-40031 Progress bar in stage not shown correctly if using parallel step in stage
- Closed
- relates to
JENKINS-43746 Update step heading to include selected parallel and live duration
- Closed
This problem seems to be occurring for us on a newer version of Blue Ocean (Jenkins Core 2.73.3, Blue Ocean 1.4.2 + dependencies, Pipeline 2.5, Pipeline: API 2.26).
We have a parallel step where each of the individual nodes has a clearly different runtime, but Blue Ocean (and the Blue Ocean API) report all the parallel nodes as having roughly the same runtime (+/- a few ms).
See above comment; this issue is happening for us on the latest version of Blue Ocean.
I created a new ticket to track my issue, which may be different from this one.
Sam Van Oort I am going to backport this dependency upgrade to our 1.1 branch so we don't have people confused about it.
https://issues.jenkins-ci.org/browse/JENKINS-38536?attachmentOrder=desc
|
Debugging¶
You can connect the debugger in your editor, for example with Visual Studio Code or PyCharm.
Call
uvicorn¶
In your FastAPI application, import and run
uvicorn directly:
import uvicorn
from fastapi import FastAPI

app = FastAPI()

@app.get("/")
def root():
    a = "a"
    b = "b" + a
    return {"hello world": b}

if __name__ == "__main__":
    uvicorn.run(app, host="0.0.0.0", port=8000)
About
__name__ == "__main__"¶
The main purpose of the __name__ == "__main__" check is to have some code that is executed when your file is called with:
$ python myapp.py
but is not called when another file imports it, like in:
from myapp import app
More details¶
Let's say your file is named
myapp.py.
If you run it with:
$ python myapp.py
then the internal variable
__name__ in your file, created automatically by Python, will have as value the string
"__main__".
So, the section:
uvicorn.run(app, host="0.0.0.0", port=8000)
will run.
This won't happen if you import that module (file).
So, if you have another file
importer.py with:
from myapp import app # Some more code
in that case, the automatically created variable __name__ inside myapp.py will not have the value "__main__".
So, the line:
uvicorn.run(app, host="0.0.0.0", port=8000)
will not be executed.
Info
For more information, check the official Python docs.
Run your code with your debugger¶
Because you are running the Uvicorn server directly from your code, you can call your Python program (your FastAPI application) directly from the debugger.
For example, in Visual Studio Code, you can:
- Go to the "Debug" panel.
- "Add configuration...".
- Select "Python"
- Run the debugger with the option "
Python: Current File (Integrated Terminal)".
It will then start the server with your FastAPI code, stop at your breakpoints, etc.
Here's how it might look:
If you use Pycharm, you can:
- Open the "Run" menu.
- Select the option "Debug...".
- Then a context menu shows up.
- Select the file to debug (in this case,
main.py).
It will then start the server with your FastAPI code, stop at your breakpoints, etc.
Here's how it might look:
https://fastapi.tiangolo.com/tutorial/debugging/
|
Trying to improve my skills by creating a couple of scripts from scratch. I've been referencing an ancient script and can't figure out why something has been done the way it has. [code below]
I'm guessing it has been done to create some sort of template system. But what alternative would there be? As I understand it, this is not the best way to do it.
At the end of the script it runs everything through two functions, parse_if and parse:
$style_gallery = parse_if ( $style_gallery, 'there_is_album_description', $description != '' );
$style_gallery = parse_if ( $style_gallery, 'there_are_pages', $pages_links !== '' );
$style_gallery = parse_if ( $style_gallery, 'browse_albums', 1 );
$style_gallery = parse_if ( $style_gallery, 'view_image', 0 );
$style_gallery = parse_if ( $style_gallery, 'there_are_albums', $albums_count );
$style_gallery = parse_if ( $style_gallery, 'there_are_images', $images_count );
print parse ( $style_gallery, array ( '{navigation}' => $navigation, '{show_albums}' => $albums_table, '{show_images}' => $images_table, '{show_pages}' => $pages_links, '{show_album_description}' => $description ) );
function parse ( &$template, $data, $x = '' )
{
if ( !is_array ( $data ) )
{
return str_replace ($data, $x, $template);
}
else
{
return str_replace ( array_keys($data), array_values($data), $template );
}
}
function parse_if ( &$template, $varname, $varval, $false_replace = '' )
{
$$varname = $varval;
return preg_replace ( '#{if ' . $varname . '}(.+?){endif ' . $varname .'}#ise', '($' . $varname . ')?"$1":"$false_replace"', $template );
}
finally it is passed to a 'template'
$style_gallery = <<<EOT
<html>
..
</html>
EOT;
So has this been done solely for the benefit of the template? Is this a good way to go about it? What is the alternative? Hope this makes sense; I'm still a beginner.
This topic is now closed. New replies are no longer allowed.
http://community.sitepoint.com/t/string-eot/15224
|
JAVA Program for Square Star Pattern
Printing Square Star Pattern
In this Java program we will code a star box pattern with n stars in each of its rows and n stars in each column, hence an n x n square star pattern. The user will input a value that determines the number of rows and columns of the pattern, and then we will use for loops to print stars at the desired places.
Prerequisite:
Basic knowledge of Java language and loops
Algorithm:
- Take number of rows as input from the user (length of side of the square) and store it in any variable (‘n‘ in this case).
- Run an outer loop to iterate through the rows, from i=0 to i<n. The loop should be structured as for (int i = 0; i < n; i++).
- Run a nested loop inside the previous loop to iterate through the columns, from j=0 to j<n. The loop should be structured as for (int j = 0; j < n; j++).
- Print ‘*’ inside the nested loop to print ‘*’s in all the columns of a row.
- Move to the next line by printing a new line. System.out.println();
Code in Java:
import java.util.Scanner;

public class Pattern1 {
    public static void main(String[] args) {
        Scanner sc = new Scanner(System.in);
        System.out.println("Enter no");
        int n = sc.nextInt();
        for (int i = 0; i < n; i++) {
            for (int j = 0; j < n; j++)
                System.out.print("*");
            System.out.println();
        }
    }
}

This code is contributed by Shubham Nigam (Prepinsta Placement Cell Student)
Login/Signup to comment
One comment on “JAVA Program for Square Star Pattern”
Thank you prepinsta for publishing my code…
https://prepinsta.com/java-program/square-star-pattern/
|
- 28 Aug, 2020 1 commit
- 26 Aug, 2020 1 commit
We don't have any ARM64 CI Runner at this time.
- 25 Aug, 2020 1 commit
- 19 Aug, 2020 2 commits
- 17 Aug, 2020 1 commit
Also sort "VENDOR-VERSION-ARCH" variables.
- 12 Aug, 2020 1 commit
- 03 Jul, 2020 1 commit
- 27 Jun, 2020 1 commit
- 26 Jun, 2020 1 commit
Fedora's 'updates-testing' repository has a fixed version of the krb5 package. Let's install it for the sake of the 'tsiggss' and 'nsupdate' system tests.
- 25 Jun, 2020 2 commits
- 24 Jun, 2020 8 commits
Also drop other (unused) Linux distributions.
Pointing to repos in the $(CENTOS_MIRROR_BASE_URL)/7/ namespace means always using the latest CentOS 7 repos (7.8 at this point), even if the install ISO was CentOS 7.7. More version-specific repos should be used during the installation.
- 23 Jun, 2020 1 commit
Current backtraces from cores produced by system tests miss a lot of information on Alpine Linux. Adding the 'musl-dbg' package makes them on par with the rest of the C libraries. The added size is about 3 MB.
- 18 Jun, 2020 1 commit
- 17 Jun, 2020 2 commits
The combination of current CentOS/EPEL 7 versions of mock and GPGME triggers the following error when YUM invoked by mock running in a Docker container attempts to verify repository metadata: gpgme.GpgmeError: (7, 32870, u'Inappropriate ioctl for device') This prevents any CentOS 7 RPM packages from being built in GitLab CI. Work around the problem by forcing mock to run YUM without a PTY attached.
YUM for CentOS 6 does not support the "ip_resolve" configuration option, so the tweaks applied in the Dockerfile are not fully effective. Furthermore, hardcoded file paths make those tweaks prone to breaking due to package updates. It is also simpler to disable IPv6 globally after creating a Docker container than trying to patch all places where it can potentially be used at image build time.
- 03 Jun, 2020 2 commits
- 01 Jun, 2020 1 commit
- 25 May, 2020 1 commit
- 21 May, 2020 3 commits
The "tzdata" Ubuntu package is no longer a dependency of any of the packages we install on that system. Make sure it always gets installed by explicitly including it in the list of packages to install on Debian/Ubuntu, so that the "time_test" BIND unit test can pass.
As we plan to at least update the Python QA tools used for BIND in GitLab CI after each BIND release goes public, add Debian buster to the list of operating systems whose images are rebuilt on a monthly basis to make sure any potential issues caused by OS updates are caught in due time.
- Ondřej Surý authored and Michał Kępień committed
- 18 May, 2020 1 commit
With the introduction of Python-based system tests in BIND, all operating system images used in GitLab CI should now have pytest and the "requests" module installed. Additionally, the pip tool should also be available to facilitate prototyping tests in merge requests by eliminating the need to install pip in each CI job.
- 13 May, 2020 2 commits
- 29 Apr, 2020 6 commits
Otherwise, "pip install" fails.
One of the dependencies of the Cloudsmith CLI now needs this package.
The python-dnspython package was dropped from Debian "sid" and thus can no longer be installed on that system. Since this move is a part of a larger initiative to remove Python 2 from Debian, there is little sense in trying to implement Dockerfile workarounds for this specific package. Instead, remove Python 2 packages from all our Debian Docker images except the "buster" one (to retain some Python 2 test coverage for BIND branches other than "master").
docker/bind9/ubuntu-template symlinks to docker/bind9/debian-template, so GitLab CI will never trigger any Ubuntu jobs automatically unless the symlink itself changes. Ensure Ubuntu jobs are triggered when docker/bind9/debian-template is modified, to reflect actual Dockerfile dependencies.
https://gitlab.isc.org/mnowak/images/-/commits/main
|
Advanced mathematical computation using the Math module in Java:
The Math class is available in the java.lang package; it has two constants and more than 40 static methods to do some advanced mathematical operations easily.
In this tutorial, we will check how to use these methods and what those constants are:
Constants:
The Math module contains two constants: E and PI.
Math.E: the double value closest to 'e', the base of natural logarithms.
Math.PI: the double value closest to 'pi'.
Accessing Math module :
For printing these values and for using methods of the Math module we can use a static import, which allows us to access values and methods without writing 'Math.' in front of everything. The import statement looks like this:
import static java.lang.Math.*;
Now let’s print both of these constants :
import static java.lang.Math.*;

public class Test {
    public static void main(String[] args) {
        System.out.println("Math.E = " + E);
        System.out.println("Math.PI = " + PI);
    }
}
It will give us the following outputs :
Math.E = 2.718281828459045
Math.PI = 3.141592653589793
Find absolute value using Math module in java :
To find the absolute value, we can use the abs() method of the Math module. abs can take a double, float, int or long and returns the absolute value of the argument.
import static java.lang.Math.*;

public class Test {
    public static void main(String[] args) {
        double mDouble = 22.23456E231;
        long mLong = 92233720368547758L;
        float mFloat = 12.221123432f;
        int mInt = 133;
        System.out.println("abs of the double " + abs(mDouble));
        System.out.println("abs of the long " + abs(mLong));
        System.out.println("abs of the float " + abs(mFloat));
        System.out.println("abs of the int " + abs(mInt));
    }
}
Output :
abs of the double 2.223456E232
abs of the long 92233720368547758
abs of the float 12.221124
abs of the int 133
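One caveat the tutorial doesn't mention: for int and long, the most negative value has no positive counterpart, so abs silently overflows. A small example:

```java
public class AbsOverflow {
    public static void main(String[] args) {
        // Integer.MIN_VALUE is -2147483648; its positive counterpart
        // (2147483648) does not fit in an int, so abs overflows and
        // returns the argument unchanged.
        System.out.println(Math.abs(Integer.MIN_VALUE)); // prints -2147483648
    }
}
```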
Find ceil, floor, rint and round :
ceil, floor and rint each take one double as argument and return a double.
ceil: return value (double) >= argument
floor: return value (double) <= argument
rint: returns, as a double, the integer closest to the argument
round: takes a double or float as argument and returns the closest long or int respectively
import static java.lang.Math.*;

public class Test {
    public static void main(String[] args) {
        double mDouble = 22.24;
        System.out.println("ceil of 22.24 " + ceil(mDouble));
        System.out.println("floor of 22.24 " + floor(mDouble));
        System.out.println("rint of 22.24 " + rint(mDouble));
        System.out.println("round of 22.24 " + round(mDouble));
    }
}
Output :
ceil of 22.24 23.0
floor of 22.24 22.0
rint of 22.24 22.0
round of 22.24 22
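These four methods differ most visibly on negative values and on exact halves. A small sketch (not part of the original tutorial) showing the edge cases:

```java
public class RoundingEdgeCases {
    public static void main(String[] args) {
        System.out.println(Math.ceil(-2.5));   // -2.0 (toward positive infinity)
        System.out.println(Math.floor(-2.5));  // -3.0 (toward negative infinity)
        System.out.println(Math.rint(2.5));    // 2.0  (ties go to the even integer)
        System.out.println(Math.round(-2.5));  // -2   (ties round toward positive infinity)
    }
}
```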
Find minimum and maximum of two values :
The minimum and maximum of two values can be found using the 'min' or 'max' method of the Math module. Each takes two arguments to compare, and both arguments should be doubles, floats, ints or longs.
import static java.lang.Math.*;

public class Test {
    public static void main(String[] args) {
        double value1 = 22.24;
        double value2 = 22.25;
        System.out.println("Maximum of value1 and value2 is " + max(value1, value2));
        System.out.println("Minimum of value1 and value2 is " + min(value1, value2));
    }
}
Output :
Maximum of value1 and value2 is 22.25
Minimum of value1 and value2 is 22.24
Find exponential, logarithm, power and square root using Math module :
To find the exponential, logarithm, power and square root, the exp, log, pow and sqrt methods are used respectively. exp, log and sqrt take one double as argument and return the result as a double. pow takes two doubles: the first is the base and the second is the exponent.
import static java.lang.Math.*;

public class Test {
    public static void main(String[] args) {
        double value1 = 22.24;
        double value2 = 22.25;
        double value3 = 4;
        double value4 = 2;
        System.out.println("Exponential of value1 is " + exp(value1));
        System.out.println("logarithm of value2 is " + log(value2));
        System.out.println("Square root of value3 is " + sqrt(value3));
        System.out.println("Power of value4 as base and value3 as exponent is " + pow(value4, value3));
    }
}

Output :

Exponential of value1 is 4.557317409621067E9
logarithm of value2 is 3.1023420086122493
Square root of value3 is 2.0
Power of value4 as base and value3 as exponent is 16.0
Trigonometric functions in Math module of Java :
The Math module contains some basic trigonometric functions. In the example below, we will check how to find sine, cosine, tangent, arcsine, arccosine and arctangent, and how to convert an argument to degrees or radians. For sine, cosine and tangent, the methods are sin, cos and tan. Similarly for the arc values, asin, acos and atan are used. toDegrees converts an argument to degrees and toRadians converts an argument to radians. Each method takes one double argument.
import static java.lang.Math.*;

public class Test {
    public static void main(String[] args) {
        double value1 = 30.0;
        double value2 = 60.0;
        double value3 = 45.0;
        System.out.println("Sine of 30.0 is " + sin(toRadians(value1)));
        System.out.println("Cosine of 60.0 is " + cos(toRadians(value2)));
        System.out.println("Tangent of 45.0 is " + tan(toRadians(value3)));
        System.out.println("arcsine of sin(30) is " + toDegrees(asin(sin(toRadians(value1)))));
        System.out.println("arccosine of cos(60) is " + toDegrees(acos(cos(toRadians(value2)))));
        System.out.println("arctangent of tan(45.0) is " + toDegrees(atan(tan(toRadians(value3)))));
    }
}
Output :
Sine of 30.0 is 0.49999999999999994
Cosine of 60.0 is 0.5000000000000001
Tangent of 45.0 is 0.9999999999999999
arcsine of sin(30) is 29.999999999999996
arccosine of cos(60) is 59.99999999999999
arctangent of tan(45.0) is 45.0
Find random number using random() method :
We can get one random number in the range [0.0, 1.0) using the Math.random() method. To get a number in a different range, simply multiply the output by the upper bound. E.g., to get one random integer between 0 (inclusive) and 100 (exclusive), we will have to use the following:
(int)(Math.random() * 100)
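The same multiply trick extends to an arbitrary range with a non-zero lower bound. A sketch (the helper name randomInRange is illustrative, not part of the java.lang.Math API):

```java
public class RandomRange {
    // Returns a random int in [min, max], both bounds inclusive.
    static int randomInRange(int min, int max) {
        // Math.random() is in [0.0, 1.0), so the product is in
        // [0, max - min + 1), and the cast keeps the result in [min, max].
        return min + (int) (Math.random() * (max - min + 1));
    }

    public static void main(String[] args) {
        System.out.println(randomInRange(10, 20));
    }
}
```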
That's it. If you have any queries, do leave us a comment below. And please share this tutorial and website with your friends :)
Reference: Oracle docs
https://www.codevscolor.com/java-tutorial-1-java-math-module
|
Hello,
I am new to programming and I am trying to write a program that will open Internet Explorer and then add text to an input box and then press enter... here is an example:
#include <stdio.h>
#include <stdlib.h>

int main(void)
{
    int open_ie;
    open_ie = system("IEXPLORE.EXE");
    return 0;
}
When the window opens, I would like to add text to googles input box and press enter. I have tried using fputs(), but I don't think that I am targeting the open window properly (or if fputs() is even correct). I'm not sure how to press enter, or automate any keyboard functionality.
I understand, by reading the FAQ, that ShellExecute() would be a better option than using system(), but I have used system() in the past on Linux machines and I'm a little more familiar (emphasis on little).
Language: C
OS: Windows 2000
Compiler: Turbo C
Any advice would be very helpful.
http://cboard.cprogramming.com/windows-programming/34765-browser-help.html
|
Disclaimer: I gave a talk of the same title at Kubernetes Forum Delhi last week. You may watch the
video on YouTube if you prefer that. (Update: The original video was removed by CNCF. It has been reuploaded here) Additionally this post also serves as a reference for the commands used in the demos.
I have been programming for almost six years now and have used containers for nearly the entirety of that time. As tends to happen with programmers, curiosity got the better of me and I started asking around: what is a container? One of the answers was along the lines of the following:
“Oh they’re like virtual machines but only that they do not have their own kernel and share the host’s kernel.”
This led me to believe for a long time that containers are a lighter form of virtual machines. And they felt like magic to me. Only when I started digging into the internals of a container much later did I realize that this quote felt very true:
Any sufficiently advanced technology is indistinguishable from magic.
— Sir Arthur Charles Clarke, author of 2001: A Space Odyssey
And I have always tried to find a way of explaining things that look like the following at first or second glance:
And see if they can be explained in a much easier way:
And the first thing I learned is that there is really no such thing as a container at all. I found that what we know as a container is made up of two Linux primitives:
- Namespaces
- Control groups (cgroups)
Before we look into what they are and how they help form the abstraction known as a container, it is important to understand how new processes are created and managed in Linux. Let us take a look at the following diagram:
In the above diagram, the parent process can be thought of as an active shell session, and the child process can be thought of as any command being run in the shell, e.g. ls or pwd. Now, when a new command is run, a new process is created. This is done by the parent process by making a call to the function
fork. While it creates a new and independent process, it returns the process ID (PID) of the child process to the parent process that invoked the function
fork. And in due course of time, both the parent and the child can continue to execute their tasks and terminate. The child PID is important for the parent to keep track of the newly created process. We will come back to this later in this blog post. If you’re interested to go deeper into the semantics of
fork, I wrote a more detailed blog post in the past describing this and how to do that with code. You may read it here.
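Incidentally, an interactive shell is an easy place to watch this parent/child PID bookkeeping in action. A small sketch, assuming only a POSIX shell:

```shell
# '&' makes the shell fork a child process, and the parent shell
# receives the child's PID in the special parameter $! -- the same
# value fork() returns to the parent, as described above.
sleep 2 &
child=$!
echo "parent $$ forked child $child"
wait "$child"   # the parent waits for the child to terminate
```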
Namespaces
So now that have an idea about how new processes are created in Linux, let us try and understand what namespaces help us achieve.
Namespaces are an isolation primitive that helps us to isolate various types of resources. In Linux, it is possible to do this for seven different type of resources at the moment. They are, in no specific order:
- Network namespace
- Mount
- UTS or Hostname namespace
- Process ID or PID namespace
- Inter process communication or IPC namespace
- cgroup namespace
- User namespace
I won't go into detail about what each of them does in this post, as there is already a lot of literature on that and the man pages are possibly the best resource for them. Instead, I will try to explain network namespaces in this post and see how they help us to isolate network resources. But before that, it is important to note that by default each of these namespaces already exists in the system; they are called the host namespaces or the default namespaces. For example, the default network namespace in a system contains the network interface cards for WIFI and/or the ethernet port if there is one.
All the information about a process is contained under procfs, which is typically mounted on /proc. Running
echo $$ will give us the PID of the currently running process:
$ echo $$
448884
And if we look inside /proc/<PID>/ns we will see the list of namespaces used by that process. For example:
$ ls /proc/448884/ns -lh
total 0
lrwxrwxrwx 1 root root 0 Feb 23 19:00 cgroup -> 'cgroup:[4026531835]'
lrwxrwxrwx 1 root root 0 Feb 23 19:00 ipc -> 'ipc:[4026531839]'
lrwxrwxrwx 1 root root 0 Feb 23 19:00 mnt -> 'mnt:[4026531840]'
lrwxrwxrwx 1 root root 0 Feb 23 19:00 net -> 'net:[4026532008]'
lrwxrwxrwx 1 root root 0 Feb 23 19:00 pid -> 'pid:[4026531836]'
lrwxrwxrwx 1 root root 0 Feb 23 19:00 pid_for_children -> 'pid:[4026531836]'
lrwxrwxrwx 1 root root 0 Feb 23 19:00 user -> 'user:[4026531837]'
lrwxrwxrwx 1 root root 0 Feb 23 19:00 uts -> 'uts:[4026531838]'
For each namespace, there is a file which is a symbolic link[1] to the ID of the namespace. So for the network namespace, the ID of the namespace in the above example is net:[4026532008], where 4026532008 is the inode number. For two processes in the same namespace, this number is the same.
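This is easy to check from a shell: a process and a child it spawns stay in the same namespaces, so their symlinks resolve to the same ID. A quick sketch:

```shell
# The current shell and a child process it spawns share the same network
# namespace, so both commands print the same 'net:[inode]' value.
readlink "/proc/$$/ns/net"
sh -c 'readlink "/proc/$$/ns/net"'
```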
On Linux, to create a new namespace, we can use the unshare command (a thin wrapper around the system call of the same name). And to create a new network namespace we need to add the flag
-n. So in a shell session with root privileges, we will do:
# unshare -n
We can look into the
/proc/<PID>/ns directory to verify that we have indeed created a new namespace:
# ls -l /proc/$$/ns/net
lrwxrwxrwx 1 root root 0 Feb 23 18:46 /proc/447612/ns/net -> 'net:[4026533490]'
The namespace ID is different from what we see above for the host network namespace. And running the command
ip link after this will only show us the loopback interface:
# ip link
1: lo: <LOOPBACK> mtu 65536 qdisc noop state DOWN mode DEFAULT group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
If there are any network interfaces like the WIFI card or the ethernet port, they won’t show up at all. In fact, if we tried to run
ping 127.0.0.1, something we mostly take for granted to work won’t work either:
# ping 127.0.0.1
ping: connect: Network is unreachable
But why did the above happen? Let us try to understand that.
At first we created a new network namespace; that very act isolated us from the network resources in the default namespace. And the only interface available to us in this new namespace is the
loopback interface. However it does not have an IP address assigned to it yet, as a result of which
ping 127.0.0.1 does not quite work. This can be verified by running:
# ip address
1: lo: <LOOPBACK> mtu 65536 qdisc noop state DOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
This shows that not only does this interface not have an IP address at the moment, its state is also set to DOWN. Running the following commands will fix that:
# ip address add dev lo local 127.0.0.1/8
# ip link set lo up
At first we assigned the IP address
127.0.0.1 to that interface and set the state of the interface to
UP, thus making it available to listen for incoming network packets. And now
ping would work as expected:
# ping 127.0.0.1
PING 127.0.0.1 (127.0.0.1) 56(84) bytes of data.
64 bytes from 127.0.0.1: icmp_seq=1 ttl=64 time=0.020 ms
64 bytes from 127.0.0.1: icmp_seq=2 ttl=64 time=0.060 ms
64 bytes from 127.0.0.1: icmp_seq=3 ttl=64 time=0.071 ms
To understand the concept of isolation, we will go forward with trying to get this new network interface, (let’s call it CHILD) to talk to the host network namespace and vice versa.
To aid our understanding, we will set the
PS1 variable in this shell to something easily identifiable:
# export PS1="[netns: CHILD]# "
[netns: CHILD]#
And we will also spawn a new terminal with root access so that the shell running in it belongs to the host network namespace. Once again we will set the
PS1 variable to help with identifying the host namespace easily:
# export PS1="[netns: HOST]# "
[netns: HOST]#
Running the
ip link command in this namespace shows the network interfaces currently installed in the system. For example:
[netns: HOST]# ip link
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
3: enp0s31f6: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc fq_codel state DOWN mode DEFAULT group default qlen 1000
    link/ether 0e:94:18:de:da:b3 brd ff:ff:ff:ff:ff:ff
4: docker0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN mode DEFAULT group default
    link/ether 02:42:ad:0f:83:cc brd ff:ff:ff:ff:ff:ff
11: wlp61s0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN mode DORMANT group default qlen 1000
    link/ether fa:3d:a9:90:95:5d brd ff:ff:ff:ff:ff:ff
To list all the network namespaces in the system we can run:
[netns: HOST]# ip netns list
But that will produce an empty output if readers have been following along. So does that mean the command didn’t work, or that we did something wrong, even though we created a new network namespace earlier? The answer to both questions is no. Since everything is a file in UNIX[2], the
ip command looks for network namespaces in the directory
/var/run/netns. And currently that directory is empty. So we will first create an empty file and then try running that command again:
[netns: HOST]# touch /var/run/netns/child
[netns: HOST]# ip netns list
Error: Peer netns reference is invalid.
Error: Peer netns reference is invalid.
child
We do see the
child namespace in the list, but we also see an error. The error appears because we have not yet mapped the shell running in the new namespace to this file. To do that, we will bind mount the
/proc/<PID>/ns/net file to the new file we created above. This can be done by executing the following in the shell running the child network namespace:
[netns: CHILD]# mount -o bind /proc/$$/ns/net /var/run/netns/child
[netns: CHILD]# ip netns list
child
And this time the command to list the network namespaces works without any errors. This means that we have associated the namespace with the ID
4026533490 to the file at
/var/run/netns/child and the namespace is now persistent.
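The namespace ID seen above is simply the inode number behind the /proc/<pid>/ns/net symlink. A small Python sketch, runnable without root, that reads it for the current process (the exact inode number printed will of course differ from machine to machine):

```python
import os
import re

# Each entry under /proc/<pid>/ns/ is a symlink whose target encodes the
# namespace type and its inode number, e.g. "net:[4026531992]". Two
# processes are in the same network namespace exactly when these match.
target = os.readlink("/proc/self/ns/net")
match = re.fullmatch(r"net:\[(\d+)\]", target)
assert match is not None
print("network namespace id:", match.group(1))
```

Comparing this value between the HOST and CHILD shells is another quick way to confirm that they really live in different network namespaces.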
Now we need to find a way to get the host and the child network namespace to talk to each other. To do this, we will create a pair of virtual ethernet devices in the host network namespace:
[netns: HOST]# ip link add veth0 type veth peer name veth1
In this command we create a virtual ethernet device named
veth0 while the other end of this pair device is called
veth1. We can verify this by running:
[netns: HOST]# ip link | grep veth
35: veth1@veth0: <BROADCAST,MULTICAST,M-DOWN> mtu 1500 qdisc noop state DOWN mode DEFAULT group default qlen 1000
36: veth0@veth1: <BROADCAST,MULTICAST,M-DOWN> mtu 1500 qdisc noop state DOWN mode DEFAULT group default qlen 1000
At the moment, both of these devices exist in the host namespace. If we run
ip link in the child network namespace, it will only show the
loopback address as was the case previously:
[netns: CHILD]# ip link
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
So what can we do to make one of the veth devices show up in the child namespace? To do that, we will run the following command in the host network namespace, because that is where the veth devices currently exist:
[netns: HOST]# ip link set veth1 netns child
Here we are instructing the
veth1 network device to be assigned to the namespace
child. Looking at
ip link in this namespace will not show the
veth1 device any longer:
[netns: HOST]# ip link | grep veth
36: veth0@if35: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN mode DEFAULT group default qlen 1000
While on the other hand,
veth1 now appears in the child network namespace:
[netns: CHILD]# ip link | grep veth
35: veth1@if36: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN mode DEFAULT group default qlen 1000
We have two more steps before we can make them talk to each other: assign an IP address to each
veth device and to set the state to up. So let’s do that quickly:
[netns: HOST]# ip address add dev veth0 local 10.16.8.1/24
[netns: HOST]# ip link set veth0 up
We can verify the results of the commands with:
[netns: HOST]# ip address | grep veth -A 5
36: veth0@if35: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state LOWERLAYERDOWN group default qlen 1000
    link/ether 32:c7:79:c7:e2:e0 brd ff:ff:ff:ff:ff:ff link-netns child
    inet 10.16.8.1/24 scope global veth0
       valid_lft forever preferred_lft forever
And the same for the child namespace:
[netns: CHILD]# ip address add dev veth1 local 10.16.8.2/24
[netns: CHILD]# ip link set veth1 up
[netns: CHILD]# ip address | grep veth -A 5
35: veth1@if36: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
    link/ether 5a:62:dd:40:a6:f1 brd ff:ff:ff:ff:ff:ff link-netnsid 0
    inet 10.16.8.2/24 scope global veth1
       valid_lft forever preferred_lft forever
    inet6 fe80::5862:ddff:fe40:a6f1/64 scope link
       valid_lft forever preferred_lft forever
Finally, we should be able to ping each other:
[netns: HOST]# ping 10.16.8.2
PING 10.16.8.2 (10.16.8.2) 56(84) bytes of data.
64 bytes from 10.16.8.2: icmp_seq=1 ttl=64 time=0.086 ms
64 bytes from 10.16.8.2: icmp_seq=2 ttl=64 time=0.099 ms
64 bytes from 10.16.8.2: icmp_seq=3 ttl=64 time=0.100 ms
[netns: CHILD]# ping 10.16.8.1
PING 10.16.8.1 (10.16.8.1) 56(84) bytes of data.
64 bytes from 10.16.8.1: icmp_seq=1 ttl=64 time=0.057 ms
64 bytes from 10.16.8.1: icmp_seq=2 ttl=64 time=0.090 ms
64 bytes from 10.16.8.1: icmp_seq=3 ttl=64 time=0.118 ms
Voila! We did it! I hope that helps with understanding namespaces better. What we did above can be best described by this image of two children talking to each other with a string telephone made up of tin cans and a long string. In this image, the children can be thought of as the namespaces while the tin cans are analogous to the virtual ethernet devices we created and used for sending and receiving network traffic.
cgroups
Next up is cgroups. They help us control the amount of resources a process can consume, CPU and memory being the prime examples. The classic use case is to prevent a process from accidentally using up all the available CPU or memory and choking the rest of the system. The cgroups reside under the
/sys/fs/cgroup directory. Let us take a look at the contents:
# ls /sys/fs/cgroup/ -lh
total 0
dr-xr-xr-x 5 root root  0 Feb 17 01:05 blkio
lrwxrwxrwx 1 root root 11 Feb 17 01:05 cpu -> cpu,cpuacct
lrwxrwxrwx 1 root root 11 Feb 17 01:05 cpuacct -> cpu,cpuacct
dr-xr-xr-x 5 root root  0 Feb 17 01:05 cpu,cpuacct
dr-xr-xr-x 2 root root  0 Feb 17 01:05 cpuset
dr-xr-xr-x 5 root root  0 Feb 17 01:05 devices
dr-xr-xr-x 2 root root  0 Feb 17 01:05 freezer
dr-xr-xr-x 2 root root  0 Feb 17 01:05 hugetlb
dr-xr-xr-x 9 root root  0 Feb 20 00:24 memory
lrwxrwxrwx 1 root root 16 Feb 17 01:05 net_cls -> net_cls,net_prio
dr-xr-xr-x 2 root root  0 Feb 17 01:05 net_cls,net_prio
lrwxrwxrwx 1 root root 16 Feb 17 01:05 net_prio -> net_cls,net_prio
dr-xr-xr-x 2 root root  0 Feb 17 01:05 perf_event
dr-xr-xr-x 5 root root  0 Feb 17 01:05 pids
dr-xr-xr-x 2 root root  0 Feb 17 01:05 rdma
dr-xr-xr-x 5 root root  0 Feb 17 01:05 systemd
dr-xr-xr-x 5 root root  0 Feb 17 01:06 unified
Each directory is a resource whose usage can be controlled. To create a new
cgroup, we need to create a new directory inside one of these resources. For example, if we intended to create a new
cgroup to control memory usage, we would create a new directory (the name is up to us) under the
/sys/fs/cgroup/memory path. So let’s do that:
# mkdir /sys/fs/cgroup/memory/demo
And let us take a look inside this directory. If you’re thinking why bother because we just created the directory and it should be empty, read on:
# ls -lh /sys/fs/cgroup/memory/demo/
total 0
-rw-r--r-- 1 root root 0 Feb 24 12:29 cgroup.clone_children
--w--w--w- 1 root root 0 Feb 24 12:29 cgroup.event_control
-rw-r--r-- 1 root root 0 Feb 24 12:29 cgroup.procs
-rw-r--r-- 1 root root 0 Feb 24 12:29 memory.failcnt
--w------- 1 root root 0 Feb 24 12:29 memory.force_empty
-rw-r--r-- 1 root root 0 Feb 24 12:29 memory.kmem.failcnt
-rw-r--r-- 1 root root 0 Feb 24 12:29 memory.kmem.limit_in_bytes
-rw-r--r-- 1 root root 0 Feb 24 12:29 memory.kmem.max_usage_in_bytes
-r--r--r-- 1 root root 0 Feb 24 12:29 memory.kmem.slabinfo
-rw-r--r-- 1 root root 0 Feb 24 12:29 memory.kmem.tcp.failcnt
-rw-r--r-- 1 root root 0 Feb 24 12:29 memory.kmem.tcp.limit_in_bytes
-rw-r--r-- 1 root root 0 Feb 24 12:29 memory.kmem.tcp.max_usage_in_bytes
-r--r--r-- 1 root root 0 Feb 24 12:29 memory.kmem.tcp.usage_in_bytes
-r--r--r-- 1 root root 0 Feb 24 12:29 memory.kmem.usage_in_bytes
-rw-r--r-- 1 root root 0 Feb 24 12:29 memory.limit_in_bytes
-rw-r--r-- 1 root root 0 Feb 24 12:29 memory.max_usage_in_bytes
-rw-r--r-- 1 root root 0 Feb 24 12:29 memory.memsw.failcnt
-rw-r--r-- 1 root root 0 Feb 24 12:29 memory.memsw.limit_in_bytes
-rw-r--r-- 1 root root 0 Feb 24 12:29 memory.memsw.max_usage_in_bytes
-r--r--r-- 1 root root 0 Feb 24 12:29 memory.memsw.usage_in_bytes
-rw-r--r-- 1 root root 0 Feb 24 12:29 memory.move_charge_at_immigrate
-r--r--r-- 1 root root 0 Feb 24 12:29 memory.numa_stat
-rw-r--r-- 1 root root 0 Feb 24 12:29 memory.oom_control
---------- 1 root root 0 Feb 24 12:29 memory.pressure_level
-rw-r--r-- 1 root root 0 Feb 24 12:29 memory.soft_limit_in_bytes
-r--r--r-- 1 root root 0 Feb 24 12:29 memory.stat
-rw-r--r-- 1 root root 0 Feb 24 12:29 memory.swappiness
-r--r--r-- 1 root root 0 Feb 24 12:29 memory.usage_in_bytes
-rw-r--r-- 1 root root 0 Feb 24 12:29 memory.use_hierarchy
-rw-r--r-- 1 root root 0 Feb 24 12:29 notify_on_release
-rw-r--r-- 1 root root 0 Feb 24 12:29 tasks
It turns out the operating system creates a whole bunch of files inside every new cgroup directory. Let us take a look at one of the files:
# cat /sys/fs/cgroup/memory/demo/memory.limit_in_bytes
9223372036854771712
The value in this file dictates the maximum memory that a process can use if it is part of this cgroup. Let us set this value to a much smaller number, say 4MB, but in bytes:
# echo 4000000 > /sys/fs/cgroup/memory/demo/memory.limit_in_bytes
And let us look inside this file:
# cat /sys/fs/cgroup/memory/demo/memory.limit_in_bytes
3997696
While this is not exactly what we wrote into the file, it is approximately 3.99 MB. My guess is that this has something to do with memory alignment, which is managed by the operating system. I haven’t researched this further at the moment. (If you know the answer, please let me know!)
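One plausible explanation, offered here as an assumption to verify against the kernel's memory cgroup documentation rather than a confirmed fact: the kernel rounds the limit down to a whole number of memory pages (4096 bytes on most x86-64 systems). The arithmetic matches the observed value exactly:

```python
# Assumption: memory.limit_in_bytes is rounded DOWN to a page boundary.
PAGE_SIZE = 4096        # typical page size; check yours with `getconf PAGESIZE`
requested = 4_000_000   # the value we echoed into the file

effective = (requested // PAGE_SIZE) * PAGE_SIZE
print(effective)  # 3997696 -- exactly what we read back from the file

assert effective == 3997696
assert effective % PAGE_SIZE == 0  # a whole number of pages (976 of them)
```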
Now let us start a new process in a new hostname namespace:
# unshare -u
This starts a new shell process. Let us try to run a command, like
wget, which I know needs more than 4 MB of memory to function:
# wget wikipedia.org
URL transformed to HTTPS due to an HSTS policy
--2020-02-24 12:36:58--
Loaded CA certificate '/etc/ssl/certs/ca-certificates.crt'
Resolving wikipedia.org (wikipedia.org)... 103.102.166.224, 2001:df2:e500:ed1a::1
Connecting to wikipedia.org (wikipedia.org)|103.102.166.224|:443... connected.
HTTP request sent, awaiting response... 301 Moved Permanently
Location: [following]
--2020-02-24 12:36:58--
Resolving ()... 103.102.166.224, 2001:df2:e500:ed1a::1
Connecting to ()|103.102.166.224|:443... connected.
HTTP request sent, awaiting response... 200 OK
Length: 76776 (75K) [text/html]
Saving to: ‘index.html’

index.html 100%[============================================>] 74.98K 362KB/s in 0.2s

2020-02-24 12:36:59 (362 KB/s) - ‘index.html’ saved [76776/76776]
Notice that the command worked. That is because this shell process is still part of the default cgroup. To make it part of the new cgroup, we need to write its PID to the
cgroup.procs file:
# echo $$ > /sys/fs/cgroup/memory/demo/cgroup.procs
And let us look inside the contents of this file:
# cat /sys/fs/cgroup/memory/demo/cgroup.procs
468401
468464
There seems to be two entries here. The first entry is the PID of the shell process that we wrote to the file. The other is the PID of the
cat process that we ran. This is because all child processes are part of the same cgroup as their parent by default. Once a process terminates, its PID is automatically removed from the file. If we run the same command again, we will still find two entries, but the second one will be different:
# cat /sys/fs/cgroup/memory/demo/cgroup.procs
468401
468464
And now let us try to run the
wget command once again:
# wget wikipedia.org
URL transformed to HTTPS due to an HSTS policy
--2020-02-24 12:44:26--
Killed
The process gets killed immediately because it was trying to use more memory than the cgroup it is part of currently permits. Pretty neat I’d say.
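A related check that needs no root privileges at all: every process reports its own cgroup membership in /proc/self/cgroup. A short Python sketch that parses it:

```python
# Each line of /proc/self/cgroup has the form
#   hierarchy-id:controller-list:cgroup-path
# On cgroup v1 there is one line per mounted controller; on cgroup v2
# there is a single line of the form "0::/some/path".
with open("/proc/self/cgroup") as f:
    for line in f:
        hierarchy, controllers, path = line.rstrip("\n").split(":", 2)
        print(f"hierarchy={hierarchy} controllers={controllers or '(v2)'} path={path}")
```

Running this before and after echoing the shell's PID into cgroup.procs is an easy way to watch the membership change.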
Addendum
So while
namespaces and
cgroups allow us to isolate resources and control their usage, and form the core of the abstraction popularly known as containers, there are two more concepts used to enhance the isolation further:
Capabilities: These limit the use of root privileges. Sometimes a process needs elevated permissions for one specific task, but running it as root is a security risk because the process could then do pretty much anything to the system. Capabilities provide a way of assigning specific privileges to a process without giving it system-wide root privileges. For example, if a program needs to manage network interfaces and related operations, we can grant it the capability
CAP_NET_ADMIN.
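To see which capabilities a process currently holds, no special tooling is needed: the kernel exposes the effective set as a hex bitmask in /proc/self/status. A small Python sketch (the bit index 12 for CAP_NET_ADMIN comes from the capabilities(7) man page):

```python
# CapEff in /proc/self/status is a 64-bit hex mask of effective capabilities.
CAP_NET_ADMIN = 12  # bit index per capabilities(7)

with open("/proc/self/status") as f:
    for line in f:
        if line.startswith("CapEff:"):
            cap_eff = int(line.split()[1], 16)
            print(f"CapEff = {cap_eff:#018x}")
            print("has CAP_NET_ADMIN:", bool((cap_eff >> CAP_NET_ADMIN) & 1))
            break
```

An unprivileged shell will typically print an all-zero mask, while a root shell shows all bits set.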
Seccomp: This limits the use of syscalls. To tighten security even further, it is possible to block syscalls that could cause additional harm. For example, blocking the
kill syscall will prevent a process from terminating or sending signals to other processes.
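Whether seccomp is active for a process can likewise be read back from /proc/self/status; on modern kernels the Seccomp field is 0 (disabled), 1 (strict) or 2 (filter). A quick sketch:

```python
# Seccomp mode of the current process: 0 = disabled, 1 = strict, 2 = filter.
with open("/proc/self/status") as f:
    for line in f:
        if line.startswith("Seccomp:"):
            mode = int(line.split()[1])
            print("seccomp mode:", {0: "disabled", 1: "strict", 2: "filter"}[mode])
            break
```

Processes inside a default Docker container, for instance, usually report mode 2 because the runtime installs a syscall filter.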
Recap
So while
namespaces allow us to isolate the type of resource,
cgroups help us to control the amount of resource usage by a process. And
capabilities limit the use of root privileges by breaking down operations into different types of capabilities. Finally
seccomp helps to block processes from invoking unwanted syscalls. These concepts combined form a container, which is a nicer abstraction than having to worry about all of these mechanisms at the same time.
One final note
The diagram about
fork earlier in this post is slightly incomplete. Here is a more complete diagram:
As noted earlier
fork returns the child’s PID to the parent process, and it uses this PID to “wait” for the child process to finish execution. This is done by the
waitpid syscall. Reaping, as this is known, is important to avoid zombie processes: once a child process has terminated, it is the responsibility of the parent to ensure any resources allocated for it are cleaned up. In a nutshell, this is the job of a container runtime or container engine: it spawns new containers as child processes and ensures their resources are cleaned up once the container has terminated.
References
I found a lot of information about
namespaces in this amazing seven-part series on lwn.net.
Julia Evans’ post on What even is a container is a brilliant guide for grasping the concepts quickly:
man pages for
namespaces,
unshare and
cgroups have been very helpful as well and are recommended reading.
That is all for now. I hope this post was helpful and containers don’t feel like magic anymore.
Footnotes:
[1]: Quoting the man page for
namespaces: “In Linux 3.7 and earlier, these files were visible as hard links. Since Linux 3.8, they appear as symbolic links.”
[2]:
https://indradhanush.github.io/blog/life-of-a-container/
For.
This solution is a variation of your option 1, but without having to
maintain multiple copies of the DLLs.
Add a new folder to your Solution1 (say Third_Party_DLL). Right click the
folder -> Add existing item, and instead of adding the DLLs physically here,
create a link to these third-party DLLs from the location where you have
saved them. Now after adding the DLL links to the folder, add references to
the DLLs in your project from this folder (Third_Party_DLL).
Now when you get the latest of the solution file, it should get
your DLLs too. I haven't tried this out myself for DLLs, but have done this
for maintaining a single copy of the AssemblyVersion file and it worked
well. Try it and let me know.
Add Vs Add Link
You can set up a custom probing path for your 3rd party assembly. Please
note that this approach won't work with ASP.NET applications (they don't
support custom probing paths).
You could try to use a tool which changes the pages in your browser
on-the-fly, for example Greasemonkey.
Another approach is to use a scraping library, something like jsoup
(others and comparison).
JSON.NET is a library for serializing and deserializing .NET objects to JSON.
It has nothing to do with sending HTTP requests. You could use a WebClient
for this purpose.
For example here's how you could call the API:
string url =
"";
using (var client = new WebClient())
{
client.Headers[HttpRequestHeader.Authorization] = "Bearer
6bfd44fbdcdddc11a88f8274dc38b5c6f0e5121b";
client.Headers[HttpRequestHeader.ContentType] = "application/json";
client.Headers["X-IFORM-API-REQUEST-ENCODING"] = "JSON";
client.Headers["X-IFORM-API-VERSION"] = "1.1";
MyViewModel model = ...
string jsonSerializedModel = JsonConvert.SerializeObject(model); // <--
Only here you need JSON.NET to serialize your model to a JSON string
The only way to launch ProCamera is to ensure it has its own custom URL
scheme. If there is none, you, unfortunately, are unable to launch it from
your app.
Update: try this one procameraapp://
You can put third party JARs in a lib/ folder in the EAR and add
<library-directory>lib</library-directory>
to your application.xml.
import requests
s = requests.session()
r = s.get('')
r = s.get('')
for cookie in s.cookies:
print(cookie)
Using: Selenium + PhantomJS
from selenium import webdriver
cookie_file_path = 'cookie.txt'
args = ['--cookies-file={}'.format(cookie_file_path)]
driver = webdriver.PhantomJS(service_args=args)
driver.get('')
driver.get('')
with open(cookie_file_path) as f:
print(f.read())
Output (wrapped):
[General]
cookies="@Variant(x7fx16QList<QNetworkCookie>x1a
xd6NID=67=SZetUV-oLq_M8ik40VT2GEIb45LMaXkhm6H3zx1wULO52qkCHPc9AML_p5eubW4zL
Ms158YAYKQTdCJzb4mInix_Zek6P8Ej1XZh9h5Ng3I7X4gZuE_S-Fl2YpaSYd9B; HttpOnly;
expir
es=Wed, 18-Dec-2013 02:44:31 GMT; domain=.google.co.kr; path=/
Try following a JNI tutorial like this one :
First, follow the tutorial to get Java methods that can call into C++
methods.
Then, modify the C++ methods to be similar to your code above, calling in
to your third party DLL.
Yes, R can load it using dyn.load. You may or may not be able to actually
call the functions it exports, though. Unless the functions' arguments
correspond to what R can handle, they won't be usable. If this is the case,
you can write a wrapper dll that acts as a translation layer between it and
R.
No, it's impossible.
You can only read cookies that come from your own domain. The other cookies
are not even sent to your server.
If you could read every cookies the user has, you would be able to obtain
personal information such as their facebook password, connection to their
bank etc.
It would be a serious breach of privacy, and as such, browsers do not send
cookies except those from the same domain.
It's hard to say for certain since you haven't provided code for your
service but my guess is that the call to your service method is returning
before your blackbox component fires the event. One thing you could look
at would be using a WCF duplex service which would allow you to publish an
event from the server to the client.
WCF Duplex Services
Try this,
Try modifying the amount filter textbox in the code below.
$('.k-widget,.k-numerictextbox').find('span').find('input:eq(0)').live("focus",
function (e) {});
You don't need to enable SSL on your entire website; you can enable it for
specific pages using .htaccess.
Try this:
If you are a little familiar with mod_rewrite and regex, you should
have no problem reading these rules -- comments explain what each
rule does; the rest is regex basics:
Options +FollowSymLinks -MultiViews
RewriteEngine On
RewriteBase /
# force https for /login.php and /register.php
RewriteCond %{HTTPS} =off
RewriteRule ^(login|register)\.php$ https://%{HTTP_HOST}%{REQUEST_URI} [R=301,L]
# don't do anything for images/css/js (leave protocol as is)
RewriteRule \.(gif|jpe?g|png|css|js)$ - [NC,L]
# force http for all other URLs
RewriteCond %{HTTPS} =on
RewriteCond %{REQUEST_URI} !^/(login|register)\.php$
RewriteRule .* http://%{HTTP_HOST}%{REQUEST_URI} [R=301,L]
Securing a tenant's client access
Securing a tenant's 3rd party client access to your Javascript poses a
unique set of challenges. Most of the difficulty in this solution stems
from the fact that the authentication mechanism must be present in the
tenants web content and delivered from their clients browser. Standard
client<>server authentication mechanisms such as sessions, cookies,
custom headers, referrers and IP address restriction do not apply well due
to the extended nature of the transaction.
This article by Bill Patrianakos provides a solution by using a dynamic key
request that provides an access token to the tenant's client.
Patrianakos provides some good information on Third Party tenant
relationships and discusses some the limitations of this model in his
article.
You are passing a StringBuffer to something that is expecting a particular
structure, as per
You should take a look at Getting Text from SysListView32 in 64bit
Place this:
$("select > option[value*='FL']").remove();
Inside this:
$("#country").on("change", function(){
$("#fields_state > option[value*='FL']").remove();
});
That'll only remove when the select is changed. Put that in your JS and
you'll be golden.
EDIT
The best solution would be to remove the onchange all together. Then place
the code in your JS, like so:
$(document).ready(function() {
$("#country").on("change", function(){
ChangeCountryAndStateLabel({
'obj' : this,
'targetObj' : '#state_cus1',
'sel' : '',
'id' : 'fields_state'
},{
'obj' : this,
'targetObj' : '#fields_state_region',
'template' : '<LABEL>'
});
$("#fields_stat
The command and its parameters must be separate strings; do not join them
into a single string.
At a closer look, the error message actually is clear about it (note where
the quotes are):
Command execution failed: "C:\PATH\TO\3rdparty.exe --flags-omitted" file
does not exist.
Put this into your items: {}
new Ext.Panel({id: 'someID', title: 'someTitle', html: '<your
html>', ...})
I didn't test it, but it should work.
Another way, which I'd prefer, is using loader: {url: <url to the site you
want to show>, renderer: 'frame', ...} instead of html in the panel.
It's a user setting which keyboard they use. It would be terribly
frustrating if every app could force its own keyboard on the users.
You can start an intent to open the keyboard settings, to give people the
option to use your keyboard.
There is no supported way of doing this.
There are methods to snoop users' browser history and other sensitive user
information, but almost all these are either browser vulnerabilities,
XSS vulnerabilities or just plain privacy violations. You ought not to base
anything that is meant to be sustainable on these methods, as they are
likely to be fixed, and in many cases their use is illegal.
You certainly cannot stop a method that is already running on another
computer (through a webservice). The cancellation token for tasks can only
stop the task from starting when the token is set. Once the task has
started there is no way back for the same reason that Thread.Abort should
(almost) never be used.
You're a bit unclear on what exactly you're doing, and how it fails.
If you're trying to use the framework from a .cpp file, you need indeed to
also set QMAKE_CXXFLAGS += -F/Library/Frameworks, and possibly also
QMAKE_CFLAGS (the example in the bug report assumes the framework exposes
Objective-C bindings. I will correct that)
Try this, and re-open the bug report with logs and ways to reproduce if it
still fails.
(The missing highlight of Qt Creator is not relevant, they are both still
qmake variables.)
What you're thinking of is called a URL-Scheme.
I tried to look up the info, but I think that the developers at Flickr
didn't think this through. An example post of my search is right here.
It seems that Flickr only has the flickr:// scheme, which opens the app,
but not more than that. It's a shame.
You should not include your 3rd party app urls anywhere so remove that line
from your root urls, instead append this to your Project model:
@app_models.permalink
def get_absolute_url(self):
return ('project_detail', 'zipfelchappe.urls', (), {
'slug': self.slug,
})
Or whatever name your detail view can be reversed by (see your zipfelchappe
urls module)
This will make the views available to the feincms system as if the current
page is the app's root. If this is not what you intend to do you shouldn't
bother integrating the app within the cms page structure at all.
You could however "inject" an url within your feincms navigation by using a
PagePretender as explained in the documentation.
Apologies in advance if I misunderstood your question, but it sounds like
you'd like to use JavaScript from another location on your site.
Using the example above, here's what that would look like:
<html>
<head>
<title>Title of the document</title>
</head>
<body>
<p>The content of the document......</p>
The <a href="">link</a> of the
document ......
<script type="text/javascript"
src=""></script>
</body>
</html>
You could also link to it in the <head> instead, but it's better for
performance if the scripts are placed in the footer.
The first step is to identify an encryption scheme that all three systems
support. I think AES-256 is a good scheme but it's not supported directly
by Python (it needs the PyCrypto module).
You should not try to implement your own scheme; cryptography is hard and
even harder to get right. Just to give you an example: It took the world's
experts on cryptography several years to build AES. So unless you're
smarter than the whole world, your algorithm will be flawed.
That also means you might have to resort to an external library when a good
scheme isn't supported by all three runtimes since writing a good
cryptographic algorithm is also very hard.
After you have identified a schema, encode some test data to make sure each
system can properly en- and decode it. These unit tests will make d
That link you provided is actually not a Google Maps API application. It is
a shared map via Google Maps - which has nothing to do with the javascript
API. If you see the end of the URL you'll see the parameter
&output=embed. Remove that from the URL and it should take you to the
original map created on Google Maps itself, in here you can export the
points via the KML download link.
Why did you place your library into a package? That's not how you do it. Do
this instead:
Go to project properties by right clicking on project.
Then click on Libraries tab, You will see Compile, Run, Compile Tests, Run
Tests tabs.
Click on Compile tab (the first tab, selected by default)
Click on Add JAR/Folder button at right
Then browse and select the jar file (not the folder) you want to include.
Included jar file will show on the following box of Compile tab.
Click on OK button.
Finished.
Than it will be added to you project and classpath.
You imported a complete folder into the project, but you only have a single
jar file.
http://www.w3hello.com/questions/-Best-3rd-party-ASP-NET-Datagrid-
Python Chat Bot Tutorial – AI Chatbot with Deep Learning (BONUS)
This is just a quick bonus video for any of you interested in some of the applications of the chat bot. I show you how I’ve used it in my Discord server and how to add what's known as a confidence score for our bot's responses. This way the bot can give a reasonable answer even if it's not sure what the user is saying. Tutorials
– Python AI Chatot
– AI chat bot tutorial
Source
Comment List
Hope you guys enjoyed the series! Let me know what you want to see next 🙂
How you gets 1.0 accuracy
This is awesome. Thank you so much for your high quality videos and strong content.
Nice, How to detect date while I’m passing as string example “set remainder on 25th November” can you extract date as 25/11/2020 it’s possible if possible please make one video thank you
bro help me iam getting this error
C:\Users\HP\Documents\My Bots>bot.py
C:\Users\HP\Documents\My Bots\bot.py:39: SyntaxWarning: "is" with a literal. Did you mean "=="?
words = [stemmer.stem(w.lower()) for w in words if w is "?"]
2020-11-11 09:46:41.514314: W tensorflow/stream_executor/platform/default/dso_loader.cc:59] Could not load dynamic library 'cudart64_101.dll'; dlerror: cudart64_101.dll not found
2020-11-11 09:46:41.518000: I tensorflow/stream_executor/cuda/cudart_stub.cc:29] Ignore above cudart dlerror if you do not have a GPU set up on your machine.
Traceback (most recent call last):
File "C:\Users\HP\Documents\My Bots\bot.py", line 6, in <module>
import tflearn
File "C:\Users\HP\AppData\Local\Programs\Python\Python38\lib\site-packages\tflearn\__init__.py", line 4, in <module>
from . import config
File "C:\Users\HP\AppData\Local\Programs\Python\Python38\lib\site-packages\tflearn\config.py", line 5, in <module>
from .variables import variable
File "C:\Users\HP\AppData\Local\Programs\Python\Python38\lib\site-packages\tflearn\variables.py", line 7, in <module>
from tensorflow.contrib.framework.python.ops import add_arg_scope as contrib_add_arg_scope
ModuleNotFoundError: No module named 'tensorflow.contrib'
Thanks! Could finish your tutorial, very well explained 🙂
really helpful thankyou Tim!
Can you please add a thinking possibility to this bot, so he has some kind of memory so he knew what was said before. And how to deploy it on a website
Can we download this from github?
If any of you want to know how to deploy this on Discord then go to my github repository:
Make sure you read the README if you want to make it work!
how to connect that python code with discord could you please tell me
best one.. very clear and conscise..
@Tech With Tim please upload this chat bot tutorial code in github
Module nltk has not attribute word error
I am up to this step now but whenever I add a new tag, the chatbot does not understand. Any suggestions please?
Im getting this error, am I missing something?
File "main.py", line 113, in chat
print(random.choice(responses))
UnboundLocalError: local variable 'responses' referenced before assignment
I mean, it makes sense, its just assigned in the if statement, but i dont get why is it not entering the if 🙁
Isabel De Clercq, founder of connect|share|lead talks about leadership in lockdown on Engati CX. She believes that leaders should talk about what they have learned on the digital channels. She says that leaders also need to ask what have their employees learnt in the meetings, they need encouragement.
Awesome project! I know I am a little late, but I was thinking, would it be possible to tell the bot to add an intent? And obviously, not just tell him, but he should add it as well, so then you wouldn't have to add an intent manually, then it would feel more like an AI and you would be able to actually learn him things. Thank you!
How to add Url to json file? For example when a user says Google.. Then the response of the bot to open the Url
Great work sir.Wanted a video on how to integrate a bot into some application. !
7:38 actually
7:48 great work
Hi Tim,
Do you have the code where you implemented this code into your discord server?
I enjoyed the series very much! Thank you for the brilliant tutorials and content! I'm am able to kickstart my school project because of this. I have to add additional functionality to make it into chatbot for E-Learning. >:)
Do you have any videos on how to create an intents file from data. like a twitter dump or reddit comments?
This is among the best tutorials i have watched but how do implement it in the browser with HTML and CSS
This notebook contains a demonstration of new features present in the 0.49.0 release of Numba. Whilst release notes are produced as part of the
CHANGE_LOG, there's nothing like seeing code in action! This release contains some significant changes to Numba internals and, of course, some exciting new features!
This release drops support for Python 2 both for users and in the code base itself. It also raises the minimum supported versions of related software as follows:
It's still possible to build with NumPy 1.11 but runtime support is for 1.15 or later.
The core developers would like to offer their thanks to all users for their understanding and support with respect to these changes. If you need help migrating your code base to 0.49 due to this refactoring, try one of:
The new features are split into sections based on use case...
New built-in support includes ord and chr, demonstrated below.
First, import the necessary from Numba and NumPy...
from numba import jit, njit, config, __version__, prange
from numba.typed import List
config.NUMBA_NUM_THREADS = 4  # for this demo, pretend there's 4 cores on the machine
from numba.extending import overload
import numba
import numpy as np

assert tuple(int(x) for x in __version__.split('.')[:2]) >= (0, 49)
Numerous users have asked for the ability to dynamically control, at runtime, the number of threads Numba uses in parallel regions. Numba 0.49 brings this functionality, it is modelled after
OpenMP as this is a model familiar to a lot of users. Documentation is here.
The API consists of two functions:
numba.get_num_threads()- returns the number of threads currently in use.
numba.set_num_threads(nthreads)- sets the number of threads to use to
nthreads.
These functions are themselves thread and fork safe, and are available to call from both Python and JIT compiled code!
For those interested, the implementation details are here, as a warning, they are somewhat gnarly!
Now, a demonstration:
from numba import get_num_threads, set_num_threads

# Discover thread mask from Python
print("Number of threads: {}".format(get_num_threads()))

# Set thread mask from Python
set_num_threads(2)

# Check it was set
print("Number of threads: {}".format(get_num_threads()))

@njit
def get_mask():
    print("JIT code, number of threads", get_num_threads())

# Discover thread mask from JIT code
get_mask()

@njit
def set_mask(x):
    set_num_threads(x)
    print("JIT code, number of threads", get_num_threads())

# Set thread mask from JIT code
set_mask(3)
Something more complicated, limiting threads in use:
@njit(parallel=True)
def thread_limiting():
    n = 5
    mask1 = 3
    mask2 = 2
    # np.zeros is parallelised, all threads are in use here
    A = np.zeros((n, mask1))
    # only use mask1 threads in this parallel region
    set_num_threads(mask1)
    for i in prange(mask1):
        A[:, i] = i
    # only use mask2 threads in this parallel region
    set_num_threads(mask2)
    A[:, :] = np.sqrt(A)
    return A

print(thread_limiting())

# Uncomment and run this to see the parallel diagnostics for the function above
# thread_limiting.parallel_diagnostics(thread_limiting.signatures[0], level=3)
It should be noted that once in a parallel region, setting the number of threads has no effect on the region that is executing, it does however impact subsequent parallel region launches. For example:
mask = config.NUMBA_NUM_THREADS - 1  # create a mask

# some constants based on mask size
N = config.NUMBA_NUM_THREADS
M = 2 * config.NUMBA_NUM_THREADS

@njit(parallel=True)
def child_func(buf, fid):
    M, N = buf.shape
    for i in prange(N):
        # parallel write into the row slice
        buf[fid, i] = get_num_threads()

@njit(parallel=True)
def parent_func(nthreads):
    acc = 0
    buf = np.zeros((M, N))
    print("Parent: Setting mask to:", nthreads)
    set_num_threads(nthreads)  # set threads to mask
    print("Parent: Running parallel loop of size", M)
    for i in prange(M):
        local_mask = 1 + i % mask
        # set threads in parent function
        set_num_threads(local_mask)
        # only call child_func if your thread mask permits!
        if local_mask < N:
            child_func(buf, local_mask)
        # add up all used threadmasks
        print("prange index", i, ". get_num_threads()", get_num_threads())
        acc += get_num_threads()
    return acc, buf

print("Calling with mask: {} and constants M = {}, N = {}".format(mask, M, N))
got_acc, got_buf = parent_func(mask)
print("got acc = {}".format(got_acc))
# expect sum of local_masks in prange(M) loop
print("expect acc = {}".format(np.sum(1 + np.arange(M) % mask)))
# Output `buf` should only be written to in rows with index < N as
# the thread mask would forbid it, the contents of the rows is the thread mask
print(got_buf)
For quite some time Numba has been able to pass around Numba JIT decorated functions as objects; these, however, have been seen by Numba as different types even if they have identical signatures. Numba 0.49.0 brings a new experimental feature that makes function objects first-class types, such that functions with the same signature can be seen as being "of the same type" for the purposes of type inference. Further,
cfuncs, JIT functions and a new "Wrapper address protocol" based functions are all supported to some degree. Documentation is here.
An example:
@njit("intp(intp)")
def foo(x):
    return x + 1

@njit("intp(intp)")
def bar(x):
    return x + 2

@njit("intp(intp)")
def baz(x):
    return x + 3

@njit
def apply(arg, *functions):
    # to iterate over a container it must contain "all the same types"
    for fn in functions:
        arg = fn(arg)
    return arg

apply(10, foo, bar, baz)
The typed list (numba.typed.List) gains a constructor that accepts any iterable:

from numba.typed import List

print(List(range(10)))

x = [4., 6., 2., 1.]
print(List(x))

# also works in JIT code
@njit
def list_ctor(x):
    return List(x), List((1, 2, 3, 4))

list_ctor(np.arange(10.))
The built-ins ord and chr are now supported in JIT compiled code:

@njit
def demo_ord_chr():
    alphabet = 'abcdefghijklmnopqrstuvwxyz'
    lord = List()
    lchr = List()
    for idx, char in enumerate(alphabet, ord('A')):
        lord.append(ord(char))
        lchr.append(chr(idx))
    return lord, lchr

demo_ord_chr()
The new numba.extending.is_jitted function reports whether a function object is already JIT wrapped:

def some_func(x):
    return x + 1

def consumer(func, *args):
    if not numba.extending.is_jitted(func):
        print("Not JIT wrapped, will wrap and compile!")
        func = njit(func)
    return func(*args)

consumer(some_func, 10)
NAT = np.datetime64('NaT')
dt = np.dtype('<M8')

@njit
def demo_numpy():
    a = np.empty((5, 3, 2), dt)
    out = np.zeros_like(a, np.bool_)
    # iterate with ndindex
    for x in np.ndindex(a.shape):
        if np.random.random() < 0.5:
            a[x] = NAT
    count = 0
    # now iterate directly
    for twoDarr in a:
        for oneDarr in twoDarr:
            for item in oneDarr:
                if np.isnat(item):
                    count += 1
    # use ufunc
    ufunc_count = np.isnat(a).sum()
    assert count == ufunc_count

demo_numpy()
Due to long standing issues in the internal implementation of parallel regions (that they are based on Generalized Universal Functions), functions with
parallel=True have not supported
tuple "arguments" to these regions. This is a bit of a technical detail, but is now fixed, so common things like expressing a loop nest iteration limits from an array shape works as expected.
@njit(parallel=True)
def demo_tuple_in_prange(A):
    for i in prange(A.shape[0]):
        for j in range(A.shape[1]):
            for k in range(A.shape[2]):
                A[i, j, k] = i + j + k

x = 4
y = 3
z = 2
A = np.empty((x, y, z))
demo_tuple_in_prange(A)
print(A)
Prior to Numba 0.49, if a user forgot to specify a launch configuration for a CUDA kernel, a default configuration of one thread and one block was used. This led to hard-to-explain behaviours: for example, code that worked only by virtue of running in this minimum configuration, or code that exhibited strange performance characteristics.
As a result, in Numba 0.49, it is now a requirement for all CUDA kernel launches to be explicitly configured in both the CUDA simulator and on real hardware. Example:
config.ENABLE_CUDASIM = 1
from numba import cuda

@cuda.jit
def kernel(x):
    print("In the kernel", cuda.threadIdx)

# bad launch, no configuration given
try:
    kernel(np.arange(10))
except ValueError as e:
    print(e)

# good launch, configuration specified
kernel[2, 4](np.arange(10))
Whilst not possible to demonstrate this feature in the current notebook, Numba 0.49 gains an External Memory Management (EMM) Plugin interface. When multiple CUDA-aware libraries are used together, it may be preferable for Numba to defer to another library for memory management. The EMM Plugin interface facilitates this, by enabling Numba to use another CUDA-aware library for all allocations and deallocations. Documentation for this feature is here.
There are three changes that may be of interest to those working on Numba extensions or with Numba IR:
Numba 0.49 contains the start of an important change to Numba's internal representation (IR). The change is essentially that the IR is now coerced into static single assignment (SSA) form immediately prior to when type inference is performed. This fixes a number of bugs and makes it considerably easier to write more advanced optimisation passes. It's hoped that SSA form can be extended further up the compilation pipeline as time allows.
A quick demonstration that shows SSA form and the new syntax highlighted dumps in action:
config.COLOR_SCHEME = 'light_bg'             # colour scheme highlighting for a light background
config.HIGHLIGHT_DUMPS = '1'                 # request dump highlighting
config.DEBUG_PRINT_WRAP = 'reconstruct_ssa'  # print IR both sides of the SSA reconstruction pass

@njit
def demo_ssa(x):
    if x > 2:
        a = 12
    elif x > 4:
        a = 20
    else:
        a = 3
    return a

print(demo_ssa(5))

# switch it off again!
config.DEBUG_PRINT_WRAP = ''
SSL_set_shutdown, SSL_get_shutdown - manipulate shutdown state of an SSL
connection
#include <openssl/ssl.h>
void SSL_set_shutdown(SSL *ssl, int mode);
int SSL_get_shutdown(const SSL *ssl);
SSL_set_shutdown() sets the shutdown state of ssl to mode.
SSL_get_shutdown() returns the shutdown mode of ssl.
The shutdown state of an ssl connection is a bitmask of:

0
No shutdown setting, yet.

SSL_SENT_SHUTDOWN
A ``close notify'' shutdown alert was sent to the peer; the connection is being considered closed and the session is closed and correct.

SSL_RECEIVED_SHUTDOWN
A shutdown alert was received from the peer, either a normal ``close notify'' or a fatal error.
SSL_set_shutdown() does not return diagnostic information.
SSL_get_shutdown() returns the current setting.
ssl(3), SSL_shutdown(3),
SSL_CTX_set_quiet_shutdown(3),
SSL_clear(3), SSL_free(3)
Matplotlib has proven to be an incredibly useful and popular visualization tool, but even avid users will admit it often leaves much to be desired. There are several valid complaints about Matplotlib that often come up:
Matplotlib's API is relatively low level, and it predates Pandas DataFrames. In order to visualize data from a Pandas DataFrame, you must extract each Series and often concatenate them together into the right format. It would be nicer to have a plotting library that can intelligently use the DataFrame labels in a plot.
An answer to these problems is Seaborn. Seaborn provides an API on top of Matplotlib that offers sane choices for plot style and color defaults, defines simple high-level functions for common statistical plot types, and integrates with the functionality provided by Pandas
DataFrames.
To be fair, the Matplotlib team is addressing this: it has recently added the
plt.style tools discussed in Customizing Matplotlib: Configurations and Style Sheets, and is starting to handle Pandas data more seamlessly.
The 2.0 release of the library will include a new default stylesheet that will improve on the current status quo.
But for all the reasons just discussed, Seaborn remains an extremely useful addon.
import matplotlib.pyplot as plt
plt.style.use('classic')
%matplotlib inline
import numpy as np
import pandas as pd
Now we create some random walk data:
# Create some data
rng = np.random.RandomState(0)
x = np.linspace(0, 10, 500)
y = np.cumsum(rng.randn(500, 6), 0)
And do a simple plot:
# Plot the data with Matplotlib defaults
plt.plot(x, y)
plt.legend('ABCDEF', ncol=2, loc='upper left');
Although the result contains all the information we'd like it to convey, it does so in a way that is not all that aesthetically pleasing, and even looks a bit old-fashioned in the context of 21st-century data visualization.
Now let's take a look at how it works with Seaborn.
As we will see, Seaborn has many of its own high-level plotting routines, but it can also overwrite Matplotlib's default parameters and in turn get even simple Matplotlib scripts to produce vastly superior output.
We can set the style by calling Seaborn's
set() method.
By convention, Seaborn is imported as
sns:
import seaborn as sns
sns.set()
Now let's rerun the same two lines as before:
# same plotting code as above!
plt.plot(x, y)
plt.legend('ABCDEF', ncol=2, loc='upper left');
Ah, much better!
The main idea of Seaborn is that it provides high-level commands to create a variety of plot types useful for statistical data exploration, and even some statistical model fitting.
Let's take a look at a few of the datasets and plot types available in Seaborn. Note that all of the following could be done using raw Matplotlib commands (this is, in fact, what Seaborn does under the hood) but the Seaborn API is much more convenient. Rather than a histogram, we can get a smooth estimate of the distribution using a kernel density estimation, which Seaborn does with
sns.kdeplot:
# the `data` DataFrame here holds two correlated random columns, 'x' and 'y'
data = np.random.multivariate_normal([0, 0], [[5, 2], [2, 2]], size=2000)
data = pd.DataFrame(data, columns=['x', 'y'])

for col in 'xy':
    sns.kdeplot(data[col], shade=True)
Histograms and KDE can be combined using
distplot:
sns.distplot(data['x'])
sns.distplot(data['y']);

We can see the joint distribution and the marginal distributions together using sns.jointplot. There are other parameters that can be passed to
jointplot—for example, we can use a hexagonally based histogram instead:
with sns.axes_style('white'):
    sns.jointplot("x", "y", data, kind='hex')
When you generalize joint plots to datasets of larger dimensions, you end up with pair plots. This is very useful for exploring correlations between multidimensional data, when you'd like to plot all pairs of values against each other. We'll demo this with the well-known Iris dataset; visualizing the multidimensional relationships among the samples is as easy as calling
sns.pairplot:

iris = sns.load_dataset('iris')
sns.pairplot(iris, hue='species', size=2.5);
tips = sns.load_dataset('tips')
tips.head()
tips['tip_pct'] = 100 * tips['tip'] / tips['total_bill']

grid = sns.FacetGrid(tips, row="sex", col="time", margin_titles=True)
grid.map(plt.hist, "tip_pct", bins=np.linspace(0, 40, 15));
with sns.axes_style(style='ticks'):
    g = sns.factorplot("day", "total_bill", "sex", data=tips, kind="box")
    g.set_axis_labels("Day", "Total Bill");
with sns.axes_style('white'):
    sns.jointplot("total_bill", "tip", data=tips, kind='hex')
The joint plot can even do some automatic kernel density estimation and regression:
sns.jointplot("total_bill", "tip", data=tips, kind='reg');
Time series can be plotted using
sns.factorplot. In the following example, we'll use the Planets data that we first saw in Aggregation and Grouping:
planets = sns.load_dataset('planets')
planets.head()
with sns.axes_style('white'):
    g = sns.factorplot("year", data=planets, aspect=2,
                       kind="count", color='steelblue')
    g.set_xticklabels(step=5)
We can learn more by looking at the method of discovery of each of these planets:
with sns.axes_style('white'):
    g = sns.factorplot("year", data=planets, aspect=4.0, kind='count',
                       hue='method', order=range(2001, 2015))
    g.set_ylabels('Number of Planets Discovered')
For more information on plotting with Seaborn, see the Seaborn documentation, a tutorial, and the Seaborn gallery.
Here we'll look at using Seaborn to help visualize and understand finishing results from a marathon. I've scraped the data from sources on the Web, aggregated it and removed any identifying information, and put it on GitHub where it can be downloaded (if you are interested in using Python for web scraping, I would recommend Web Scraping with Python by Ryan Mitchell). We will start by downloading the data from the Web, and loading it into Pandas:
# !curl -O
data = pd.read_csv('marathon-data.csv')
data.head()
By default, Pandas loaded the time columns as Python strings (type
object); we can see this by looking at the
dtypes attribute of the DataFrame:
data.dtypes
age int64 gender object split object final object dtype: object
Let's fix this by providing a converter for the times:
def convert_time(s):
    h, m, s = map(int, s.split(':'))
    # pd.datetools.timedelta was removed from pandas; pd.Timedelta is the current equivalent
    return pd.Timedelta(hours=h, minutes=m, seconds=s)

data = pd.read_csv('marathon-data.csv',
                   converters={'split': convert_time, 'final': convert_time})
data.head()
data.dtypes
age int64 gender object split timedelta64[ns] final timedelta64[ns] dtype: object
That looks much better. For the purpose of our Seaborn plotting utilities, let's next add columns that give the times in seconds:
data['split_sec'] = data['split'].astype(int) / 1E9
data['final_sec'] = data['final'].astype(int) / 1E9
data.head()
To get an idea of what the data looks like, we can plot a
jointplot over the data:
with sns.axes_style('white'):
    g = sns.jointplot("split_sec", "final_sec", data, kind='hex')
    g.ax_joint.plot(np.linspace(4000, 16000),
                    np.linspace(8000, 32000), ':k')
The dotted line shows where someone's time would lie if they ran the marathon at a perfectly steady pace. The fact that the distribution lies above this indicates (as you might expect) that most people slow down over the course of the marathon. If you have run competitively, you'll know that those who do the opposite—run faster during the second half of the race—are said to have "negative-split" the race.
Let's create another column in the data, the split fraction, which measures the degree to which each runner negative-splits or positive-splits the race:
data['split_frac'] = 1 - 2 * data['split_sec'] / data['final_sec']
data.head()
Where this split difference is less than zero, the person negative-split the race by that fraction. Let's do a distribution plot of this split fraction:
sns.distplot(data['split_frac'], kde=False);
plt.axvline(0, color="k", linestyle="--");
sum(data.split_frac < 0)
251
Out of nearly 40,000 participants, there were only 250 people who negative-split their marathon.
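The split-fraction formula itself is easy to sanity-check without pandas; a stdlib-only sketch with made-up times (the 2-hour/4-hour figures are illustrative, not from the dataset):

```python
def split_frac(split_sec, final_sec):
    # 1 - 2 * split / final: negative means the second half was faster
    return 1 - 2 * split_sec / final_sec

even = split_frac(7200, 14400)      # perfectly even pacing
positive = split_frac(7000, 14400)  # faster first half (positive split)
negative = split_frac(7400, 14400)  # faster second half (negative split)
print(even, positive, negative)
```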
Let's see whether there is any correlation between this split fraction and other variables. We'll do this using a
pairgrid, which draws plots of all these correlations:
g = sns.PairGrid(data, vars=['age', 'split_sec', 'final_sec', 'split_frac'],
                 hue='gender', palette='RdBu_r')
g.map(plt.scatter, alpha=0.8)
g.add_legend();
It looks like the split fraction does not correlate particularly with age, but does correlate with the final time: faster runners tend to have closer to even splits on their marathon time. (We see here that Seaborn is no panacea for Matplotlib's ills when it comes to plot styles: in particular, the x-axis labels overlap. Because the output is a simple Matplotlib plot, however, the methods in Customizing Ticks can be used to adjust such things if desired.)
The difference between men and women here is interesting. Let's look at the histogram of split fractions for these two groups:
sns.kdeplot(data.split_frac[data.gender=='M'], label='men', shade=True)
sns.kdeplot(data.split_frac[data.gender=='W'], label='women', shade=True)
plt.xlabel('split_frac');
The interesting thing here is that there are many more men than women who are running close to an even split! This almost looks like some kind of bimodal distribution among the men and women. Let's see if we can suss-out what's going on by looking at the distributions as a function of age.
A nice way to compare distributions is to use a violin plot:
sns.violinplot("gender", "split_frac", data=data, palette=["lightblue", "lightpink"]);
This is yet another way to compare the distributions between men and women.
Let's look a little deeper, and compare these violin plots as a function of age. We'll start by creating a new column in the array that specifies the decade of age that each person is in:
data['age_dec'] = data.age.map(lambda age: 10 * (age // 10))
data.head()
men = (data.gender == 'M')
women = (data.gender == 'W')

with sns.axes_style(style=None):
    sns.violinplot("age_dec", "split_frac", hue="gender", data=data,
                   split=True, inner="quartile",
                   palette=["lightblue", "lightpink"]);
Looking at this, we can see where the distributions of men and women differ: the split distributions of men in their 20s to 50s show a pronounced over-density toward lower splits when compared to women of the same age (or of any age, for that matter).
Also surprisingly, the 80-year-old women seem to outperform everyone in terms of their split time. This is probably due to the fact that we're estimating the distribution from small numbers, as there are only a handful of runners in that range:
(data.age > 80).sum()
7
Back to the men with negative splits: who are these runners? Does this split fraction correlate with finishing quickly? We can plot this very easily. We'll use
regplot, which will automatically fit a linear regression to the data:
g = sns.lmplot('final_sec', 'split_frac', col='gender', data=data,
               markers=".", scatter_kws=dict(color='c'))
g.map(plt.axhline, y=0.1, color="k", ls=":");
Introduction to Rails Commands
Ruby on Rails is a web development framework written in the Ruby programming language, designed to make web application programming easier by providing sensible defaults for everything a developer needs to get started. With Rails we need to write much less code than with many other languages and frameworks, and web development in Ruby is also more fun. The main principles of Rails are "don't repeat yourself" and "convention over configuration". Ruby itself is a high-level language: interpreted, like Python and Perl, and object-oriented, like Java and Ada.
Basic Rails Commands
Following are the basic commands.
1. How to start a web server in Ruby Rails?
A Rails application will run under virtually any web server, but the most convenient way to develop a Rails application is with the built-in WEBrick server.
To start a web server we need to do the following steps:
- cd ruby/library
- rails server
- then open http://localhost:3000 in the browser, and the output will be as below:
2. How to set up database in Rails?
In Rails, we can set up different types of databases. Here we will set up a MySQL database, using root as the user id for the application; we can then perform operations such as creating a database, granting privileges, etc. Sample output is as below:
3. How to create Active Record Files in Ruby?
In Ruby, we can create active record files for any project using a few commands of Rails. We will create active record files for library application with records as book and subject as below:
- rails script/generate model Book
- rails script/generate model subject
And it generates the output code as below:
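A hedged sketch of the shape of the generated model files (the tiny ActiveRecord::Base stand-in below exists only so the sketch runs standalone; real generated files inherit from the ActiveRecord that Rails loads):

```ruby
# Stand-in for Rails' ActiveRecord, so this sketch is self-contained
module ActiveRecord
  class Base; end
end

# app/models/book.rb (shape of the generated file)
class Book < ActiveRecord::Base
end

# app/models/subject.rb (shape of the generated file)
class Subject < ActiveRecord::Base
end

puts Book.ancestors.include?(ActiveRecord::Base)  # true
puts Subject.superclass                           # ActiveRecord::Base
```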
4. How to create association/relations between models in Ruby?
In Rails, we can create associations between models; there are three types: one-to-one, one-to-many, and many-to-many. The sample code below declares a singular subject, as one book belongs to one subject, and the output is:
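A hedged sketch of the declarations involved (the minimal belongs_to/has_many stand-ins below merely record the declared association names, so the sketch runs without Rails installed):

```ruby
# Minimal stand-ins that record declared associations
module AssociationStubs
  def associations
    @associations ||= {}
  end

  def belongs_to(name)
    associations[:belongs_to] = name
  end

  def has_many(name)
    associations[:has_many] = name
  end
end

class Book
  extend AssociationStubs
  belongs_to :subject   # one book belongs to one subject
end

class Subject
  extend AssociationStubs
  has_many :books       # one subject has many books
end

puts Book.associations[:belongs_to]   # subject
puts Subject.associations[:has_many]  # books
```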
5. How to create a migration file in Ruby?
In Rails, we can create a migration file using the below command, and it contains basic syntax which describes the data structure of the table
- rails generate migration table_name
- rails generate migration books
It will generate the below code as the output:
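A hedged sketch of the kind of migration file the generator produces (the ActiveRecord::Migration stand-in makes the sketch standalone; a real generated file subclasses the versioned ActiveRecord::Migration from Rails, and the column names here are illustrative):

```ruby
# Stand-in so the sketch runs without Rails installed
module ActiveRecord
  class Migration; end
end

# db/migrate/xxxxxxxxxxxxxx_books.rb (shape of a generated migration)
class Books < ActiveRecord::Migration
  def up
    create_table :books do |t|
      t.column :title, :string, limit: 32, null: false
      t.column :price, :float
      t.column :subject_id, :integer
    end
  end

  def down
    drop_table :books
  end
end

# The migration defines paired up/down methods describing the table structure
puts Books.instance_methods(false).sort.inspect
```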
6. How to create string literals in Ruby?
In Ruby, strings are a sequence of characters which are represented in 8-bit and double-quoted strings allow substitution where single-quotes strings don’t, and sample code is as below:
puts 'escape using "\\"';
puts 'That\'s right';

The output of the above code is as below:
7. How to declare an array in Rails?
In Rails, we can declare an array by having a combination of integers and strings separated by commas as below:
array = [ "fred", 10, 3.14, "This is a string", "last element", ]
array.each do |i|
puts i
end
And the output of the above code is as below:
8. What is Range and how to use it in Ruby?
In Ruby, a range is used to represent a set of values between a start and an end point, and is constructed with the .. (inclusive) or ... (exclusive) range literal. Sample code is as below:

(10...14).each do |n|
  print n, ' '
end
And the output of the above code is as below:
9. How to use a defined operator in Ruby?
In Ruby, defined? is a special operator that takes an expression and determines whether or not it is defined, returning a description string if it is and nil otherwise. Sample code is as below:

defined? Var evaluates to a truthy value if Var is initialized; examples as below:
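A small runnable illustration (the variable names foo and bar are arbitrary):

```ruby
foo = 42

puts defined?(foo).inspect     # "local-variable"
puts defined?(bar).inspect     # nil (never initialized)
puts defined?(puts).inspect    # "method"
puts defined?(String).inspect  # "constant"
```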
10. How to use a single-line comment in Ruby?
In Ruby, comments can be written in different ways, as single-line or multi-line comments. Sample code is as below:

# This is a single line comment
puts "Hello, Ruby!"
And the output is as below:
Intermediate Rails Commands
Following are the intermediate commands.
1. How to generate a new application using the desired template in Ruby?
In Rails, we can generate a new application with rails new treehouse, and we can run a template against the new application as below:

rails new treehouse --template=India
2. How to delete an element from an array at a particular index in Ruby?
In Ruby, we can delete an element from an array at a particular index using the below command:
array.delete_at(index)
Example
- array = [“hi”,”bar”,”foo”]
- array.delete_at(2)
- new array is : [“hi”,”bar”]
3. What is Interpolation and how to do in Ruby?
In Ruby, Interpolation is defined as combining a string with a variable or expression using double quotes is called Interpolation and the sample code is as below:
"A string and an #{expression}"
4. How to call a method in Ruby?
In Ruby, we call a method on an object; calling a method is like sending a message, as we send the object a message and wait for its response. Sample code is as below:
Example:
- object.method(arguments)
- string.length
- array.delete
5. How to create and use hashes in Ruby?
In Ruby, hashes are created as key-value pairs in curly braces, with each key pointing to its value via an arrow (=>). Sample code is as below:

{ 42 => "answer", "score" => 100, :name => "Das" }
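A quick runnable illustration of creating and reading such a hash (the keys and values match the sample above):

```ruby
h = { 42 => "answer", "score" => 100, :name => "Das" }

# Each kind of key (integer, string, symbol) looks up its own value
puts h[42]       # answer
puts h["score"]  # 100
puts h[:name]    # Das
puts h.keys.length
```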
Advanced Rails Commands
Following are the advanced commands.
1. What is collect iterator and how to use it in Ruby?
In Ruby, the collect iterator returns all elements of a collection; when called without a block, collect returns an enumerator. Sample code is as below:

# syntax
collection = collection.collect

a = [1, 2, 3]
b = a.collect { |x| x }
puts b
2. Write a web service handler method in Ruby?
In Ruby, web service handler methods can be written as plain Ruby methods that are exposed to the outside world. The sample code below performs addition of two numbers:

class MyServer < SOAP::RPC::StandaloneServer
  # handler methods
  def add(a, b)
    return a + b
  end
end
Tips and Tricks to Use
- Use anchored regular expressions so they match quickly
- The best way to join strings is Array#join (e.g. Array#*)
- Use a number-formatting helper to format amounts quickly
- Prefer string interpolation, as it builds text quickly
Conclusion
Finally, this was an overview of Ruby on Rails commands of different types. I hope you now have a good knowledge of Ruby on Rails commands after reading this article.
Recommended Articles
This has been a guide to Rails Commands. Here we have discussed concept, basic, intermediate as well as advanced Rails Commands along with tips and tricks to use effectively. You may also look at the following article to learn more –
22 – When is a Test Not a Unit Test?
The examples I’ve been working with for this series have assumed that you are doing “Green Field” development. This means that we are writing an application from scratch and can make sure all of the things that make code testable are incorporated into our design from the start, the biggest of these being the use of dependency injection. But what if you are working on “legacy” code that still uses static dependencies and not dependency injection? Is TDD and unit testing out of reach due to a decision made at the beginning of the project? Not necessarily.
Note: Up till now I’ve been using the free version of JustMock called JustMock Lite. The features I’ll be demonstrating in this post are part of the full commercial version of JustMock. If you do not own this version of JustMock you can download a 30 day trial here.
When practitioners of TDD use the term “legacy code” what they are usually referring to is code that has not been designed to use dependency injection and therefore is difficult to unit test. Not being able to mock the dependencies of a class is certainly a barrier to writing unit tests that isolate your code under test. In fact, tests written against these types of classes and method cannot be considered unit tests; they are integration tests.
Most developers today are working with legacy code. Most developers work at large organizations or enterprises where they work on a team that is dealing with a line of business application whose line of code count could number in the millions. In this situation having a good solid suite of unit tests could be a huge benefit in terms of developer efficiency and application quality. It’s just that million lines of legacy code that are standing in the way. In these cases a mass-refactor of your application to add dependency injection is usually out of the question.
To address the needs of developers in this situation, JustMock has a feature called Future Mocking. Future mocking allows you to mock private statically bound dependencies in your code. For example, let’s consider these two classes:
1: using System;
2: using System.Linq;
3:
4: namespace FutureMocking
5: {
6: public class DependencyClass
7: {
8: public int GetDependentValue()
9: {
10: return 42;
11: }
12: }
13:
14: public class DependentClass
15: {
16: private DependencyClass _dependencyClass;
17:
18: public DependentClass()
19: {
20: _dependencyClass = new DependencyClass();
21: }
22:
23: public int GetValue()
24: {
25: return _dependencyClass.GetDependentValue();
26: }
27: }
28: }
(get sample code)
On line six I’ve created a class I’m calling DependencyClass. On line 14 I have created a class called DependentClass which is dependent on DependencyClass. As you can see on the constructor for DependentClass, which starts on line 18, I am not using dependency injection. DependentClass is statically bound to the DependencyClass by virtue of the instantiation of the DependencyClass on line 20. As I have no way to pass in a mock for DependencyClass as part of my unit test, with most mocking frameworks I can only really write an integration test for this unit of code.
When I used JustMock Lite in my previous examples I was able to add it to my project with NuGet. I won’t be able to use NuGet to get the full version of JustMock. The good news is that by installing it I’ve gained a couple new project types in Visual Studio (Figure 1):
Figure 1 – New project types added by JustMock
Under the Telerik –> Test branch of the new project templates dialog I have two new project types. I now have the ability to create a new JustMock Test project in either C# or VB.NET. By using this new project type I can create a class library that has the appropriate JustMock assembly referenced and I get a few code files showing various examples and boilerplate code that I can use to start writing tests.
If you want to use JustMock in an existing project you can add a reference to the JustMock assembly manually. When you install JustMock the default location of the Telerik.JustMock.dll assembly will be C:\Program Files\Telerik\JustMock.
To test my tightly coupled code I created a new JustMock test project and deleted the existing files (I don’t need them right now). I then added a C# file called FutureMockingTests.cs and wrote my test:
1: [TestFixture]
2: public class FutureMockingTests
3: {
4: [Test]
5: public void FutureTest()
6: {
7: var expectedDependency = Mock.Create<DependencyClass>();
8: Mock.Arrange(() => expectedDependency.GetDependentValue())
9: .IgnoreInstance()
10: .Returns(5);
11:
12: var classUnderTest = new DependentClass();
13:
14: Assert.AreEqual(5, classUnderTest.GetValue());
15: }
16: }
This test should look pretty familiar; I create a mock of the class I want to mock (DependencyClass) and then arrange the mock. I then call the class under test and assert the results. The difference here is the call to IgnoreInstance on line nine, which tells JustMock to replace all calls to the arranged method (in this case GetDependentValue) with the arrangement I have supplied. This enables me to create mocks for classes that are statically bound, without using dependency injection.
Before I run my test I need to ensure that the JustMock profiler is enabled. I can use the JustMock menu in Visual Studio to enable and disable the profiler (Figure 2):
Figure 2 – Enabling the JustMock profiler
Running this test we can see that the call to the statically bound method has been replaced by our arrangement (Figure 3).
Figure 3 – A passing test
TDD is not only about testability; it’s also about creating well-designed applications. This ties in with adherence to the SOLID principles, specifically the Liskov Substitution principle and the Dependency Inversion principle, which in turn tie into the practice of dependency injection. These are all things we employ to make our applications better designed, more maintainable and more extensible.
Future mocking very much flies in the face of all of this. So why offer it? The reason is that developers who are working with large legacy projects would otherwise have no way to write unit tests for their existing code. Future mocking is only recommended for these “legacy code” situations; it is not recommended that you start a project with future mocking. It’s also recommended that when working with legacy code you consider future mocking a stop-gap practice and should be trying to refactor your code to use dependency injection. It may take years, but your goal should be to wean yourself off of future mocking.
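To make that eventual refactoring concrete, here is one hedged sketch of how the earlier DependentClass could be reworked for constructor injection, so an ordinary mock can be passed in and no future mocking (or profiler) is needed. The IDependency interface name is my own invention, not part of the original sample:

```csharp
using System;

namespace FutureMocking
{
    // Hypothetical interface extracted from DependencyClass so that
    // tests can substitute any implementation, including a mock.
    public interface IDependency
    {
        int GetDependentValue();
    }

    public class DependencyClass : IDependency
    {
        public int GetDependentValue()
        {
            return 42;
        }
    }

    public class DependentClass
    {
        private readonly IDependency _dependency;

        // The dependency is injected rather than constructed internally,
        // so unit tests no longer need future mocking to intercept it.
        public DependentClass(IDependency dependency)
        {
            _dependency = dependency;
        }

        public int GetValue()
        {
            return _dependency.GetDependentValue();
        }
    }
}
```

With this shape, a test can arrange a plain `Mock.Create<IDependency>()` and pass it to the constructor directly.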
As mentioned previously, future mocking relies on the JustMock profiler. The profiler is what enables future mocking, but it can make your tests run noticeably slower, so if you do not need any of the JustMock features that require it, it is recommended that you disable it.
In an ideal world we would all be working in beautifully designed codebases that make development a simple and joyful experience. Unfortunately that’s not usually the case; sometimes we inherit another developer’s past problems. The future mocking feature of JustMock can mitigate some of these issues and make unit testing an option in places where it would otherwise not be possible.
https://www.telerik.com/blogs/30-days-of-tdd-day-23-mocking-from-the-future
For the past few hours I have been pouring through hundreds of internet articles, blogs and posts in various forums regarding interfaces.
I get a general idea that it is basically a class with no implementation at all...a "skeleton" if you will.
I was hoping for an a-ha! moment that never came. This is my last ditch effort here.
What exactly is the usefulness of making interfaces?
I've read that it makes code more clean and easier to manage, but I fail to see how it would ever do that. Changing a public class that is used by many other methods or classes seems to be much MUCH more manageable than editing a "template" and then having to edit (maybe) every single instance of that template.
Also, it provides no information on its function.
I mean an interface like this:
public interface ICopy { public void CopyMethod(); }
Could be implemented in any way a programmer wants like this:
public class FileManipulation : ICopy { public void CopyMethod() { //code to copy files } }
but it could also be implemented like this:
public class MakePrankCalls : ICopy { public void CopyMethod() { //code to use a modem and dial random numbers then play a .wav file to prank them } }
I mean it basically means nothing right?
The definition of an interface (in computer science) is "a point of interaction between two components." I fail to see how an interface in C# achieves this at all...
It seems to me that if I look at source code from someone that uses interfaces, I cannot rely on the fact that simply because an interface is used on a class, that it was used correctly (like my sample above) right?
I'm really hoping that someone can set me straight here and say something like "no, no, no! You're missing the fact that __________________."
Anyone?
https://www.daniweb.com/programming/software-development/threads/308337/usefulness-of-interfaces
|
We are about to switch to a new forum software. Until then we have removed the registration on this forum.
I'm trying to add some useful keyboard functions to my XYscope library and hitting a strange problem within Eclipse when using
registerMethod() and keyEvent.
In the initialization of the library class, I'm calling the
registerMethod() which works:
p.registerMethod("keyEvent", this);
but I can only compile successfully if I create a public function of keyEvent without any parameters inside:
public void keyEvent(){
p.println("test");
}
however when trying to run a sketch using the library in Processing, I get this error: "There is no public keyEvent() method in the class xyscope.XYscope"
If I try to include the KeyEvent parameter like I've read about, the library won't compile:
public void keyEvent(KeyEvent evt){
p.println("test");
}
The error I get while trying to compile is such:
102: error: cannot find symbol
[javac] public void keyEvent(KeyEvent evt){
[javac] ^
[javac] symbol: class KeyEvent
[javac] location: class XYscope
Any ideas out there what I might be doing wrong?
Answers
The correct method signature is
public void keyEvent(KeyEvent event)
Difficult to be certain without seeing your code but the code
p.registerMethod("keyEvent", this);
suggests that p is an object of type PApplet or a child class of PApplet, and that this code is in the constructor for the class
XYscope. If true then the handler must also be a method of the class
XYscope.
It could be that Eclipse thinks that the KeyEvent parameter in your handler method is
java.awt.event.KeyEvent rather than
processing.event.KeyEvent
Check your import statements.
Sorry, the p. refers indeed to the PApplet that calls the library. EDIT: In the github source linked below, it was still called 'myParent', but after reading some tips today, I realized I should have just given it a single char variable... now 'p'.
I'm having no problem using
registerMethod("dispose", this); – so it must be something about Eclipse using the java.awt.event.KeyEvent instead of the Processing one... the code (without my attempt to add the keyEvent) is on github here.
I was trying to add this for example to line 91 for the register method.. and just below that function the keyEvent.... at the top I've done the core import as suggested in the library template:
package xyscope; import processing.core.*;
You need to import it with
import processing.event.*;
which will make all Processing event classes available when you add the event handler to the class.
Why change it to
p? The benefits of using descriptive identifiers far exceed the small effort to do a bit more typing, especially since Eclipse provides auto-completion.
In most computer applications the most expensive part of the software-life-cycle is maintenance so having well documented source code with clear semantics is sooooo... very important.
import processing.event.*; Ahaaa! Argh.. somehow that didn't turn up on any tutorials I was following for libs.. thought I only needed the core. Thanks!
Side note.. in my code for accessing other functions
constrain,
map,
println– i was placing the PApplet reference in front of it. Could this have been avoided if I imported the correct Processing lib for those things??
Regarding 1 char vars.. didn't mean to open up that can in this thread.. but I agree with both of you. In the case of PApplet – using a standardized variable name (p or pa) is useful, especially considering how often one will type it in. When it comes to other variable names, it's a fine art of balancing descriptiveness and brevity.
int iLikeToTellStudentsThatSoLongAsTheyUseCamelCaseTheyCanCallThingsWhateverTheyLike = 1;
but foo is probably better = wtf is foo? -> end of class.
Those are static methods from class PApplet.
You can import the whole PApplet like this: *-:)
import static processing.core.PApplet.*;
Then access its static members w/o prefixing them: :ar!
println("static call:", sqrt(THIRD_PI));
edit – guess this just worked for some of the math functions, not many of the other functions Processing has (fill, pushMatrix, translate, stroke, etc etc)...
I can see those methods belong to the current canvas, which in your case will be the object assigned to p or pa. So those are associated with an object, aka they are not stateless, and nope, they are not static:
Kf
https://forum.processing.org/two/discussion/23890/registermethod-of-keyevent-not-compiling-in-custom-library
|
What are the best practices for performing a clean install of CMS 5 and Relate+ (Community and EPiMail) for local development purposes (Vista, VS2008 and Cassini)? No demo templates and our own namespace.
My guess is this in general:
1. Community 3.2.0.191 - "Install site and SQL Server database" (with EPiServer CMS integration) on staging server
2. Install EPiMail somehow (copying from other installation)
3. Copy everything locally
4. Create empty project and add references to EPiServer assemblies and modify web.config
Or is there an easier way to get everything cleanly installed?
Hi Håkan!
Currently there is no easy way of installing Community on top of an existing CMS installation. Therefore the easiest way is to do like this:
Seems like a lot of work, but actually it is done pretty quickly. If there are several people on the development team I advise you to use a common database and have your own site set up on each of the development machines. Personally I prefer to use IIS, but Cassini will work just fine.
Best regards,Tom Stenius
Great. I will try this approach. Maybe give feedback later on.
Thank you!
Håkan
Oh, and yes, we do use common databases, Cassini, Subversion, and set up VPP on UNC shares such as \\server\vpp\project\global\.
H
Great! Good luck!
//Tom
As this is related to first time installs maybe you know why the following is happening?
I have not had a chance to investigate this.
When clicking on the 'Members' page a script error appears in the bottom bar of IE. Not a very useful error description: "Line 2 char 1 syntax err." Any clue..?
If I've installed Relate + 'EPiServer RelatePlus 1.0.0.2'. Will the 'Install Hotfix 1 for EPiServer Community 3.2' be included?
When I downloaded the install there was no hotfix listed so would assume so.
Please advise?
Steven
Hi Steven,
As Mail is a part of the Relate+ package you will need the Hotfix. When you go to the download page for EPiServer Mail it will get listed under the Hotfix list.
Thanks for that. I have updated the site with the hotfixes..
Do you perhaps have some idea about my previous Q on the script err:
"When clicking on the 'Members' page a script error appears in the bottom bar of IE. not a very useful err description of "Line 2 char 1 syntax err." any clue..?
After some investigating my suspicion is that it's the Ajax.. part of it.. but not sure yet..
<asp:UpdatePanel /> used on the page..
I have removed the update panel to try and isolate the script error. It's within there that it happens somewhere..
Narrowed the script error down to the control:
<RelatePlus:Members
https://world.episerver.com/Modules/Forum/Pages/Thread.aspx?id=29133
|
How to use eSWT with Midlets
eSWT provides a rich set of UI components and can be used as an alternative UI toolkit to MIDP LCDUI, provided that the MIDP runtime provider supports it. This document describes how to use eSWT as the MIDlet UI, covering the issues outside of the MIDP2 specification that need to be taken into account when eSWT is used by MIDlets instead of LCDUI. The process of creating application code using editors, IDEs, compilers or such is not discussed in this document; it covers only application design.
Contents
Displays, MIDlets and MIDletSuites
The org.eclipse.swt.widgets.Display class in eSWT represents the connection point from Java to the native UI functionality. Creating one instance of Display can be perceived as creating one instance of eSWT. A MIDlet wishing to use eSWT for its UI will have to create an eSWT Display instance. It is possible to have multiple MIDlets using eSWT UI in the same MIDlet suite; in this case each MIDlet should create its own Display instance.
eSWT in the MIDlet life cycle
Starting up
When a MIDlet using eSWT for its UI has entered the active state for the first time and MIDlet.startApp() is being called then the MIDlet should immediately begin acquiring the UI resources. The first step is to create the thread that will be the eSWT UI thread.
The eSWT UI Thread
To avoid hanging the event dispatcher it’s not allowed to execute the eSWT event loop directly in the thread that called MIDlet.startApp() but a new thread must be created. I.e. in the first MIDlet.startApp() call the MIDlet should create and start a new thread that will be the eSWT UI thread. The MIDlet should then allow MIDlet.startApp() to return.
Creating the Display
In the UI thread the MIDlet should at first construct a Display instance. Finally the thread should enter an event loop dispatching eSWT events. The Display for the MIDlet should be created immediately after the UI thread has been started and it should not be disposed until the MIDlet is about to enter the destroyed state.
Pausing
When MIDlet.pauseApp() is being called and a MIDlet is entering the paused state then it can release any other eSWT resources but it should not dispose the Display instance. When a paused MIDlet is resumed then it should continue with the existing Display instance and the existing eSWT UI thread.
Destroying
System exit
When a MIDlet is notified by the system that it will be destroyed with a call to MIDlet.destroyApp(), any eSWT resources allocated by the MIDlet, including graphics contexts, images, fonts and colors, should be disposed. Finally, before returning from MIDlet.destroyApp(), the Display should be disposed to release all the remaining eSWT UI resources.
Application initiated exit
When the application initiates exiting then the eSWT UI Thread event loop should exit. Then clean-up equivalent to the system exit case described in above should be performed. Finally MIDlet.notifyDestroyed() should be called to have the MIDlet enter destroyed state.
Code example of MIDlet lifecycle with eSWT
import javax.microedition.midlet.*;
import org.eclipse.ercp.swt.mobile.Command;
import org.eclipse.swt.SWT;
import org.eclipse.swt.events.*;
import org.eclipse.swt.graphics.*;
import org.eclipse.swt.widgets.*;

public class HelloWorld extends MIDlet
        implements Runnable, SelectionListener, PaintListener {

    // A handle to the eSWT UI thread created by this MIDlet.
    private Thread UIThread;

    // The eSWT Display created by this MIDlet in the eSWT UI thread.
    // When this is created the MIDlet gets connected to the native UI
    // functionality and eSWT UI toolkit is initialised for it.
    private Display display;

    // A Shell widget created by this MIDlet.
    private Shell shell;

    // A custom color resource owned by this MIDlet.
    private Color myColor;

    // A boolean to set when the event loop should exit.
    private boolean exiting = false;

    public void startApp() {
        // Create the eSWT UI thread.
        if(UIThread == null) {
            UIThread = new Thread(this);
            UIThread.start();
        }
    }

    public void pauseApp() {
        // Here we could reduce the resources but we should keep the Display
        // instance and the eSWT UI Thread.
    }

    // destroyApp is called when the MIDlet is terminated from the task list
    // with the clear key, or the end key is pressed when the MIDlet is focused.
    // It might also be called when the system needs to close applications,
    // e.g. in low memory conditions.
    public void destroyApp(boolean unconditional) {
        // Make the event loop exit in the eSWT UI thread.
        exitEventLoop();
        // Wait for the eSWT UI thread to die.
        try {
            UIThread.join();
        } catch(InterruptedException e) {
        }
    }

    // This method can be called from any thread to make the event loop exit.
    void exitEventLoop() {
        exiting = true;
        Display.getDefault().wake();
    }

    // The eSWT UI Thread.
    public void run() {
        // Create the Display.
        display = new Display();
        shell = new Shell(display);
        shell.open();

        Command exitCommand = new Command(shell, Command.EXIT, 0);
        exitCommand.setText("Exit");
        exitCommand.addSelectionListener(this);

        // Allocate some resources that we will own.
        myColor = new Color(display, 100, 100, 0);

        // Print the hello world greeting on the Shell.
        shell.addPaintListener(this);
        shell.redraw();

        // Execute the eSWT event loop.
        while(!exiting) {
            if(!display.readAndDispatch()) {
                display.sleep();
            }
        }

        // Clean up and destroy the MIDlet.
        myColor.dispose();
        display.dispose();
        notifyDestroyed();
    }

    public void widgetDefaultSelected(SelectionEvent e) {
    }

    public void widgetSelected(SelectionEvent e) {
        // Exit command selected, exit the event loop.
        exitEventLoop();
    }

    public void paintControl(PaintEvent e) {
        // Print a hello world greeting using the custom color resource.
        e.gc.setForeground(myColor);
        e.gc.drawText("Hello world!", 0, 0, SWT.DRAW_TRANSPARENT);
    }
}
http://wiki.eclipse.org/index.php?title=How_to_use_eSWT_with_Midlets&oldid=41740
|
Using a Photocell or Phototransistor to determine lighting levels
Many embedded devices need to detect light levels. A low-cost photocell can be used to determine different lighting levels in a room. The photocell seen below can cost under $1 and is available from Adafruit or Sparkfun. The resistance of this type of a photocell (also known as a photoresistor or light dependent resistor (LDR) ) varies with the light level on top of the sensor. Photocells are more sensitive to red and green light levels and not very sensitive at all to blue. A common application would be to dim an LED automatically in a dark room and brighten it when it is in full daylight so that it is visible, or perhaps just turn on the light when it is dark. The resistance response can vary quite a bit from photocell to photocell (perhaps as much as 50%), so extremely accurate light level measurements are not possible without individual calibration for each photocell.
A typical CdS Photocell from Sparkfun
Internal Structure of Photocell
Wiring
The easiest way to hook up this device is to use a voltage divider circuit connected to an analog input pin. The resistance of the device changes based on the lighting level (light levels are measured in lux). This low-cost device can measure approximate lighting levels, and that is all that is needed in many applications.
Schematic of Photocell hookup to an analog input
The typical way to interface this device is to hook it up to the 3.3V supply and use a 10K pulldown resistor to build a voltage divider circuit as seen in the schematic above. An A/D is then used to read the analog voltage value into the microprocessor. The resistance values and voltages shown in the table below are for the photocell from Adafruit. The photocell from Sparkfun is very similar, but the datasheet lists a maximum dark value of 1MΩ. Note that the analog voltage response is not linear (closer to a logarithmic response), but it is monotonically increasing. To expand the high light-level readings or for a different photocell, the pulldown resistor value may need to change a bit to get the maximum analog voltage swing. A rough rule of thumb for this is Rpd = sqrt(Rmin * Rmax), where Rmin and Rmax bound the resistance region of interest to measure.
By checking the value of the analog voltage by using the AnalogIn API on mbed, the light condition can be detected. Recall that the mbed AnalogIn API scales the voltage from 0.0 to 1.0 with 1.0 being an external analog input voltage of 3.3V.
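As a quick sanity check on the circuit above, the divider output for a given photocell resistance can be computed directly. This is a minimal sketch assuming the values from the schematic (3.3 V supply, 10 kΩ pulldown) and the rough Adafruit resistance range (about 1 kΩ in bright light up to about 1 MΩ in darkness):

```python
def divider_reading(r_photo_ohms, r_pulldown_ohms=10_000, v_supply=3.3):
    """Return (output voltage, mbed AnalogIn value 0.0-1.0) for a photocell
    of the given resistance in a pulldown voltage divider."""
    v_out = v_supply * r_pulldown_ohms / (r_photo_ohms + r_pulldown_ohms)
    return v_out, v_out / v_supply

# Bright light: photocell resistance is low, so the divider output is high.
v_bright, a_bright = divider_reading(1_000)       # ~1 kohm in bright light
# Darkness: resistance climbs toward 1 Mohm, so the output is near zero.
v_dark, a_dark = divider_reading(1_000_000)

# Rule-of-thumb pulldown for a 1 kohm - 1 Mohm range of interest.
r_pd_suggested = (1_000 * 1_000_000) ** 0.5       # ~31.6 kohm
```

This also shows why the 10K pulldown is a reasonable compromise for this photocell, sitting near the geometric mean of the extremes.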
Example Program
An example program is shown below for the photocell's analog output connected to p15 on the mbed LPC1768 using the schematic seen earlier. As the room gets darker, the photocell's analog voltage output drops and a PWM output is used to dim one of the mbed's built-in LEDs. You can place your finger over the top surface of the photocell to simulate the room getting darker. A cupped hand actually works a bit better since it also blocks light from the sides and bottom. A similar idea is used to automatically dim the backlight LEDs on mobile device displays to save power and keep them from being too bright in a dark room while still keeping them visible in daylight. Large outdoor signs also do this, and some communities even have local ordinances enforcing this to reduce urban light pollution at night.
#include "mbed.h"

AnalogIn photocell(p15);
PwmOut myled(LED1);

// photocell analog input dims led at lower light levels using PWM
int main() {
    while(1) {
        myled = photocell;
        wait(0.1);
    }
}
Other Light Sensor Options
It is even possible to use an LED to detect bright light, but a transistor amplifier circuit will likely be needed since the voltage and current output from the LED is very low. It is simpler just to use a phototransistor instead. The base on a phototransistor is controlled by the external light that shines through its transparent package, so a phototransistor only has two external pins.
While the long leads make the photocell breadboard friendly, in an actual device a tiny surface mount phototransistor type light sensor with a resistor would likely be used. A large number of such options are available. These are often used to dim the display backlighting on phones, tablets, and notebook PCs. These would require a breakout board for breadboard use. Sparkfun and Adafruit have several such breakout boards for a few dollars more than the basic photocell option used here. Some also feature more precise measurements than are possible with a photocell. Photocells do have a higher voltage rating (perhaps 150V) allowing them to be used directly in triac or relay AC lighting control circuits (unlike a phototransistor). CdS photoresistors are now banned in Europe (RoHS) and silicon phototransistors must be used instead.
Surface Mount ALS PT19 Light Sensor
A phototransistor supplies light-level controlled current to a series resistor and the voltage drop across the resistor is measured using an A/D, so it is very similar to the earlier photocell hookup. Unlike the photocell, the two phototransistor pins have polarity. The earlier demo program will also work for a phototransistor. Since it is a bipolar transistor, there will be a voltage drop across the transistor and the circuit's analog output voltage swing will be reduced a bit. When more accurate light readings are needed (or even the exact color!) other light sensors are available at an increased cost. Some even have a digital I2C interface with red, green, and blue sensors.
Typical Phototransistor Interface Circuit
There are a couple of low-cost phototransistors with wire leads (i.e., not surface mount) for use directly in a breadboard. They look a bit like an LED as seen below.
Phototransistor in a through hole package
Phototransistors are even combined with LEDs in a single package to detect objects. The light from the LED reflects from an object, or is interrupted by an object and the phototransistor detects the presence or absence of the LED light. Line following sensors on robots use this type of device.
This IR detector from Sparkfun contains an LED and Phototransistor
Typical Applications
Common applications include automatic dusk to dawn outdoor lighting such as streetlights, automatic night lights, automatic camera exposure control, automatic brightness control on displays, automatic car headlights and rear view mirrors that dim, and office building lights. Modern office buildings with large windows often have a circuit to dim indoor lighting in the daytime during periods of bright sunlight. This not only reduces the power consumed by the lights, but it also reduces cooling costs.
Additional Resources
Datasheet
Application Note
https://os.mbed.com/users/4180_1/notebook/using-a-photocell-to-determine-light-levels/
|
java file run in weblogic (0 messages)
Hi, I am new to WebLogic. I tried to run a simple Java application on WebLogic but the Start option is disabled. The steps I did for that:
1. Create a WebLogic application
2. Create a WebLogic project
Then in WEB-INF>src create a java class:

public class test {
    public static void main(String as[]) {
        System.out.println("Hello World 1");
        System.out.println("Hello World 2");
        System.out.println("Hello World 3");
        System.out.println("Hello World 4");
    }
}

After that I built the application, but when I try to debug the project or debug the java file the Start button is disabled. Can anyone tell me how to run this simple application in WebLogic? Thanks & Warm Regards Vikas Sheel Gupta 9911005168 vikassheelgupta@rediffmail.com
- Posted by: vikas gupta
- Posted on: July 24 2006 07:46 EDT
http://www.theserverside.com/discussions/thread.tss?thread_id=41462
|
- Create a new project working-with-applicative with the simple Stack template:
stack new working-with-applicative simple
- Open src/Main.hs and add the following imports after the initial module definition. The Applicative type class is defined in the module Control.Applicative:
import Data.Functor import Control.Applicative
- We will use two operators: Functor application <$> (a synonym for fmap) and Applicative application <*>. The Applicative operator <*> is defined as <*> :: f (a -> b) -> f a -> f b. It takes a data type whose values are functions of type (a -> b), applies it to a data type with values of type a, and gets back a data type with values of type b. In the first example, we will use a list:
-- Mapping ...
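The original example is cut off above; as an illustrative sketch of my own (not the book's code), here is how <$> and <*> behave on lists:

```haskell
import Control.Applicative

main :: IO ()
main = do
  -- Functor application: map (+1) over every element.
  print ((+ 1) <$> [1, 2, 3])           -- [2,3,4]
  -- Applicative application: apply every function to every value.
  print ([(+ 1), (* 10)] <*> [1, 2, 3]) -- [2,3,4,10,20,30]
  -- Combining both: all pairwise sums of two lists.
  print ((+) <$> [1, 2] <*> [10, 20])   -- [11,21,12,22]
```

The list instance interprets <*> as "all combinations", which is why the second result has six elements.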
https://www.oreilly.com/library/view/haskell-cookbook/9781786461353/7196315a-500e-491c-9288-c49b08c790fb.xhtml
|
Relieve your creative blockages with these interactive desktop reminders.
Brown Note is a desktop notes application written in Python, using PyQt. The notes are implemented as decoration-less windows, which can be dragged around the desktop and edited. The note details and their positions on the desktop are stored in a SQLite file database, via SQLAlchemy, and are restored on each session.
Data model
The storage of user notes in the app is handled by a SQLite file database via SQLAlchemy, using the declarative_base interface. Each note stores its identifier (id, primary key), the text content with a maximum length of 1000 chars, and the x and y positions on the screen.
Base = declarative_base() class Note(Base): __tablename__ = 'note' id = Column(Integer, primary_key=True) text = Column(String(1000), nullable=False) x = Column(Integer, nullable=False, default=0) y = Column(Integer, nullable=False, default=0)
The creation of database tables is handled automatically at startup, which also creates the database file
notes.db if it does not exist. The created session is used for all subsequent database operations.
engine = create_engine('sqlite:///notes.db') # Initalize the database if it is not already. Base.metadata.create_all(engine) # Create a session to handle updates. Session = sessionmaker(bind=engine) session = Session()
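The article doesn't show the startup step that reads the saved notes back; the same persistence pattern can be sketched with only the standard-library sqlite3 module (so the sketch runs without SQLAlchemy installed — the table and column names below simply mirror the model above, and on startup each returned row would become one note window):

```python
import sqlite3

def init_db(path=":memory:"):
    """Create the note table if needed, mirroring the SQLAlchemy model."""
    conn = sqlite3.connect(path)
    conn.execute(
        "CREATE TABLE IF NOT EXISTS note ("
        " id INTEGER PRIMARY KEY,"
        " text VARCHAR(1000) NOT NULL,"
        " x INTEGER NOT NULL DEFAULT 0,"
        " y INTEGER NOT NULL DEFAULT 0)"
    )
    return conn

def save_note(conn, text, x=0, y=0):
    """Insert a note and return its new primary key."""
    cur = conn.execute(
        "INSERT INTO note (text, x, y) VALUES (?, ?, ?)", (text, x, y)
    )
    conn.commit()
    return cur.lastrowid

def load_notes(conn):
    """On startup, each row here would be passed to a note window."""
    return conn.execute("SELECT id, text, x, y FROM note").fetchall()
```

SQLAlchemy's `create_all` call in the article does the equivalent of `init_db` automatically from the model definition.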
Creating new notes
Python automatically removes objects from memory when there are no further references to them. If we create new objects but don't assign them to a variable outside of the scope (e.g. a function), they will be deleted automatically when leaving the scope. However, while the Python object will be cleaned up, Qt/C++ expects things to hang around until explicitly deleted. This can lead to some weird side effects and should be avoided.
The solution is simple: just ensure you always have a Python reference to any PyQt object your creating. In the case of our notes, we do this using a
_ACTIVE_NOTES dictionary. We add new notes to this dictionary as they are created.
_ACTIVE_NOTES = {}
The
MainWindow itself handles adding itself to this list, so we don't need to worry about it anywhere else. This means when we create a callback function to trigger creation of a new note, the slot to do this can be as simple as creating the window.
def create_new_note(obj=None): MainWindow(obj)
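This reference-keeping trick can be demonstrated without Qt at all. The sketch below (my own, hypothetical names) uses weakref to show that an object registered in a module-level dict survives garbage collection, while an unregistered one is collected as soon as its creating function returns:

```python
import gc
import weakref

_ACTIVE = {}

class Window:
    """Stand-in for a PyQt window object."""
    def __init__(self, key, register=True):
        if register:
            _ACTIVE[key] = self  # keep a Python reference alive

def make(key, register):
    # The local variable w goes out of scope when this function returns,
    # so only the _ACTIVE entry (if any) keeps the object alive.
    w = Window(key, register)
    return weakref.ref(w)

kept = make("a", register=True)
lost = make("b", register=False)
gc.collect()

alive = kept() is not None   # registered window still exists
dead = lost() is None        # unregistered window was collected
```

With a real PyQt window, the "dead" case is worse than a silent collection: Qt may still try to use the vanished object, hence the _ACTIVE_NOTES dictionary.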
The note widget (a QMainWindow)
The notes are implemented as
QMainWindow objects. The main in the object name might be a bit of a misnomer, since you can actually have as many of them as you like.
The design of the windows was defined first in Qt Designer, so we import this and call self.setupUi(self) to initialize. We also need to add a couple of window hint flags to the window to get the style & behaviour we're looking for — Qt.FramelessWindowHint removes the window decorations and Qt.WindowStaysOnTopHint keeps the notes on top.
class MainWindow(QMainWindow, Ui_MainWindow): def __init__(self, *args, obj=None, **kwargs): super(MainWindow, self).__init__(*args, **kwargs) self.setupUi(self) self.setWindowFlags(self.windowFlags() | Qt.FramelessWindowHint | Qt.WindowStaysOnTopHint) self.show()
To complete the setup for notes we need to either store the existing Note object (from the database) or create a new one. If we're starting with an existing note we load the settings into the current window; if we've created a new one we save it to the database.
This initial save just stores the position + an empty string. On a subsequent load we would have the default empty note.
# Load/save note data, store this notes db reference. if obj: self.obj = obj self.load() else: self.obj = Note() self.save() self.closeButton.pressed.connect(self.delete_window) self.moreButton.pressed.connect(create_new_note) self.textEdit.textChanged.connect(self.save) # Flags to store dragged-dropped self._drag_active = False
We define a method to handle loading the content of a database
Note object into the window, and a second to save the current settings back to the database.
Both methods store to
_ACTIVE_NOTES even though this is redundant once the first storage has occurred. This is to ensure we have a reference to the object whether we're loading from the database or saving to it.
def load(self): self.move(self.obj.x, self.obj.y) self.textEdit.setHtml(self.obj.text) _ACTIVE_NOTES[self.obj.id] = self def save(self): self.obj.x = self.x() self.obj.y = self.y() self.obj.text = self.textEdit.toHtml() session.add(self.obj) session.commit() _ACTIVE_NOTES[self.obj.id] = self
The last step to a working notes application is to handle mouse interactions with our note windows. The interaction requirements are very basic — click to activate and drag to reposition.
The interaction is managed via three event handlers
mousePressEvent,
mouseMoveEvent and
mouseReleaseEvent.
The press event detects a mouse down on the note window and registers the initial position.
def mousePressEvent(self, e):
    self.previous_pos = e.globalPos()
The move event is only active while the mouse button is pressed and reports each movement, updating the current position of the note window on the screen.
def mouseMoveEvent(self, e):
    delta = e.globalPos() - self.previous_pos
    self.move(self.x() + delta.x(), self.y() + delta.y())
    self.previous_pos = e.globalPos()
    self._drag_active = True
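The delta arithmetic in mouseMoveEvent can be checked without Qt; here plain (x, y) tuples stand in for QPoint:

```python
def apply_drag(window_pos, previous_pos, current_pos):
    """Return the new window position after one mouse-move step."""
    dx = current_pos[0] - previous_pos[0]
    dy = current_pos[1] - previous_pos[1]
    return (window_pos[0] + dx, window_pos[1] + dy)


# Mouse moved 15px right and 5px up, so the window follows.
print(apply_drag((100, 100), (10, 10), (25, 5)))  # (115, 95)
```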
Finally, the release event takes the end position of the dragged window and writes it to the database by calling
save().
def mouseReleaseEvent(self, e):
    if self._drag_active:
        self.save()
        self._drag_active = False
The delete note handler shows a confirmation message, then handles deletion of the
Note object from the database via the session. The
final step is to close the window and delete the reference to it from
_ACTIVE_NOTES. We do this by id, allowing us to delete the Python object reference after the Qt object has been deleted.
def delete_window(self):
    result = QMessageBox.question(
        self,
        "Confirm delete",
        "Are you sure you want to delete this note?",
    )
    if result == QMessageBox.Yes:
        note_id = self.obj.id
        session.delete(self.obj)
        session.commit()
        self.close()
        del _ACTIVE_NOTES[note_id]
Theming the notes
We want to add a bit of colour to our notes application and make them stand out on the desktop. While we could apply the colours to each element (e.g. using stylesheets), since we want to affect all windows there is a simpler way: setting the application palette.
First we create a new palette object with
QPalette(), which will contain
the current application palette defaults. Then we can override each colour
in turn that we want to alter. The entries in a palette are identified
by constants on
QPalette; see the Qt documentation on QPalette for a full list.
app = QApplication([])
app.setApplicationName("Brown Note")
app.setStyle("Fusion")

# Custom brown palette.
palette = QPalette()
palette.setColor(QPalette.Window, QColor(188, 170, 164))
palette.setColor(QPalette.WindowText, QColor(121, 85, 72))
palette.setColor(QPalette.ButtonText, QColor(121, 85, 72))
palette.setColor(QPalette.Text, QColor(121, 85, 72))
palette.setColor(QPalette.Base, QColor(188, 170, 164))
palette.setColor(QPalette.AlternateBase, QColor(188, 170, 164))
app.setPalette(palette)
Starting up
When starting up we want to recreate all our existing notes on the desktop. We can do this by querying the database for all
Note objects, and then
creating a new MainWindow object for each one. If there aren't any we just
create a blank note.
existing_notes = session.query(Note).all()
if len(existing_notes) == 0:
    create_new_note()
else:
    for note in existing_notes:
        create_new_note(obj=note)

app.exec_()
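The branch above reduces to a small load-or-create routine. A Qt-free sketch (the callback stands in for create_new_note):

```python
def startup(existing_notes, create_note):
    """Create one window per stored note, or a single blank note."""
    if len(existing_notes) == 0:
        create_note(None)  # blank note
    else:
        for note in existing_notes:
            create_note(note)


created = []
startup([], created.append)
print(created)  # [None]

created = []
startup(["note-a", "note-b"], created.append)
print(created)  # ['note-a', 'note-b']
```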
Want to build your own apps?
Then you might enjoy this book! Create Simple GUI Applications with Python & Qt is my guide to building cross-platform GUI applications with Python. Work step by step from displaying your first window to building fully functional desktop software.
Further ideas
The text editor is handled using a
QTextEdit widget in plain text mode, meaning there is no support for rich text. Support for basic formatting (bold, italic, etc.) could be added by enabling rich text mode and loading/saving HTML to the database. Checks would be needed to ensure formatted text doesn't exceed the database row size or lose closing tags.
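One way to implement the size check mentioned above. The column limit here is an assumption for illustration, not something the article specifies:

```python
MAX_TEXT_LENGTH = 65535  # assumption: a TEXT-like column limit


def can_save(html: str) -> bool:
    """Refuse to store formatted text larger than the column allows."""
    return len(html) <= MAX_TEXT_LENGTH


print(can_save("<p>short note</p>"))          # True
print(can_save("x" * (MAX_TEXT_LENGTH + 1)))  # False
```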
|
https://www.twobitarcade.net/article/post-it-notes-python-qt/
|
CC-MAIN-2019-09
|
refinedweb
| 1,339
| 58.38
|
libssh2_knownhost_addc - add a known host
#include <libssh2.h>

int libssh2_knownhost_addc(LIBSSH2_KNOWNHOSTS *hosts,
                           char *host, char *salt,
                           char *key, size_t keylen,
                           const char *comment, size_t commentlen,
                           int typemask,
                           struct libssh2_knownhost **store);
Adds a known host to the collection of known hosts identified by the 'hosts' handle.

host is a pointer to the host name in plain text or hashed. If hashed, it must be provided base64 encoded. The host name can be the IP numerical address of the host or the full name. If you want to add a key for a specific port number for the given host, you must provide the host name like '[host]:port' with the actual characters '[' and ']' enclosing the host name and a colon separating the host part from the port number. For example: "[host.example.com]:222".

salt is a pointer to the salt used for the host hashing, if the host is provided hashed. If the host is provided in plain text, salt has no meaning. The salt has to be provided base64 encoded with a trailing zero byte.

key is a pointer to the key for the given host.

keylen is the total size in bytes of the key pointed to by the key argument.

comment is a pointer to a comment for the key.

commentlen is the total size in bytes of the comment pointed to by the comment argument.

typemask is a bitmask that specifies format and info about the data passed to this function. The host name is given as one of these types: LIBSSH2_KNOWNHOST_TYPE_PLAIN, LIBSSH2_KNOWNHOST_TYPE_SHA1 or LIBSSH2_KNOWNHOST_TYPE_CUSTOM. The key is encoded using one of the following encodings: LIBSSH2_KNOWNHOST_KEYENC_RAW or LIBSSH2_KNOWNHOST_KEYENC_BASE64. The key is using one of these algorithms: LIBSSH2_KNOWNHOST_KEY_RSA1, LIBSSH2_KNOWNHOST_KEY_SSHRSA or LIBSSH2_KNOWNHOST_KEY_SSHDSS.

store should point to a pointer that gets filled in to point to the known host data after the addition. NULL can be passed if you don't care about this pointer.
Returns a regular libssh2 error code, where negative values are error codes and 0 indicates success.
Added in libssh2 1.2.5
libssh2_knownhost_init(3) libssh2_knownhost_free(3) libssh2_knownhost_check(3)
|
http://huge-man-linux.net/man3/libssh2_knownhost_addc.html
|
CC-MAIN-2018-05
|
refinedweb
| 315
| 64.51
|
SYNOPSIS
#include <sys/types.h>
#include <utime.h>
int utime(const char *pathname, const struct utimbuf *times);

DESCRIPTION
The utime() function sets the access and modification times of the file
named by pathname. If times is not NULL, it is interpreted as a
pointer to a utimbuf structure, and the access and
modification times are set to the values in the designated structure.
Only the owner of the file and processes with appropriate privileges can
use the utime() function in this way.
PARAMETERS
- pathname
Points to a path name that names a file.
- times
Points to the utimbuf structure containing the new access and modification times (may be NULL to set both to the current time).
CONFORMANCE
POSIX.1 (1996).
MULTITHREAD SAFETY LEVEL
Async-signal-safe.
PORTING ISSUES
Only the NTFS file system supports full file times. Refer to File Systems in the Windows Concepts chapter of the MKS Toolkit UNIX to Windows Porting Guide.
On Windows NT/2000/XP/2003/Vista,
|
http://www.mkssoftware.com/docs/man3/utime.3.asp
|
crawl-001
|
refinedweb
| 118
| 60.72
|
Dave,

> > 1.) Namespaces v. Python Modules
> >
> > All code in ITK is in a C++ "itk" namespace. We'd like to have this
> > namespace reflected in the python wrappers with code like this:
> >
> > # Load the ITK python wrappers.
> > import itk
> >
> > # Create an instance of an ITK object.
> > # Equivalent C++ code:
> > # itk::Object::Pointer o = itk::Object::New()
> > o = itk.Object.New()
>
> What's the problem? Maybe it's the fact that we don't have static
> function support in Boost.Python yet?

Oops, I didn't finish this section of the email. I meant to point out that I don't see a way to have nested namespaces treated as modules in Python through BPL:

# itk::foo::Bar()
itk.foo.Bar()

Even if it were not possible to separately load this "itk.foo" module, it would still be nice from a naming perspective.

> I understand why you want this, but for this /particular/ case I'd
> suggest that an interface like
>
> o = itk.Object()
>
> would be more appropriate. Why expose to users that there's a factory
> function at work here?

I agree that this particular case is prettier when hiding the New() method, but there is also something to be said for a one-to-one correspondence among C++, Tcl, and Python implementations of the same program. Also, CABLE is intended to be a tool separate from ITK, and should not have ITK-specific hacks in it. Then, in order to achieve this syntactic change there would have to be a way of specifying it in the configuration file. One of my design goals for CABLE was to produce wrappers that reflect the original C++ as closely as possible, and with as few configuration options as possible.

This actually brings up the other main issue with automatic generation. Whenever a return_value_policy is required, CABLE will have to make some assumptions, and probably just always use reference_existing_object. Anything different would require per-method configuration, which already starts to defeat the purpose of auto-generation of wrappers.
> If you must have the static function interface, we just need to have a
> discussion of the C++ interface questions I pose in the link above, so
> that I can come up with a good design.

I'd definitely prefer the def(..., static_()) approach over having static_def. Here is another approach that crossed my mind:

struct A
{
  static void StaticMethod();
};

class_<A>("A", init<>())
  .def("StaticMethod", static_(&A::StaticMethod));

I'm not sure I'd prefer this over

class_<A>("A", init<>())
  .def("StaticMethod", &A::StaticMethod, static_());

but I thought I'd mention it anyway.

> Aha. I've considered doing something like this in the past, but I
> thought that tracking every object in this way *by default* would be a
> price that not all users would be willing to pay.

In Tcl the tracking is more necessary because the wrapper objects each have a corresponding Tcl command used to invoke methods on that object. CABLE needs to keep track of all the commands it creates and destroys to keep this working. The tracking isn't as necessary for Python, though, so you're right that the extra cost may not be worth it.

> There are also issues of base/derived class tracking (what happens if
> you're tracking the base object and someone returns a pointer to the
> derived object, which has a different address?) which make doing it
> right especially difficult.

I went through the same thought process for CABLE's Tcl wrappers. At the time I decided to go for the pointer comparison and ignore the problems to get something working. I have yet to thoroughly revisit the issue. However, I've toyed with the idea of automatic down-casting of pointers to polymorphic C++ types.

-Brad
|
https://mail.python.org/pipermail/cplusplus-sig/2002-November/002117.html
|
CC-MAIN-2014-10
|
refinedweb
| 625
| 63.59
|
As of WordPress 5.0, Gutenberg comes built-in. In this post, I'll give you the basics of what Gutenberg is, why it's awesome, and how to set up your environment to start creating your own custom Gutenberg blocks. While at least some knowledge of React will be useful, it is not totally required.
Before getting into building custom Gutenberg blocks, I think it will be helpful to know what Gutenberg is. It may also be useful to understand the history of the editor and why WordPress added it to their core codebase. Without further ado, let's get into it!
What is Gutenberg?
Before WordPress 5.0, users were able to edit their content using a WYSIWYG (which stands for "What You See Is What You Get") editor. This allowed content creators to write blog posts and static pages with no coding skills. At the same time, it also severely limited what they could do with their site. The theme would control what the header and footer looked like, but for any sort of custom layout, a developer would have to create a custom template and hardcode stuff in (bad) or do a bunch of crazy stuff to make things more changeable for the user (also bad).
In 2011, the Advanced Custom Fields plugin was released which made a lot of these things easier. It allows developers to create custom fields for a given content type (post or page) and then render them in a template with minimal code. It makes custom templates for a home page or other special pages much easier to change for both developers and end-users. This has been my go-to for years now and it's been a great experience. I've even used it when creating sites with WordPress and Gatsby!
While this solution is still a great solution and offers many different use cases, I have been using Gutenberg to build sites lately. As I mentioned before, Gutenberg now comes built-in to WordPress as the default editor although it started out as a plugin. So why did it get added to core? I presume it's largely an effort to keep up with site-builders such as SquareSpace and Wix.
What are Gutenberg blocks?
Gutenberg (named after Johannes Gutenberg who invented the first printing press) allows users to select pre-styled sections, or "blocks", for each page and fill in the content. This makes for a much more fluid user experience when creating pages or blog posts. WordPress provides some default blocks which will probably work for a lot of casual users, but what if you need a special block for a particular page or you want a block with some different styles?
Rest assured, it is totally possible to create custom blocks. I will admit: at this time some of the documentation isn't great for creating blocks but hopefully this post will help anyone getting started with Gutenberg to have a better understanding of the block development process.
Blocks in the theme or module?
Pretty much all of the tutorials I have seen about block creation address doing so in a plugin. In addition, many of them are creating a plugin for a single block. By following these tutorials, you'd need 30 separate plugins if you needed 30 custom blocks! I have created multiple blocks in a plugin and can definitely see the value in doing so if you have a lot of existing sites to add those blocks to. Doing so would allow you to update the module, push it to a remote git repository, then pull your changes into whatever sites needed the update.
When I was searching the other day, I couldn't find any tutorials that explained how to set up custom blocks as a part of a theme. I believe there are some benefits to having the blocks in a theme rather than a plugin though, including (but not limited to) less dependencies to manage, keeping proprietary code for blocks specific to a website private, and not having to worry about a user accidentally disabling the plugin and breaking things.
Custom Gutenberg block theme setup
When I'm building a new WordPress site, I tend to use the Underscores theme which is made by Automattic. It's a starter theme with very minimal styling. Although it can be downloaded with Sass structures in place, there is not a bundling tool included. I will be using Gulp to allow me to write jsx in my custom blocks. Before you can start developing the custom blocks, you need to add some code to the theme to handle it.
Blocks directory for custom blocks
To help keep things organized, I like to place all of my custom blocks into a directory in the root of my theme called
blocks. This directory can be called whatever you like, but I'd recommend naming it something that is easily recognizable as custom blocks. In my case, the following command will create the directory:
# terminal
$ mkdir blocks
Now that my blocks directory has been created, I need to create a php file inside which will enqueue my blocks and register my custom block types. I usually give mine the appropriate name of
blocks.php though, again, you can call this whatever you like. The following command will create the file in my blocks directory and open it in the default code editor:
# terminal
$ touch blocks/blocks.php && open $_
Create a function to register custom gutenberg blocks
The first thing you need to do in your blocks.php file (after the opening php tags) is create a function which will take care of adding the block scripts as well as registering the custom block type. I'll take this step-by-step so it's easy to follow. The empty function should look like this:
<?php
// blocks/blocks.php

/**
 * Enqueue scripts for custom blocks
 */
function custom_block_scripts() {
    // Do something...
}
add_action('enqueue_block_assets', 'custom_block_scripts');
After creating the function, you'll use a hook to call the function. Since adding Gutenberg to WordPress core, a new hook has been added called
enqueue_block_assets which exists exactly for this purpose.
Enqueue the scripts and styles for the custom blocks
The next thing you need to do is include the scripts for the custom blocks you're creating. This can be done using
wp_enqueue_script() just like you'd do in a custom theme. This should go inside the
custom_block_scripts() function like so:
<?php
// blocks/blocks.php

/**
 * Enqueue scripts for custom blocks
 */
function custom_block_scripts() {
    // Add the block script. The file path and version below are
    // placeholders -- point them at your compiled block bundle.
    wp_enqueue_script(
        'custom-block-scripts',
        get_template_directory_uri() . '/blocks/dist/blocks.js',
        array( 'wp-blocks', 'wp-components', 'wp-element', 'wp-editor' ),
        '1.0.0',
        true
    );
}
add_action('enqueue_block_assets', 'custom_block_scripts');
In the code above, you may notice that I have listed an array of dependencies. This is required for any WordPress components you want to use in your blocks. The ones I have listed here are the ones I find myself using most often. A full list of available packages can be found in the Block Editor Handbook. At a minimum, you need
wp-blocks to register a block. The rest of the
wp_enqueue_script() function should look pretty familiar if you've done theme development before. In case you haven't, here's a quick breakdown of the arguments:
<?php
// wp_enqueue_script()
wp_enqueue_script(
    $nickname,
    $location,
    $dependencies,
    $version,
    $in_footer
);
Register the actual custom block types
Now that you have the scripts added, you need to use
register_block_type() to tell WordPress what to do with the code. It should be noted that the
$args array will use the nickname you chose in the previous step to identify the script or styles you want to use. Again, WordPress added a custom function to do this called
register_block_type() with the following arguments:
<?php
// register_block_type()
register_block_type( $namespace, $args );
Based on the way you have set up the blocks so far, this is how your
register_block_type() function will look:
<?php
// register_block_type()
register_block_type(
    'iamtimsmith/blocks',
    array(
        'editor_script' => 'custom-block-scripts', // The script you enqueued earlier
    )
);
The code above should go in the same
custom_block_scripts() function where you are enqueuing your scripts. After you have set this up, your custom function should look like this:
<?php
// blocks/blocks.php

/**
 * Enqueue scripts for custom blocks
 */
function custom_block_scripts() {
    // Add the block script (placeholder path/version, as before).
    wp_enqueue_script(
        'custom-block-scripts',
        get_template_directory_uri() . '/blocks/dist/blocks.js',
        array( 'wp-blocks', 'wp-components', 'wp-element', 'wp-editor' ),
        '1.0.0',
        true
    );

    // Register custom block types
    register_block_type(
        'iamtimsmith/blocks',
        array(
            'editor_script' => 'custom-block-scripts',
        )
    );
}
add_action('enqueue_block_assets', 'custom_block_scripts');
Telling functions.php about the custom blocks
The final step for registering blocks in your theme is to add a call to the
functions.php file. This will simply tell your theme that the file exists in the blocks directory and the content should be pulled in. While this step is relatively easy, it is also required for this to work. If you are running into issues with your custom blocks not showing up at all, I'd double check and make sure you added the call to your
functions.php file. Adding the code below will tell your theme about the registered custom blocks:
<?php
// functions.php

/**
 * Add custom blocks for gutenberg
 */
require get_template_directory() . '/blocks/blocks.php';
Although it doesn't matter where in your
functions.php file you place the code, I tend to put it at the bottom. Especially if you're using the underscores theme, it helps to keep your code separated from the default theme code.
Wrapping Up
That's as much as I'm going to cover in this article. You have now registered the namespace and scripts where your custom blocks will live. In the next post in the series, I'll be going over a gulp setup which allows you to use JSX when building your custom blocks. Using JSX makes blocks easier to read and can make your life easier as a developer. If you're not familiar with gulp, I'll teach you some basics to get your custom blocks up and running and provide you with a jumping off point to add more optimizations.
Have thoughts or questions? You can reach me on Twitter at @iam_timsmith.
Discussion (3)
Hey Tim. Great article! :) Small question: How do you add the "Originally published at iamtimsmith.com on Oct 05, 2019" line to your posts?
I think it’s because I have the canonical url set in the front matter. Other than that, I’m not sure. It comes from my RSS feed and it’s just there.
Ok... i have the canonical url set as well. I'll have a look at the rss feed option then. Thanks :)
|
https://practicaldev-herokuapp-com.global.ssl.fastly.net/iam_timsmith/creating-custom-gutenberg-blocks-with-react-and-wordpress-part-1-1njb
|
CC-MAIN-2021-17
|
refinedweb
| 1,699
| 70.53
|
ersmith wrote: »
... and to add a "Port" menu that will let you specify COM1 to COM9 if you want to override the default behavior of having the program search for the port. ...
jmg wrote: »
ersmith wrote: »
... and to add a "Port" menu that will let you specify COM1 to COM9 if you want to override the default behavior of having the program search for the port. ...
Can that (easily?) support above COM9 ? eg mine added as COM19
David Betz wrote: »
It should be possible to enumerate all COM ports that actually exist. That way you could include every port in the menu even though some might have very high numbers. It is very likely that the actual list of ports is a very sparse list.
pilot0315 wrote: »
Got it working with the help of dgatley. I have to press reset so as to load a different program. Now going to attempt to write a counting program. Any one have a quick example of printing to the serial terminal?
Thanks
' hellop2.bas
' P2 hello world program in BASIC
const _clkmode = 0x010c3f04
const _clkfreq = 160_000_000
clkset(_clkmode, _clkfreq)
_setbaud(2000000)
do
print "hello from P2!"
loop
pilot0315 wrote: »
In P2asm??
pilot0315 wrote: »
Is C or C++ available?
pilot0315 wrote: »
Thank you for your help. I can do Spin and Prop C (old days: Fortran, BASIC and some COBOL; I am dating myself with the old IBM 1130, 360/370 and HP 2000C). I will follow your directions and let you know what happens. Just checked: those examples are not in the samples for 1.3.0. I will look for a later version.
Martin
CON 'RJA: new for real P2 - you can use different xdiv and xmul to set clock frequency: /10*125 -> 250 MHz
_XTALFREQ = 20_000_000 ' crystal frequency
_XDIV = 10 ' crystal divider to give 1MHz
_XMUL = 125 '
CON
intensity = 80 '0..128
fclk = _CLOCKFREQ 'RJA: Adjusted for real P2 '80_000_000.0
fpix = 25_000_000.0
fset = (fpix / fclk * 2.0) * float($4000_0000)
vsync = 4 'vsync pin 'RJA: changed for real P2
DAT org
'
'
' Setup
'
'+-------[ Set Xtal ]----------------------------------------------------------+
' RJA: New for real P2
hubset #0 ' set 20MHz+ mode
hubset ##_SETFREQ ' setup oscillator
waitx ##20_000_000/100 ' ~10ms
hubset ##_ENAFREQ ' enable oscillator
'+-----------------------------------------------------------------------------+
pri clkset(mode, freq)
CLKFREQ := freq
CLKMODE := mode
asm
hubset #0
hubset mode
waitx ##20_000_000/100
add mode, #3
hubset mode
endasm
David Betz wrote: »
I naively cloned spin2gui and just tried running the following command on the Mac:
wish spin2gui.tcl
This brings up the GUI but none of the button labels appear. Is there some other step required to run spin2gui on the Mac?
wish spin2gui.tcl
/Applications/Utilities/Wish.app/Contents/MacOS/Wish Spin2gui.tcl
ke4pjw wrote: »
Does fastspin support local labels? Some of my code will not compile that has local labels.
fastspin -2 -H 0x10000 myprog.spin
fastspin -2 -H 0x10000 -E myprog.spin
David Betz wrote: »
Another question: How do I compile a program that has more than one source file? Do I just include all of the source file names on the same command line?
#include <stdio.h>
int main(void)
{
printf("Hello, world!\n");
return 0;
}
fastspin -2 -I include -b hello.c
loadp2 -p /dev/tty.usbserial-P2EEQXU hello.binary -b 115200 -t
Do you know what com# is assigned to the USB to SERIAL device when you plug in your P2-EVAL?
I think loadp2.exe only looks for up to com19
As always, spin2gui is available for download at:
Grab the most recent release there (1.3.1 at the time of this writing).
Can that (easily?) support above COM9 ? eg mine added as COM19
The menu doesn't, but the command used to run programs on the P2 is completely configurable (via the Command > Configure Commands... menu). So all you have to do to use COM19 is to replace the "%P" string in that command with "-p COM19". ("%P" stands for "the com port selected by the menu")
I was dreading trying to do that, but I found some tcl code that is supposed to do the enumeration. It works on my Win10 machine and on Linux. @jmg, do you want to try the new spin2gui release (v1.3.2) and see if that finds your device on com19?
Thanks
There is a samples/ folder that comes with spin2gui that has a few sample programs. The simplest hello world program for P2 would look like:
Why not use a high level language if you have one available? Or, if you really want to see P2ASM, you could look at the hellop2.p2asm file that fastspin will produce from the above example, although that's got a lot of library code in it that may not be of interest.
I think Cluso99 posted an assembly language serial demo in one of the P2 Newbies or P2 Eval board samples threads.
Is C or C++ available?
It is my understanding that fastspin is part of the Spin2gui??
Would you please send me a couple of links on how to access the hellop2.p2asm file that is in fastspin. I tried to launch fastspin
autonomously and it just flashed and goes away.
Appreciate the help.
Martin
I would like to use a high level language like C or C++ or maybe SPIN if it is available. In p2asm I would just like to start with simple math and move
up.
Only partially. C support in fastspin is still very incomplete, but you can write simple programs in it. If you prefer C to BASIC, then you can use that, and there's even a serial demo in the samples folder. Here's how to access it:
1. Open spin2gui.exe (version 1.3.0 or later)
2. Go to the File > Open file... menu item
3. Navigate to spin2gui\samples and select cdemo.c.
4. The P2 blinking lights and serial demo C program will open in the spin2gui window.
5. Now push the "Compile & Run" button.
6. A "Propeller Output" window will open that will show "fastspin C demo" and then a lot of lines that say "toggle 0", "toggle 1", etc. You'll also see the P2 LED connected to pin 58 start blinking.
You can use the C program as a start if you want to, although honestly I would recommend using Spin or BASIC. All 3 languages are supported by the fastspin built in to spin2gui, but C is still under development and pretty buggy.
For example, you could repeat the steps above except that in step 3 select "multest2.spin" instead of "cdemo.c" to try out a simple Spin program that uses the SimpleSerial object (in the spin2gui\lib folder) and the multiply object (in the samples\ folder).
To see the P2 assembly language generated by building a program, select the Commands > "Open Listing File..." menu item. This will open up a window showing the assembly language that fastspin generated from your Spin, BASIC, or C program. But really there's no need to look at the P2ASM -- it's totally optional, and just informational.
Martin
Prop OS (also see Sphinx, PropDos, PropCmd, Spinix)
Website:
Prop Tools (Index) , Emulators (Index) , ZiCog (Z80)
Sorry, I misremembered when the cdemo.c sample was added. It's certainly there in spin2gui 1.3.2, and it's probably worth upgrading to that version (for one thing it should fix the problem you ran into with spaces in folder names).
The fastspin BASIC that's used in spin2gui is pretty similar to old style BASICs, except it doesn't have line numbers and it (optionally) allows you to specify types for variables and functions. So it sounds like you're set for working with all of the languages that spin2gui supports
Eric
My first programming was on a FACOM 230/45 (Japanese copy of IBM 360) mainframe,
and the program was read in from punched paper tape. I kept that roll of paper tape
for years until i finally lost it, and yes my first programming language was COBOL.
There is another C compiler that works.
(See thread - Can't Wait for PropGCC on the P2?)
I should have posted it here, but it's in "Microcontrollers" because it can complile for P1 and P2.
Fastspin is working very well for P2ASM and Basic.
The punch card chat has been moved to it's own thread here:
Was just trying SimpleSerial and sometimes it works, sometimes it doesn't. Baud looks wrong...
I think you're supposed to set the clock in a two step process with a delay like this:
Could you post the code you're having trouble with?
EDIT: Just realized what you might be having trouble with: the parameter to clkset() needs to be the _SETFREQ number, not _ENAFREQ.
Slowing it down to 115200 baud fixed it. But an even better fix is to use smartpins for the I/O, for example with the attached SmartSerial.spin2 object.
(Note that if you download spin2gui you do not have to get fastspin separately, as it's already included; the separate fastspin release is for people who just want the command line version, e.g. to use with Spin Edit.)
This has a number of bug fixes, some new demos (e.g. a SmartSerial object that uses the smartpin for serial I/O on P2), and improvements to all 3 languages:
- Spin has a few of the proposed P2 methods (like dirh_, drvh_, and the like)
- BASIC now supports multiple statements per line and changing the default type for variables
- C has an updated <propeller.h> that is more like the PropGCC one
This is a Tcl-Tk issue with macOS Mojave, fixed with the latest Wish or Tcl-Tk versions (8.6.8)...
A work-around for those not wanting to update Tcl-Tk or Wish is to resize the window (works for the text editing window as well as the configuration window). The button labels will appear on forcing a window resize.
BTW: It is difficult to actually update the wish version that resides in /usr/bin/ on macOS Mojave (permissions issue and you can't just 'sudo rm' the older version)... You'll need to get Wish or TclTk and exec them from their newly installed location (kind of a pain). I'm able to run spin2gui.tcl like this:
dgately
Try the FTDI 245 and the FullDuplexParallel Object.
Check out my spin driver for the Parallax "96 x 64 Color OLED Display Module" Product ID: 28087
Local label syntax is different in PASM2 and PASM. In PASM2 local labels start with "." rather than ":".
The fastspin changes are:
- Fixed abort/catch in P2
- Fixed a problem with quotes in strings
- Fixed potential name conflict with labels, and improved optimization of branches as a side benefit
- Added -H and -E flags to fastspin to allow changing HUB start address. For example, `fastspin -2 -H 0x10000 -E foo.bas` compiles a program to load and execute at address 65536. If the `-E` is left off then it still loads at 0, but is padded to start executing at 0x10000.
- Added some more C library functions
@msrobots : I think you'll find -H and -E useful for your experiments with TAQOZ. For example, you can compile a program with -H 0x10000 to move its HUB base address to start at 64K. Note that this is basically equivalent to your earlier trick of manually editing the .p2asm file to change the ORGH. There will still be 64K of padding inserted. To get rid of that padding, add the -E flag: that creates a program that must be loaded at 64K in order to work properly. loadp2 has a -s flag that can do that.
@Cluso99 : I think you could use -H plus -E to create binaries to run in a Spin OS. You'd compile the OS code to run at 0 (as usual) and the disk based programs to run at some offset (like 64K). Dave Hein did something like this for p2gcc as well.
|
http://forums.parallax.com/discussion/comment/1460957
|
CC-MAIN-2019-13
|
refinedweb
| 2,011
| 72.97
|
the result of all of this coming together.
Mimosa and Ember
In January of this year we made the switch from Backbone to Ember. We needed the power up and the team was ready to move on after two years writing a lot of boilerplate. After the initial, expected, much talked about learning curve was conquered, the productivity boosts and code reduction we expected have arrived. When we were getting going, the tooling was already in place for Ember developers to use Mimosa to build their apps.
Several modules catered to doing Ember work.
- An emblem compiler
- An ember-handlebars compiler
- A module that dealt with Ember's r.js build issues
- A transpiler for es6 module syntax which the Ember community adopted very early on
- QUnit testing support which is essential for testing Ember apps
There were also several skeleton apps to get you started with Ember. They usually also include tasks like application packaging and CSS/JS transpiling:
- A port of the peepcode example app which includes CoffeeScript and Emblem compilation among other things
- A GitHub repo browser which included tests via Testem and a dependency bundler written by the skeleton's author
- An example app using Emblem
- A mimosa newapp modified for Ember
- A commonjs/browserify example
Tooling Gaps
I hesitated to attempt to solve any other Ember-specific tooling problems until we had a chance to really get going with a big Ember project. After a while, we began to see some big gaps that could be filled with some smart modules.
The two big problems that needed module solutions were application wiring and Ember-specific testing support.
Wiring Up an Ember App
It is common to assign your Ember.Application instance to a global variable, and then define all of the framework factory classes (controllers, routes, views, etc.) as properties of this object:
window.App = Ember.Application.create();
App.PostsRoute = Ember.Route.extend();
App.PostsController = Ember.ArrayController.extend();
Ember will be able to use the right components of your application at the right time. For instance, when you visit /posts, Ember will resolve the corresponding route, controller, view, and template by looking them up on the App namespace.
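The lookup-by-convention described above can be sketched in a few lines of plain JavaScript. This is a toy illustration of resolving names off a global namespace, not Ember's actual resolver; the function and object names are hypothetical.

```javascript
// Toy sketch of convention-based lookup on a global namespace.
// Not Ember's real resolver; names here are illustrative only.
function resolve(app, name, kind) {
  // "posts" + "Route" -> "PostsRoute", looked up on the app namespace
  var key = name.charAt(0).toUpperCase() + name.slice(1) + kind;
  return app[key];
}

var App = {};
App.PostsRoute = { isRoute: true };
App.PostsController = { isController: true };

var route = resolve(App, 'posts', 'Route');           // App.PostsRoute
var controller = resolve(App, 'posts', 'Controller'); // App.PostsController
```

This is why everything must be attached to the namespace: anything the resolver can't find there simply doesn't exist as far as the framework is concerned.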
To achieve this in a modular web application, you can choose to attach your Ember assets from within each individual module, or you can choose to bring all the modules together in one place and attach them all at once.
Bringing the App to the Modules
views/post_view.js
var App = require('app');
App.PostView = Ember.View.extend(...);
In this case we're bringing the Ember application (Ember.Application.create()) to the PostView module.
But this option is semantically wrong. A small component of an application doesn't depend on the larger application; the application depends on it. And something still needs to require/import this view. What would do that?
Bringing the Modules to the App
A better approach is to create an application manifest file (of sorts) where the application and its modules are brought together and wired up. When solving this problem, we created a modules.js file that pulled together all the various Ember assets in one place and attached them to App. Doing this results in individual assets that know nothing about the larger application and are therefore more modular.
views/post_view.js
var PostView = Ember.View.extend(...);
export default PostView;
Here, rather than attaching to App, the view just exports itself. Now anything that needs it (multiple apps? test harness?) can import it without needing the app.
Here's the modules.js file where we pull the application together.
modules.js
import App from 'app';

import PostView from 'views/post_view';
App.PostView = PostView;

import PostController from 'controllers/post_controller';
App.PostController = PostController;

import Post from 'models/post';
App.Post = Post;

...
All the wiring of the various Ember assets occurs in a single place. No bringing the app in as a dependency to every Ember asset.
But, boilerplate much?
Our production app's modules.js has around 600 lines and counting. Whenever a developer creates a new asset, they have to remember to go add it to that file. It's not a huge hurdle, but given how much boilerplate it is, it begs for a tooling solution...
mimosa-ember-module-import
mimosa-ember-module-import was developed to solve the problem of module "manifest" creation. With a trivial amount of config (6 lines in the skeleton's case) you can include this module and get your manifest file autogenerated during mimosa build and mimosa watch.
The module will output a manifest file in AMD format by default, but it also supports spitting out CommonJS. Here's an example manifest file in AMD.
define( function( require ) {
  var _0 = require('./views/post_view');
  var _1 = require('./controllers/post_controller');
  var _2 = require('./models/post');
  var modules = {
    PostView: _0 && (_0['default'] || _0),
    PostController: _1 && (_1['default'] || _1),
    Post: _2 && (_2['default'] || _2),
  };
  return modules;
});
This file can then be imported and used by Ember during app creation a few different ways.
import modules from 'modules';
import App from 'app'; // class not instance
App.createWithMixins(modules);
...or...
import modules from 'modules';
var App = Ember.Application.extend(modules);
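To demystify what a manifest generator of this sort has to do, here is a simplified sketch of turning a list of module paths into the kind of AMD manifest shown earlier. This is not mimosa-ember-module-import's actual source; the helper names and the snake_case-to-CamelCase naming convention are assumptions for illustration.

```javascript
// Sketch of generating an AMD manifest from module paths.
// Hypothetical helpers, not the actual mimosa-ember-module-import code.

// Derive an export name from a path: "views/post_view" -> "PostView"
function exportName(modulePath) {
  var base = modulePath.split('/').pop(); // "post_view"
  return base.split('_')
    .map(function (part) {
      return part.charAt(0).toUpperCase() + part.slice(1);
    })
    .join(''); // "PostView"
}

// Emit the manifest source as a string.
function generateManifest(paths) {
  var requires = paths.map(function (p, i) {
    return "  var _" + i + " = require('./" + p + "');";
  });
  var entries = paths.map(function (p, i) {
    return "    " + exportName(p) + ": _" + i +
      " && (_" + i + "['default'] || _" + i + "),";
  });
  return [
    "define( function( require ) {",
    requires.join("\n"),
    "  var modules = {",
    entries.join("\n"),
    "  };",
    "  return modules;",
    "});"
  ].join("\n");
}

var src = generateManifest(['views/post_view', 'models/post']);
```

The real module of course also watches the filesystem and rewrites the file on change, but the core transformation is just this sort of path-to-name bookkeeping.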
Supports Multiple Apps
Something that is lacking in other tools is the ability to support multiple apps in the same project. We have that case, so it was important that all the tooling solutions supported it. mimosa-ember-module-import supports multiple applications within a single project with a small tweak to the config.
Config with one app
emberModuleImport: {
  apps: [{
    namespace: "blogger",
    manifestFile: "modules",
    additional: ["router"]
  }]
}
Config with two apps
emberModuleImport: {
  apps: [{
    namespace: "blogger",
    manifestFile: "blogger-modules",
    additional: ["router"]
  }, {
    namespace: "admin",
    manifestFile: "admin-modules",
    additional: ["router"]
  }]
}
As you can see, adding more applications is as simple as adding another entry to the array.
Testing an Ember App
mimosa-testem-require and its fork mimosa-testem-qunit both solve a lot of problems that come with writing browser tests. The goal of both is to allow you to Just Write Tests. No need to waste time figuring out how to wire tests up and get them running.
With Ember apps though, there's some additional work to do that is currently beyond those modules' capabilities.
mimosa-ember-test
mimosa-ember-test was created to build on the support provided by the modules created before it.
An important note: this module assumes require.js usage. This module is about wiring together not only Ember's tests and testing support, but doing so in a require.js/AMD application.
Below are some of the features of mimosa-ember-test.
Continued from Previous Modules
The previous Mimosa testing modules included:
- Support for running tests during build and watch
- Built on Testem, a first-class test runner
- Automatic wiring of testing assets into the .mimosa directory, far from your application's code
- Automatic detection and inclusion of tests
- Built-in support for Sinon, Require.js (and QUnit in this module's case)
- Command, mimosa testscript, to autogenerate a script to run the Testem interactive client
- testem ci support
mimosa-ember-test continues all of this and builds on it.
ember-qunit
An increasingly popular library to help you unit test your Ember apps is the aptly named ember-qunit. It introduces helper functions which make writing Ember tests easier. Any top notch Ember testing support needs to include ember-qunit, so any module we created to support Ember testing needed to include ember-qunit in its testing scaffolding.
mimosa-ember-test includes ember-qunit in its test scaffolding and makes its functions globally available.
Built-in Bower Support for Testing Assets
We wanted to make sure it was easy to update testing assets. Previous modules would require you to either update specific test assets (like QUnit or Sinon) if you needed newer versions or submit pull requests to the module repo to update assets.
mimosa-ember-test can, when configured (it is by default), utilize the functionality provided by mimosa-bower to Bower in your test assets. Toss the required dependencies in your bower.json and you are ready to go.
"devDependencies": {
  "qunit": "1.14.0",
  "requirejs": "2.1.14",
  "sinonjs": "1.10.2",
  "ember-qunit": "0.1.8"
}
If Bower isn't your thing, then mimosa-ember-test does come with its own versions of the test assets. Turn Bower-ing of assets off (bowerTestAssets: false) and mimosa-ember-test will copy in the assets it has. The biggest downside to this is that the ember-test module may, over time, fall slightly out of date.
As with previous modules, you can copy in your own assets and tell mimosa-ember-test to not overwrite them.
Multiple Apps
As with the module-import module above, mimosa-ember-test supports multiple apps. It will partition all test assets and scaffolding by app and will run each application's tests separately. Additionally, the mimosa testscript command will kick out scripts capable of running interactive tests for specific apps.
Pulled Together: Ember Skeleton
To show off some of the work we've done and to give ourselves a good starting point for our Ember development, we put together an Ember Skeleton.
Get the Skeleton set up
Just a few steps.
- npm install -g mimosa
- git clone
- cd MimosaEmberSkeleton
- npm install
- mimosa build
mimosa build will pull in all the Mimosa modules not already in Mimosa.
Module Manifest
mimosa build will generate a modules.js file for the app that is configured. Here's that output:
define( function( require ) {
  var _0 = require('./controllers/post_controller');
  var _1 = require('./helpers/helpers');
  var _2 = require('./router');
  var _3 = require('./routes/post_route');
  var _4 = require('./routes/posts_route');
  var modules = {
    PostController: _0 && (_0['default'] || _0),
    Helpers: _1 && (_1['default'] || _1),
    Router: _2 && (_2['default'] || _2),
    PostRoute: _3 && (_3['default'] || _3),
    PostsRoute: _4 && (_4['default'] || _4)
  };
  return modules;
});
The skeleton is also all wired up to use the modules.
require(['common'], function() {
  require(['app', 'blogger/modules'], function(App, modules) {
    window.Blogger = App['default'].createWithMixins(modules);
  });
});
Tests
The skeleton app comes with some tests already written.
mimosa build not only runs a full build of the app, it also, by default, executes the tests CI-style. Here's the messaging from the tests.
17:48:08 - Success - 4 of 4 tests passed for .mimosa/emberTest/tests/testem.json.
ok 1 PhantomJS 1.9 - Acceptances - Posts: displays all recent posts
ok 2 PhantomJS 1.9 - Unit - PostController: #init
ok 3 PhantomJS 1.9 - Unit - PostController: #edit
ok 4 PhantomJS 1.9 - Unit - PostController: #doneEdit
1..4
# tests 4
# pass 4
# fail 0
# ok
If you run mimosa testscript, you can get a script generated that, when run, will execute the interactive Testem UI.
About require.js
One of the biggest gripes about require.js is the syntax. I hope to address this in a future blog post, but this skeleton is an example of being able to use the best of require.js without a lot of the cruft.
- You aren't coding AMD. It's pure ES6.
- Mimosa understands require.js so that in many cases you do not. Nowhere is this more true than with an Ember app and its very simple dependency tree
- require.js allows you to only concatenate when you build. You can develop with individual assets. It doesn't matter how far source maps have come along, developing with optimized assets just isn't ideal.
- Mimosa manages configuration for concatenation for you. It can figure out most of the configuration. In the case of the skeleton, it just needs a little help to swap in the production version of ember.js
Everything Else, Ember and Not
This skeleton includes plenty of other modules, some of which enable Ember, some which do not. I won't run down them all, but here are the highlights.
- It incorporates ember-handlebars template compilation which, like all Mimosa template compilers, will concatenate all the templates into a single file. Multiple file support is just a few config params away
- The es6-module-transpiler is included and most modules are coded using ES6 module syntax which is compiled to AMD.
- Bower support is included and all vendor and test assets are Bower-ed in.
- JavaScript is hinted and the .jshintrc is already set up to expect all of QUnit and ember-qunit's global variables.
- SASS!
- An Express server complete with a module that will bounce that server when server code changes. (A server isn't necessary, just included)
- Live Reload with no browser plugins
- Concatenation of vendor CSS
- Minification of JS, CSS and images.
- Packaging support, build your application with deployment in mind.
And this is just what is included in this starter skeleton. Mimosa has plenty of other modules available.
So Little Config
If Ember is your thing, then so are conventions. By sticking to a few simple conventions you can wire your app up with very little configuration. The skeleton has all of 93 lines of config, and 30 of them are a very spaced-out, commented array. For what this skeleton can do, that's a tiny amount of configuration.
Why Mimosa for Your Ember Application?
Besides the support listed above, in general, why Mimosa?
General Purpose but Customizable
I mentioned we moved from Backbone to Ember. That was a big change for our team. One thing that remained constant after the transition was Mimosa. Before making the switch ourselves, plenty of folks had been using Mimosa for their Ember apps, so all the support we needed to get going was already available.
Mimosa is multi-purpose. It doesn't have anything to do with Ember, but modules can be built to solve any problem set, including Ember's.
A Single Tool
If you need to add Grunt to your Mimosa app, it isn't because Mimosa isn't capable of doing what you need, it is because the module you need hasn't been built. (Try filling the gap? Or bring up your need?)
Stable and Supported
Mimosa may be new to you, but Mimosa isn't new. It predates many of the other tools. And I'm not going anywhere!
Try it out!
If you have any feedback, hit me up.
If you have any ideas about how Mimosa's Ember support could be better, hit me up.
Thanks!
Special thanks to Estelle DeBlois for her help building the above modules and giving this post a thorough sanity check. Her help has been invaluable!
And thanks to my team at Berico for patiently waiting on the above modules to land!
http://dbashford.github.io/enhanced-ember-mimosa-tooling-and-a-new-ember-skeleton/index.html
#include <Wire.h>
#include <LiquidCrystal_I2C.h>

LiquidCrystal_I2C lcd(0x19,16,2); // set the LCD address to 0x19 for a 16 chars and 2 line display

void setup()
{
  lcd.init(); // initialize the lcd
  // Print a message to the LCD.
  lcd.backlight();
  lcd.print("Hello, world!");
}

void loop()
{
}
That library will not support that LCD; if you look at the datasheet, it seems to have a custom driver that is not Hitachi driver compliant.
What about the library listed on this page for New Haven (LCDi2cNHD) displays? It seems they also use the PIC 16F690 interface.
Considering that, what are my other options for driving this LCD with the fewest possible wires?
An easy test to do and it would be nice to know if it works.
http://forum.arduino.cc/index.php?topic=103998.msg780802
A little down to earth evaluation of Elm
This is a reaction to the 10 reasons why you should give Elm a try article. I recommend you read it, since it provides a nice introduction to the features of the language. But as with any of these articles that praise a particular technology, they usually don't tell you about things that are not so great. So I just took these 10 reasons and added some "not so good" comments to them.
1 — Elm is fast and easy to learn
The docs are nice, but lots of times they are very sparse; more examples would be nice. E.g. they tell you the type of Time.now and what it does, but there is no example of how to wire it up in your app, which is a nontrivial task for a beginner.
The official guide, at the time when I read it, contained a lot of syntax errors in the examples. It is probably much better now, but it still seems quite sparse and not going deep enough. Especially about how to structure your app, there are only 2 short examples, so you are left with learning on your own by browsing other people’s code.
Generally, anything other than the simplest examples is missing from the official documentation/guides. Also, there were recently sizable breaking changes which made a lot of the material on the internet outdated.
Elm might be easy, but ATM the learning material does not provide a fast way to learn the language.
2 — Elm syntax is compact and expressive
The semi-official style guide uses a lot of newlines, and The Elm Architecture requires a decent amount of boilerplate.
I don’t really see this as a problem, but I would not call it compact.
3 — Pipes are beautiful
Yes they are! It is much nicer and cleaner way to do chaining than what is happening in JavaScript world.
Btw github.com/mindeavor/es-pipeline-operator
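For readers coming from JavaScript, the effect of Elm's `|>` can be approximated with a small hand-rolled helper. This is a sketch for comparison only, not part of Elm or the linked pipeline-operator proposal:

```javascript
// Minimal left-to-right pipe, approximating Elm's |> operator:
// pipe(x, f, g) is g(f(x)).
function pipe(value) {
  var fns = Array.prototype.slice.call(arguments, 1);
  return fns.reduce(function (acc, fn) { return fn(acc); }, value);
}

// Elm:  "hello elm" |> String.toUpper |> String.words
var words = pipe(
  'hello elm',
  function (s) { return s.toUpperCase(); },
  function (s) { return s.split(' '); }
);
// words is ['HELLO', 'ELM']
```

The point of `|>` is that the data flows top to bottom in reading order, instead of nesting calls inside out.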
4 — The Elm Architecture (TEA) is the current best way to do front end development.
Pretty much agree.
5 — Elm documentation is wonderful
Not quite there yet, see point 1.
6 — Forget about runtime exceptions
Exceptions are inexistant
Again, not quite there yet: there is at least one compiler bug that I know of, which bit me twice and produces a runtime exception.
7 — Fast to compile, fast to execute
Maybe I produced some code which the compiler doesn't handle well, but a mid-sized 7k LoC program takes about 3–10s to compile, and I am just talking about the incremental recompilation which compiles only whatever changed, not the whole program. If Elm did not have incremental compilation, compile time would be unbearable.
I don’t have the fastest computer, but this definitely doesn’t seem fast.
8 — Everything is a pure function, so every test is a unit test !
If you are used to Capybara or PhantomJs, you can just … forget about all that
I guess this talks about Selenium tests? Lots of times everything compiles and unit tests pass and your program still doesn't work as expected, because you forgot to wire something up or update something in the update function. So I wouldn't say Selenium tests are a thing of the past. But on the other hand, tracing and fixing these bugs is really easy.
Testing is definitely much easier and nicer than in JavaScript.
9 — Elm is used in the best front end intensive projects I have seen
This is just a massive overstatement; there are far more complex apps in the JavaScript world, which isn't really surprising when we take into account how much older JavaScript is.
I never saw this level of ambition in vanilla JavaScript apps.
Really?
10 — Elm is fun
Can’t argue with that :)
Other observations
The bad
Biggest pain with Elm? Probably type mismatch errors: when you have bigger models and you are somewhere deep in your model/functions, errors can be very confusing. You see what type it expects and what is passed into it, but it is sometimes very hard to figure out why the thing you are passing in is different from what it expects. And since you can't really put a breakpoint in your program to step through and see where exactly the wrong data appeared, you are just left to stare at the screen trying to figure it out.
Elm tries to provide an abstraction over the web platform and its APIs, but its abstraction is not finished. Do you want to change document.title? You gotta use JavaScript. Do you want to focus some input element? You gotta use JavaScript (edit: you can do that now with elm-lang/dom). This is getting better and better with each release, but if you are writing a bigger app, there is a very good chance that you will have to write JavaScript anyway. On the other hand, the JavaScript part will probably be somewhere in the 1–5% LoC region, so it's not that big of a deal.
Errors can also be very indirect. Suppose you have a view that takes a model, and that view calls 2 other functions that also use the model. In function 2 you have a type mismatch, but the compiler can be complaining about misuse of the model in function 1. These errors are very frustrating and costly (time wise) to debug. If you do not use type annotations, it gets even worse as the scope of the reported error gets even wider.
Here is an example of the indirection I am talking about. Suppose you have a super simple program which has a model (an integer) and prints that integer into the view.
import Html
import Html.App

type Msg = NoOp

type alias Model = Int

main =
    Html.App.beginnerProgram
        { model = 0, view = view, update = \msg model -> model }

view : Model -> Html.Html Msg
view model =
    Html.div [] [ Html.text model ]
When you try to compile it, it produces the following error:
-- TYPE MISMATCH ---------------------------------------------------
The type annotation for `view` does not match its definition.
11| view : Int -> Html.Html Msg
^^^^^^^^^^^^^^^^^^^^
The type annotation is saying:
Int -> Html.Html Msg
But I am inferring that the definition has this type:
String -> Html.Html a
This might lead you to think that there is something wrong with the data you are passing into the view function, but the real error is that the text function is expecting a string instead of an int (the actual fix being to convert first, e.g. Html.text (toString model)). But that text function is not even mentioned in the given error. As your program and model grow, it gets worse.
If you don’t have type annotations the problem is even worse as the reported function is further away from where you really made a mistake:
-- TYPE MISMATCH ---------------------------------------------------
The argument to function `beginnerProgram` is causing a mismatch.
8| App.beginnerProgram
9|> { model = 0, view = view, update = \msg model -> model }
Function `beginnerProgram` is expecting the argument to be:
{ ..., view : number -> Html.Html a }
But it is:
{ ..., view : String -> Html.Html a }
The good
The Elm Architecture, which is like a web framework, along with static typing, makes writing bad spaghetti code much much harder than in JavaScript.
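The discipline TEA enforces can be sketched in a few lines of plain JavaScript: one model, and one pure update function that every state change must pass through, with the runtime just folding messages over the model. This is a toy illustration of the pattern, not Elm's actual runtime:

```javascript
// Toy sketch of The Elm Architecture's update loop.
// One model, one pure update function; no other way to mutate state.
function update(msg, model) {
  switch (msg.type) {
    case 'Increment':
      return { count: model.count + 1 };
    case 'Decrement':
      return { count: model.count - 1 };
    default:
      return model; // unknown messages leave the model untouched
  }
}

// "Runtime": fold a list of messages over the initial model.
var initial = { count: 0 };
var finalModel = [
  { type: 'Increment' },
  { type: 'Increment' },
  { type: 'Decrement' }
].reduce(function (model, msg) { return update(msg, model); }, initial);
// finalModel.count is 1; initial is never mutated
```

Because update is pure and returns a new model, every state transition is traceable and trivially unit-testable, which is exactly what makes spaghetti hard to write.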
Package manager which enforces semantic versioning (based on type definitions of module’s public API) is just so cool.
One of the nicest things about Elm is that it provides most of the things you would want to make web apps out of the box. You get a great language that wasn't designed in 10 days and doesn't have a lot of the baggage that JavaScript has accumulated over the years, you get a great standard library, and you get The Elm Architecture (which is basically a state-of-the-art web framework). So compared to the JavaScript world, you have a really nice all-in-one package for building web applications.
If you wanted to do that in JavaScript you would need:
- React for views
- Redux for state management
- react-redux to wire these two
- Lodash to have some decent standard library
- bunch of polyfills
- Webpack to bundle your app
- Babel to compile ES6/ES7 to something current browsers can handle
- Babel to compile non-standard jsx syntax into function calls
- Flow/TypeScript on top of that, so you have some type safety
You will probably have to spend days, if not weeks, on configuring and making sure these dependencies work with each other. Then, when you start working on different project, the mixtures of libraries used are completely different, so you have to learn a lot (again) before you can start making any changes.
Also you aren’t really writing JavaScript anymore, but some super-set (Flow) of that language, along with custom syntax extensions (jsx). At this point you are probably wondering, if it wouldn’t be better to became a farmer and have a simpler life. Or if it wouldn’t be simpler to just start a clean slate with something entirely different, like Elm.
Conclusion
- Just wanted to give a little more down to earth look at Elm, than the 10 reasons why you should give Elm a try article provided
- Elm is pretty good, but it is not quite there yet
- You should give Elm a try
https://medium.com/@petr.hurtak/a-little-down-to-earth-evaluation-of-elm-bd7cb06cce4a
Re: why>?
- From: "aaron.kempf@xxxxxxxxx" <aaron.kempf@xxxxxxxxx>
- Date: 23 Jun 2006 19:16:53 -0700
it's called
uh.. owc9.dll owc10.dll owc11.dll
it's a component in the standard office installation; if you can save an
XLS as an HTM page and 'add interactivity' and it works??
then you have it
now all you got to do is put a single button on the form in order to do
something constructive with the Spreadsheet1.XmlData property.
it's all pure XML driven.. you store those defs in a database and it's
a lot lot lot easier to search through than if you had 10,000
spreadsheets.
I literally work on a repository where we have 14,000 of these
definitions. there isn't a 65535 row limit; you can do basic
calculations; and it's a lot easier to share reports than emailing
around a 20 mb spread***.
it has relational reporting capabilities that are just blatantly more
powerful than normal XLS files.
it has pivotTables with drilldown.
it's just a magical beautiful land.
I use all 4 components from this one DLL
DataSource Control
PivotTable Control
ChartSpace Control
Spread*** Control.
it comes free with office.
you can see these things easily with a standard html editor; it takes a
little bit of toying around with the XmlData definition in order to do
anything fun with it.
but you can loop through columns and rows; i've got a bunch of
additional buttons for standardizing cells; formatting; etc
Harlan Grove wrote:
aaron.kempf@xxxxxxxxx wrote...
dude you conceited ***
Possibly, but irrelevant.
i dont want excel functions
Do tell! Who wouldn't have been able to figure this out by now?
Access functions aren't 'just a wrapper around excel functions' you
concieted ***
You're right. They provide a subset of what's available in Excel.
and MDX isn't an add-in-- PivotTableService comes with a standard
office install.
Where? You claim this is so, so what's the name of the .EXE or the .DLL
that provides this 'standard' service? I just searched my Office 11
directory and its subdirectories for any files (*.*) that contain the
text 'MDX' or 'PivotTableService'. The only file that contains 'MDX' is
VB_XLTOC.XML, and the 'MDX' instances appear to be crossreferences into
help files or the like. There were no files that contain
'PivotTableService'. FTHOI, I also checked C:\Program Files\Microsoft
Shared and all its subdirectories, and there were no files found
containing either 'MDX' or 'PivotTableService' strings. Kinda difficult
to use a 'standard office install' feature if there's not even any
mention of it in the standard Office help files.
So, great self-proclaimed expert, where would one find this? Or is this
just more BS?
I don't agree that 'using RegExp on the desktop is the same as using it
on the server'
That's nice when running things on the server. Presumably when running
Access against local MDB files, nothing would be running on the server.
So let's assume you mean any old business PC user would be updating a
table update on a database server. Does SQL Server have built-in
regular expression support, or would it need to be added after install?
If the latter, any user given any arbitrarily restrictive permissions
could do so? Or would it require the DBA to provide such functions to
users?
for starters; you don't need to install it in a hundred different machines.
You don't need to install anything on any PC shipped in the last 5
years. WSH comes standard on any PC running IE5 or later and/or any PC
running Windows 98 SE or Windows 2000 or later. Maybe there are small
businesses still running older versions of Windows and/or IE, but if
they're still running IE4, security is clearly not a big concern.
That's off on a tangent, but my point is that any Windows user/admin
who's kept their software up to date with free service packs has WSH.
secondly; you don't need to rely on a semi-compiled languge with
inherent security problems.
Excel VBA is a disease and a threat to national security.
As would be Access VBA as well? At the very least it could be used to
launch Excel, so Access is therefore about as unsafe, no?
For the record; no you CANT create functions and share them with the rest of your
company.
OK, so you have no clue how add-ins, either XLA, XLL or COM, work.
Since you have no clue in general, not surprising you have no clue
about the particulars.
Out of the box; a function created in SQL Sever; if it is assigned permissions to the
public role-- less than 2 words-- then anyone can use it.
So there are out-of-the-box regular expression functions in SQL
Server? What are their names?
what do you have to do; keep a centralized XLS for company-wide macros on a UNC?
Not an XLS file, either an XLA, XLL or COM DLL file. And while it could
be accessed from a single, common UNC path, most sensible companies
with WANs would put a copy into each regional office's file server.
And, yes, there are network management tools that do handle replication
of files between HO and regional servers that would ensure that the
files on the regional servers are the same as the files on the HO
servers.
and then you go and email yourself the XLS at home and you can't use it; so your
code gets ALL SCREWED UP since you dont have the reference.
Reference in the VBA sense?
As long as there are no namespace collisions, e.g., loading different
add-ins that both contain a function named foo, file pathnames and
module names aren't used for add-in functions. This is different from
functions stored in modules in XLS files, but this is perhaps the key
difference between XLS and XLA files. I can have FOO.XLA loaded in
\\mycompanyserver\shared\whatever, and call its bar() udf in formulas
in myfile.XLS, then save and close myfile.xls, COPY FOO.XLA onto a
local drive, disconnect from the network, open Excel, open FOO.XLA as
an add-in from the local drive, then open myfile.XLS. Guess what? The
formulas in myfile.XLS will automatically call the bar() udf in the
local copy of FOO.XLA.
How little clue do you have about using Excel?! Do you know anything
other than the arithmetic arguments and the functions it has in common
with SQL? You done a fine job showing you're completely ignorant about
damn near all Excel's intermediate to advanced features.
seriously-- how do you share functions; ***??
By putting read-only XLA, XLL or COM add-in files into shared
directories on file servers OR by making them part of the standard
software image so the next time users log in while connected to the
network the add-in files are automatically distributed to their PCs. As
for having them load automatically whenever Excel starts, all it takes
is adding one value for each to the Registry. And before you rant on
about the insecurity of Registry changes, IT departments in medium to
large companies make not infrequent changes. It's part of current,
broadly accepted IT practice.
But don't let that stop you from ranting on (& on & on).
I don't believe in using Excel VBA for anything.
Any 'function' that you need in Excel-- is also present in Analysis
Services.. which means I can create *** that is just as powerful as
ANYTHING that is possible for you.
Nope. Two examples: MDX's LinReg* functions just handle single variable
least squares regressions. Kiddy stuff. They may not be able to handle
multiple variable regression modeling because MDX also lacks any matrix
arithmetic functions.
Like I said, it can do the simple numeric stuff which, to be
charitable, is all you know how to do. So it may be adequate for your
needs, but not mine.
but I can do it with sub-second response times against billions of
records.
Only the simple stuff that's already cached. Not a chance you'd get
subsecond response time performing any operation on thousands of
records if none of the calculations were cached. And it's unlikely that
any calculations involving MDX's LinReg* functions would have been
cached unless someone had set them up to be cached.
you can't even scale 66,000 records.
Don't need to. Very little statistical data requires more than a few
hundred observations per variable. On the other hand, more than one
independent variable is usually needed, and it seems MDX can't handle
that at all.
oh yeah; i forgot 'you dont create reports'...
So how does that not involve 'making reports'...
Calculations aren't the same thing as reports unless you (and I'll
stress *YOU*) adopt the incoherent definition that any human-readable
output generated by computers is a report.
But how is 'report' defined in dictionaries? FTHOI,
gives the following definition for report as a noun,
'An account presented usually in detail.'
OK, details are optional, so reports don't have to include details. But
now need to define 'account'. gives the most likely
definitions as
A narrative or record of events.
or
A precise list or enumeration of financial transactions.
Most of the calculations I perform are neither related to actual events
nor are they based directly on enumerated financial transactions. I use
derivative information. Anyway, a report is a snapshot summary of past
transactions or other quantifiable events. Pro forma reports would be
analogous summaries of forecasted future transactions/events treated as
if they had already happened. I don't produce retrospective (typical)
reports or pro formas. I provide a handful of figures, which, along
with a few other handfuls of figures from other people, determine
target pricing for contracts. Supplemental to that, I also estimate
decreased profitability if we'd need to charge less than target
pricing. Maybe estimates can be presented as reports, but it's not
necessary to do so, and I don't.
Excel doesn't CRUNCH numbers.
Not as well as other types of software, but it does a much better job
of it than Access or any other database.
btw, you've never found a use for pivotTables?...
Nope. For forecasting or exploratory data analysis they're either
irrelevant or inadequate. For some kinds of interactive retrospective
summaries of accounting data, they may be useful, but that's not what I
do. Also, pivot tables aren't automatically recalculated, so I prefer
to use formula equivalents, which are automatically recalculated.
It's the 256 column limit that's more of an unnecessary constraint.
FWIW, Excel 12 (aka 2007) will blast that away, providing more than 16K
columns. When (if ever) will Access and/or SQL Server provide more than
255 fields per table?
I have used SQL Server EXCLUSIVELY for 6 years because of the wimpy 255
column limit in MDB format.
Fair point. Excel 2007 will have 16K columns. That'll be what, 16 times
more than SQL Server?
not everything can be centralized?? why not?...
Because it'd prevent offline computer usage. You are aware that people
use computers occasionally when not connected to networks?
http://www.tech-archive.net/Archive/Excel/microsoft.public.excel/2006-06/msg01718.html
I need to write a formula for Excel that calculates the amount of fuel that is in a tank.
Here is what it is for. We have cylindrical tanks that lie horizontally. Each month we put a dipstick into the tank to measure how many inches of fuel is left in the tank. I need a formula that will take that and calculate it into gallons. Here is the formula I have to work with.
Hey Tiger,
I never try to understand formulas like that... I just code them up and give the test data and results to someone who does know and let them confirm the results are ok.
Public Function ToGallons(L As Double, r As Double, h As Double) As Double
ToGallons = L * ((r ^ 2 * Cos((-1 * (r - h)) / r)) - (Sqr((2 * r * h) - (h ^ 2)) * (r - h)))
End Function
I need for the formula to pull information from certain cells and then put the answer in another cell. I also need it to do this for 13 different tanks.
My spreadsheet is laid out with 13 rows (one for each tank). If I put a number into, let's say, A1, I need the formula to take that number, calculate the result, and then put the answer into cell A2.
Could you help me with that
No worries Tiger,
Let's take the formula I posted yesterday, even if it might be wrong.
If you paste that into a module you can call it from your sheet like a formula. If you go Insert->Function on Excel's menu and select User Defined in the category drop-down list, you'll hopefully see ToGallons. Select it, and the dialog that appears will let you select a cell reference for L, r and h.
The function will be called with the values from the sheet and the answer will be returned and put back into the cell. You might want to polish up the function and add error handling but you can apply the same function to all your 13 tanks.
Hope that's what you're needing.
I'm going to try it out tonight. I'll let you know if it works. Thanks for the help.
Tiger
Sorry it took so long to reply. I tried what you wrote and I got this error.
Compile error
Statement invalid outside type block
It highlights the stuff in red.
Public Function ToGallons()
L As Double
r As Double
h As Double
ToGallons = L * ((r ^ 2 * Cos((-1 * (r - h)) / r)) - (Sqrt((2 * r * h) - (h ^ 2)) * (r - h)))
End Function
Not exactly sure what it means or what I did wrong. I tried putting it in just as you typed it but it didn't like that either.
HELP!!! LOL
I got that to work now. However something in the formula is off. If I put in 10 ft for length, 5 ft for the radius, and .02 ft for the height of fuel in the tank it comes out with 113.67. Now if I use the same length and radius and then enter .05 ft for the height of fuel in the tank it comes out with 102.26.
Something is definitely off, but I think it is in the formula itself.
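The symptom described (less fuel reported for a deeper dip) is exactly what you would expect if `Cos` were used where an arccosine belongs. The textbook circular-segment formula is area = r²·acos((r−h)/r) − (r−h)·√(2rh − h²). Here is a Python sketch of the corrected calculation; the 7.48 gal/ft³ conversion factor and keeping all dimensions in feet are my assumptions:

```python
import math

def gallons(length_ft, radius_ft, depth_ft):
    """Fuel volume in a horizontal cylindrical tank, from a dipstick reading.

    The circular-segment area uses acos, not cos -- the likely bug in the
    VBA version above. 7.48 gallons per cubic foot is an assumed conversion.
    """
    r, h = radius_ft, depth_ft
    segment_area = (r * r * math.acos((r - h) / r)
                    - (r - h) * math.sqrt(2 * r * h - h * h))
    return segment_area * length_ft * 7.48

# sanity checks: an empty tank holds nothing, a half-full tank holds
# half of a full one, and volume grows with the dip reading
```

With this version the readings increase monotonically with depth, which the numbers quoted above do not.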
Yeah I know. I do appreciate the help. All that is left is getting the formula right and I think it will work just fine. Again, thanks for the help.
PS I'm glad to see that the forum isn't going to die off just because the buddy team can't help anymore. It is great to know there are people out there who are willing to help complete strangers with their programming needs. =)
Ok I need help with one more thing (well I hope just one more thing LOL).
I am referencing a mathematical formula that is written in JavaScript. I've converted most of it except for two lines which I am unsure of how to recreate in VBA.
var depth=1; depth<=f.diameter.value; depth++
var area=segment_area(depth, f.diameter.value)
f.diameter.value = d
segment_area = SA
Thanks!!!
Hey man,
The line var depth=1; depth<=f.diameter.value; depth++
That's a loop in JavaScript, it translates to....
for depth = 1 to f.diameter.value
That's just a method call....
Dim x as Integer
x = segment_area(depth, f.diameter.value)
The last line, segment_area = SA, I'm not that sure about to be honest. Post some more and put it into context and I'll work it out.
I'm sorry, the segment_area = SA wasn't supposed to be in there. That is just a reference for me for my code notes.
Here is the entire JavaScript code that I am trying to convert (or at least the section that I am trying to use).
dia=(f.diameter.value)
len=(f.length.value)
volume=Math.floor(dia*dia*3.141592/4*len/12/12/12*7.48)
for(var depth=1; depth <= f.diameter.value; depth++){
var area=segment_area(depth, f.diameter.value)
function segment_area(depth, dia) {
radius=dia/2
temp=radius-depth
chordl=2*Math.sqrt(2 * depth * radius-depth * depth)
ang=Math.acos(temp/radius)*2
arcl=ang*radius
return ((arcl*radius-chordl*temp)/2)
};// end segment_area
The volume = line I turned into a formula that resides in the spreadsheet.
radius, temp, chordl, ang, arcl and segment_area are in the VB script as public functions.
The ones highlighted in yellow are the ones I am having trouble understanding. I'm not exactly sure what they mean or what relevance they have to the overall formula. Thanks again for the help.
Tiger, forgot to reply to this I'm sorry,
volume=Math.floor(dia*dia*3.141592/4*len/12/12/12*7.48)
Math.floor is like a rounding function; it 'Returns the largest integer that is not greater than the argument', e.g.
floor(3.5) = 3
floor(-1.3) = -2
floor(4) = 4.
If you're using Excel you have to use the FLOOR function like this...
answer = Application.WorksheetFunction.Floor(number, significance)
there really isn't any other equivalent in VBA that I know of.
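The floor-versus-truncation distinction only shows up for negative numbers, and the three examples above can be checked directly (Python here for illustration; if memory serves, VBA's own Int floors toward negative infinity while Fix truncates, so VBA may not be entirely without an equivalent, but treat that as a hedged aside):

```python
import math

# floor rounds toward negative infinity; int()/truncation rounds toward zero.
examples = {
    "floor(3.5)":  math.floor(3.5),    # 3
    "floor(-1.3)": math.floor(-1.3),   # -2, not -1
    "floor(4)":    math.floor(4),      # 4
    "int(-1.3)":   int(-1.3),          # -1: truncation differs here
}
```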
for(var depth=1; depth <= f.diameter.value; depth++)
That's just a loop: For depth = 1 To f.diameter.value Step 1
{
That's calling a function and storing the value returned into variable 'area'; looks like it might be calculating the surface area of a segment of a cylinder or pipe, maybe. ;)
That's like Public Function segment_area(ByVal depth As Double, ByVal dia As Double)
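Pulling the pieces together, the JavaScript above translates fairly directly. This Python sketch mirrors the original names; the inch-based dimensions and the 7.48 gal/ft³ factor come from the script itself, while building a per-inch dip chart is my guess at what the truncated loop body was doing:

```python
import math

def segment_area(depth, dia):
    # area (square inches) of the wetted circular segment at a dip depth
    radius = dia / 2
    temp = radius - depth
    chordl = 2 * math.sqrt(2 * depth * radius - depth * depth)
    ang = math.acos(temp / radius) * 2
    arcl = ang * radius
    return (arcl * radius - chordl * temp) / 2

def dip_chart(dia, length):
    # one entry per inch of dipstick reading; dia and length in inches,
    # 12**3 converts cubic inches to cubic feet, 7.48 gal per cubic foot
    return [segment_area(depth, dia) * length / 12 ** 3 * 7.48
            for depth in range(1, dia + 1)]
```

For a 24-inch-diameter, 120-inch-long tank, `dip_chart(24, 120)` gives 24 gallon readings that rise monotonically, with the depth-12 entry being exactly half the depth-24 (full) entry.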
https://databaseforum.info/17/509085.aspx
#include "lib/crypt_ops/crypto_cipher.h"
#include "lib/crypt_ops/crypto_digest.h"
#include "lib/crypt_ops/crypto_hkdf.h"
#include "lib/crypt_ops/crypto_rand.h"
#include "lib/crypt_ops/crypto_s2k.h"
#include "lib/crypt_ops/crypto_util.h"
#include "lib/ctime/di_ops.h"
#include "lib/log/util_bug.h"
#include "lib/intmath/cmp.h"
#include <string.h>
Functions for deriving keys from human-readable passphrases.
Definition in file crypto_s2k.c.
Write a new random s2k specifier of type type, without prefixing type byte, to spec_out, which must have enough room. May adjust parameter choice based on flags.
Definition at line 159 of file crypto_s2k.c.
Given a hashed passphrase in spec_and_key of length spec_and_key_len as generated by secret_to_key_new(), verify whether it is a hash of the passphrase secret of length secret_len. Return S2K_OKAY on a match, S2K_BAD_SECRET on a well-formed hash that doesn't match this secret, and another error code on other errors.
Definition at line 486 of file crypto_s2k.c.
Helper: given a valid specifier without prefix type byte in spec, whose length must be correct, and given a secret passphrase secret of length secret_len, compute the key and store it into key_out, which must have enough room for secret_to_key_key_len(type) bytes. Return the number of bytes written on success and an error code on failure.
Definition at line 258 of file crypto_s2k.c.
Given a specifier previously constructed with secret_to_key_make_specifier in spec of length spec_len, and a secret password in secret of length secret_len, generate key_out_len bytes of cryptographic material in key_out. The native output of the secret-to-key function will be truncated if key_out_len is short, and expanded with HKDF if key_out_len is long. Returns S2K_OKAY on success, and an error code on failure.
Definition at line 370 of file crypto_s2k.c.
Given a specifier in spec_and_key of length spec_and_key_len, along with its prefix algorithm ID byte, and along with a key if key_included is true, check whether the whole specifier-and-key is of valid length, and return the algorithm type if it is. Set *legacy_out to 1 iff this is a legacy password hash or legacy specifier. Return an error code on failure.
Definition at line 117 of file crypto_s2k.c.
References S2K_RFC2440_SPECIFIER_LEN.
Given an algorithm ID (one of S2K_TYPE_*), return the length of its preferred output.
Definition at line 92 of file crypto_s2k.c.
Construct a new s2k algorithm specifier and salt in buf, according to the bitwise-or of some S2K_FLAG_* options in flags. Up to buf_len bytes of storage may be used in buf. Return the number of bytes used on success and an error code on failure.
Definition at line 405 of file crypto_s2k.c.
Hash a passphrase from secret of length secret_len, according to the bitwise-or of some S2K_FLAG_* options in flags, and store the hash along with salt and hashing parameters into buf. Up to buf_len bytes of storage may be used in buf. Set *len_out to the number of bytes used and return S2K_OKAY on success; and return an error code on failure.
Definition at line 442 of file crypto_s2k.c.
Implement RFC2440-style iterated-salted S2K conversion: convert the secret_len-byte secret into a key_out_len byte key_out. As in RFC2440, the first 8 bytes of s2k_specifier are a salt; the 9th byte describes how much iteration to do. If key_out_len > DIGEST_LEN, use HDKF to expand the result.
Definition at line 203 of file crypto_s2k.c.
References DIGEST_LEN, SIZE_T_CEILING, and tor_assert().
Referenced by do_hash_password().
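The iterated-salted scheme described above can be sketched in Python. This is an illustration of the RFC 2440 construction (SHA-1 digest, 8-byte salt, one coded count octet), not Tor's actual C implementation; the function name is mine:

```python
import hashlib

def rfc2440_s2k(secret: bytes, salt: bytes, c: int) -> bytes:
    """Iterated-salted S2K in the RFC 2440 style.

    salt is the 8-byte salt; c is the coded count octet (the "9th byte").
    SHA-1 gives the 20-byte output matching DIGEST_LEN above.
    """
    count = (16 + (c & 15)) << ((c >> 4) + 6)  # RFC 2440 count decoding
    data = salt + secret
    if count < len(data):
        count = len(data)  # salt+secret is always hashed at least once
    d = hashlib.sha1()
    while count > len(data):
        d.update(data)
        count -= len(data)
    d.update(data[:count])
    return d.digest()
```

Expanding the 20-byte result to longer keys with HKDF, as the documentation notes, would be layered on top of this.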
Given an algorithm ID (one of S2K_TYPE_*), return the length of the specifier part of it, without the prefix type byte. Return -1 if it is not a valid algorithm ID.
Definition at line 75 of file crypto_s2k.c.
https://people.torproject.org/~nickm/tor-auto/doxygen/crypto__s2k_8c.html
Hello everybody,
I am leaving New Zealand on the 29th June. I will be in Australia until
the 20th July. During this time I will have e-mail access, however I
will not be doing any jEdit development. On the 21st July, I am going to
Canada. Once I settle in, I will resume jEdit development.
Before I leave I will release a new version of the FTP plugin, and maybe
a new version of the XML plugin as well.
--
Slava Pestov
jEdit 4.1pre2 is now available from <>.
Thanks to Alexander Maryanovsky and Gerd Knops for contributing to this
release.
+ Editing Changes:
- The 'Smart Home/End' setting has been split into two separate
settings, one for Home and one for End.
- Made behavior of mouse in gutter more customizable. (Gerd Knops)
- Added option to make double-click drag treat each non-alphanumeric
character as one word. (Gerd Knops)
- Added an option to not hide the final end of line character of a
buffer.
+ Syntax Highlighting Changes:
- Syntax rules can now specify the AT_WHITESPACE_END attribute. If this
is set to TRUE, then the rule will only match if it is the first
non-whitespace text in the line.
- Fixed minor highlighting problem with properties mode.
+ File System Browser Changes:
- Multiple files can now be selected in the stand-alone browser;
right-clicking no longer deselects all but the clicked file.
- Added 'Open in New Split' command to the right-click menu that splits
the view and opens the selected file in the newly created text area.
- Right-click menus in the 'Open File' dialog box now contain menu items
for opening files in new views and new splits.
- File->Open With Encoding menu replaced with an 'Encoding' menu in the
file system browser's 'Commands' menu.
+ Scripting Changes:
- 'scriptPath' BeanShell variable is set to the macro file path while a
macro or startup script is executing.
- Startup scripts can now be written in any language supported by a
registered macro handler; so you can put Python scripts in the
'startup' directory if you have the JythonInterpreter plugin
installed, for example.
- Slight performance improvement when invoking editor actions.
+ Miscellaneous Changes:
- The HyperSearch feature no longer blocks the GUI while listing a
directory (which could take some time).
- New 'broken image' icon shown in place of tool bar buttons whose icons
cannot be located.
- Improved popup menu positioning code.
- jEdit.get{Integer,Double}Property and Buffer.getIntegerProperty() no
longer barf if the property contains leading or trailing whitespace.
- Added View->New Plain View command that opens a new view without
toolbars or dockables. This can be useful for opening up a quick
window for taking notes, etc.
- File system browser color regexps are now case-insensitive.
- Each dockable window now has a <name>-float action that opens a new
instance of that dockable in a new floating window (regardless of the
docking status of the window). These commands do not appear in the
menu bar, however they can be added to the context menu and tool bar,
or bound to keystrokes.
+ Bug Fixes:
- Fixed default install path settings in installer when running on Unix.
Now, instead of checking for a user name of "root", it checks if the
appropriate subdirectories of /usr/local are writable.
- When evaluating BeanShell expressions, the standard
view/buffer/editPane/textArea variables would not be unset when the
expression finishes executing.
- The text area did not get initial focus if there is a window docked
in the left or top part of the view, and the 'tip of the day' was
switched on.
- Removed debugging messages from PanelWindowContainer.java.
- Fixed bottom tool bar layout problem.
- Image shown in 'welcome to jEdit' page in help was not being installed
by the installer.
- Fixed a bug in the folding code that could be triggered by switching
to indent fold mode, collapsing some folds, switching to explicit fold
mode, then switching back to indent fold mode again.
- The view's minimum size was rather large, this caused problems while
trying to resize it if the option to decorate window borders using the
Swing L&F was enabled.
- 'Expand Fold Fully' command didn't work.
- The 'gutter bracket highlight' setting in the Color option pane didn't
work.
- Fixed possible ClassCastException if a 'paste previous' string started
with the text "<html>". Swing has a weird feature where any text label
beginning with <html> is rendered using the Swing HTML engine, and
this would trip it off.
- HyperSearch inside a selection did not handle ^ and $ in regular
expressions correctly on the first or last line of the selection.
- Insertion of { and } in C-like modes can now be undone in one step.
- Another indentPrevLine regexp fix. (Alexander Maryanovsky)
+ API Changes:
- It is no longer necessary to define labels for dockable window
-toggle actions. The label is now automatically created by appending
"(Toggle)" to the non-toggle action's label.
- Old-style dockable window API no longer supported; the following
symbols have been removed:
EditBus.addToNamedList() method
EditBus.removeFromNamedList() method
EditBus.getNamedLists() method
CreateDockableWindow class
DockableWindow interface
--
Slava Pestov
jEdit 4.1pre1 is now available from <>. This is the
first "real" release since 4.0final -- lots of new features here. Please
tell me what you think of them.
Thanks to Alexander Maryanovsky, Alfonso Garcia, Claude Eisenhut,
Joseph Schroer, Kris Kopicki, Steve Snider and Thomas Dilts for
contributing to this release.
+ Editing Changes:
- Improved rectangular selection. It now does the right thing with hard
tabs, and the width of the selection is no longer limited to the width
of the last line. A new 'Vertical Paste' command has been added (it
behaves in a similar manner to the 'Virtual Paste' macro, which has
now been removed). When inserting text into a rectangle, the inserted
text is left-justified with spaces. The quick copy feature has been
extended to support this -- a Control-middle click vertically pastes
the most recently selected text.
- Fixed auto-indent behavior when entering constructs like:
if(foo)
bar();
baz();
in Java/C/C++/etc modes. Previously the 'baz();' would get an
unnecessary level of indent, requiring it to be removed manually.
(Alexander Maryanovsky)
- Added an option to the 'Text Area' pane to implement "standard"
previous/next word behavior, like that in a lot of other programs
(next word moves caret to start of next word, instead of end of
current word; previous word moves caret to end of previous word,
instead of start of current word).
You might remember I implemented this behavior for a little while in
the 4.0 pre-releases, but now it's back as a configurable option.
(Alexander Maryanovsky)
- Added a few extra key bindings for Windows users:
S+DELETE bound to cut
C+INSERT bound to copy
S+INSERT bound to paste
- Optimized the several parts of the buffer code; this should make
'Replace All' and similar edit-intensive tasks much faster.
+ Search and Replace Changes:
- HyperSearch now respects rectangular selections. 'Replace All' already
supported rectangular selections.
- Directory search is now VFS-aware; however it shows a confirm dialog
before doing a search on a remote filesystem. If your VFS is not
affected by network latency, you can have the getCapabilities() method
return the new LOW_LATENCY_CAP capability.
- Tool bars no longer take up the full width of the view. This saves
some screen space.
- Clicking 'Cancel' or closing the search and replace dialog box no
longer shows warnings about empty filesets, etc.
+ Syntax Highlighting Changes:
- More intelligent highlighting of numbers. Instead of a hard-coded
heuristic that only worked for C-like languages, numbers are now
highlighted as follows:
- A keyword consisting of only digits is automatically marked with the
DIGIT token type.
- If it has a mix of digits and letters, it is marked as DIGIT if it
matches the regexp specified in the rule set's DIGIT_RE attribute.
If this attribute is not set, then mixed sequences of digits and
letters are not highlighted.
- In Java mode, for example, the default value of this regexp is
"(0x[[:xdigit:]]+|[[:digit:]]+)[lLdDfF]?".
- EOL_SPAN elements can now have DELEGATE attributes.
- SEQ elements can now have DELEGATE attributes. If specified, this rule
set will be swapped in after the text matched by the sequence rule.
- Delegates to rulesets with TERMINATE rules should work now.
- IGNORE_CASE attribute of KEYWORDS rule removed. This value is now the
same as the IGNORE_CASE attribute of the parent RULES tag.
- WHITESPACE rule no longer necessary in mode definitions.
- It is no longer necessary to define <SEQ TYPE="NULL"> rules for
keyword separator characters. Now, any non-alphanumeric character,
that does not appear in a keyword string or the "noWordSep"
buffer-local property is automatically treated like it had a sequence
rule.
- Added FORTRAN syntax highlighting (Joseph Schroer)
Added Interilis syntax highlighting (Claude Eisenhut)
Updated PL-SQL mode (Steve Snider)
Updated NetRexx mode (Patric Bechtel)
- HTML and related edit modes now correctly highlight sequences like:
<SCRIPT LANGUAGE="JavaScript">...</SCRIPT>
<SCRIPT ... whatever ...>...</SCRIPT>
Previously only JavaScript between <SCRIPT> and </SCRIPT> was
highlighted. A similar change has been made for <STYLE> tags.
- Improved loading time of plain text files if 'more accurate syntax
highlighting' is on.
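As an aside, the default DIGIT_RE shown earlier uses POSIX bracket classes. Translated for a quick Python check (a sketch of the matching behavior only, not jEdit's actual tokenizer code):

```python
import re

# POSIX classes translated for Python's re module:
# [[:xdigit:]] -> [0-9a-fA-F], [[:digit:]] -> [0-9]
digit_re = re.compile(r'(0x[0-9a-fA-F]+|[0-9]+)[lLdDfF]?\Z')

samples = {
    "0x1F": bool(digit_re.match("0x1F")),   # hex literal
    "123L": bool(digit_re.match("123L")),   # decimal with long suffix
    "42":   bool(digit_re.match("42")),
    "x123": bool(digit_re.match("x123")),   # mixed token, not a number
}
```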
+ User Interface Changes:
- Status bar looks somewhat different now, and shows the word wrap mode
and line separator status.
- The search bar commands now show the search bar if it is hidden.
Search bars that come into existence as a result of this have an
extra close box button on the right. Pressing ESCAPE in the text field
or clicking the button hides the search bar.
I have renamed the search bar setting in the General option pane to
"Always show search bar", and made it be switched off by default.
You can revert to the old behavior simply by switching this setting
back on.
- The text color and style used to show the "[n lines]" string can now
be set independently of the EOL marker color.
- Plugin manager window can be closed by pressing Escape.
- Open buffers are shown with a different icon in the file system
browser.
- 'I/O Progress Monitor' window is dockable now.
- Added two new sub-menus to the Utilities menu, 'jEdit Home Directory'
and 'Settings Directory'. These two work in a similar fashion to the
'Current Directory' menu.
Also the 'Current Directory' menu (and these two new menus) now also
lists directories; selecting a directory menu item opens it in the
file system browser.
- Moved BeanShell evaluation commands from 'Utilities' to 'Macros' menu,
rearranged 'Edit' menu.
- New splash screen, about box, and tool bar icons. (Kris Kopicki)
- Added ColorWellButton control. Under Mac OS X, changing the background
color of a JButton doesn't work if the MacOS Adaptive look and feel is
in use... so I wrote a custom control. It looks better and eliminates
duplicated code anyway. Plugin developers, please use this instead of
the widely-copied and pasted JButton trick. (Kris Kopicki)
- Added RolloverButton control. Use this instead of the JToolBar
"isRollover" client property (which only works in the Metal L&F unless
you're running Java 2 version 1.4). (Kris Kopicki)
+ OS-specific Changes:
- MacOS plugin version 1.2.1 adds an option pane with a few settings,
and some bug fixes and cleanups. (Kris Kopicki)
- When running on MacOS, the roots: filesystem now lists all disks
mounted under /Volumes. (Kris Kopicki)
- On Unix, the installer now defaults to installing in the user's home
directory when running as a non-root user.
+ Miscellaneous Changes:
- WheelMouse plugin integrated into core -- no need to install a
separate plugin to get wheel mouse scrolling under Java 2 version 1.4.
- Added SOCKS proxy support. This option will help people trapped behind
a Microsoft Proxy Server configured to use NTLM authentication.
Normal HTTP connections through the proxy would not be possible since
Java does not implement this proprietary protocol; however a little
known fact is that MS Proxy Server also usually runs a SOCKS service
that does not require a password. (Alfonso Garcia)
- BeanShell 1.2b6 included. Changes since 1.2b5 are:
- Exposed bsh.BshMethod and added a public invoke() method.
- Added getMethods() method to namespace to enumerate methods.
The fact that BshMethod is now public has facilitated optimizations
which improve performance of BeanShell search and replace.
- Updated printing code (Thomas Dilts)
- Uses Java 2 version 1.4 print dialogs when running on that Java
version
- Performs printing in a background thread
- Documentation is now generated using DocBook-XSL 1.51.1 stylesheets.
+ Bug Fixes:
- Select Open File; press Enter first; then choose a file to open. Bang,
an error message. Now fixed.
- When closing a file with unsaved changes, the file will now stay open
if the save failed. Previously it would be closed and the unsaved
changes would be lost forever.
- If 'Keep Dialog' was off, the search dialog would close, even after an
unsuccessful HyperSearch. This was inconsistent with the behavior for
normal searches, where an unsuccessful match did not close the dialog
(so you could correct the search string more easily).
- The 'initially collapse folds with level' setting was not being
honored when reloading files.
- A few printing bugs fixed. (Thomas Dilts)
- Workaround for views not being brought to front on windows. This
workaround minimises and then restores the view, so a minimise
animation might be visible for a brief period of time. However,
there is no other way of fixing this. (Alexander Maryanovsky)
- Dynamic menus (Recent Files, etc) did not work under MacOS X if the
menu bar was at the top of the screen. Note that this does not solve
the other problem with having the menu bar there, namely keyboard
shortcuts not being displayed. For now, leave the menu bar inside the
frame for best results. (Kris Kopicki)
- Fixed silly windows backup saving bug.
- Fixed minor problem when Control-clicking characters in the text.
- Middle mouse button drags now work if there is an existing selection.
+ API Changes:
- Two methods added to jEdit class:
getDoubleProperty()
setDoubleProperty()
- Removed unused TextUtilities.findMatchingBracket(Buffer buffer,
int line, int offset, int startLine, int endLine) method.
- New ViewUpdate.EDIT_PANE_CHANGED message added; it is sent when a
different edit pane in a split view receives focus.
- EBMessage.veto(), isVetoable() methods and EBMessage.NonVetoable class
deprecated.
- Removed old text area highlighter API, old buffer folding API.
- Removed BufferUpdate.ENCODING_CHANGED, FOLD_HANDLER_CHANGED messages;
replaced with generic BufferUpdate.PROPERTIES_CHANGED.
- MultiSelectStatusChanged message removed.
- Buffer.markTokens(int lineIndex) deprecated; use
Buffer.markTokens(int lineIndex, TokenHandler tokenHandler) instead,
with one of the available TokenHandler implementations from
org.gjt.sp.jedit.syntax.
The tokenizer now behaves differently with respect to whitespace. Tab
characters are always put into separate tokens with type Token.TAB;
other whitespace gets token type Token.WHITESPACE.
- Added new jEdit.getActiveView() method.
- VFS file chooser now supports a new 'CHOOSE_DIRECTORY_DIALOG' mode.
- Buffer.getRuleSetAtOffset() method is now public.
--
Slava Pestov
jEdit 4.0.3 is now available from <>. This fixes a
showstopper bug in jEdit 4.0.2. Everyone should upgrade. Note that I
have now made unified diffs from 4.0 to 4.0.2 and from 4.0.2 to 4.0.3
available.
* Version 4.0.3
+ Bug Fixes
- Added missing check for control key being down in text area mouse
handler.
+ API Changes
- Buffer.getRuleSetAtOffset() method is now public.
Hello jEdit users-
This evening, I have added the latest batch of new plugin releases to
Plugin Central. The batch includes one new addition (CheckStylePlugin
0.1 by Todd Papaioannou) and six updates to plugins already on Plugin
Central.
* CheckStylePlugin 0.1: initial Plugin Central release; CheckStylePlugin
is a wrapper around the CheckStyle program that allows you to check
your code for adherence to or deviation from a Coding Standard; requires
jEdit 4.0, ErrorList 1.2, and JDK 1.3; includes CheckStyle 2.2 and
ANTLR (checkstyle-all-2.2.jar)
* JavaRefactor 0.0.3: added the ExtractMethod refactoring; updated for
jEdit 4.0; requires jEdit 4.0pre1 and JDK 1.3; includes RECODER
* JSwatPlugin 1.2.0: updated for JSwat 2.4 and JPDA; requires jEdit
4.0pre7, CommonControls 0.2, and JDK 1.4; includes JSwat 2.4
* Lazy8Ledger 1.31: change from 2 digit to 3 digit version number;
numerous serious bugs fixed that showed up if you used Java 1.3
instead of Java 1.4 (these bugs prevented the use of Lazy8Ledger
almost entirely); after creating a new company and adding default
accounts, the name of the company was wrong (fixed); if you used Java
1.3 instead of Java 1.4 then the database backup that should happen
when you exit Lazy8Ledger didn't work (fixed); requires jEdit 4.0pre8,
BufferTabs 0.7.6, InfoViewer 1.1, and JDK 1.4
* MementoPlugin 0.5.1: fixed layout problems; requires jEdit 4.0final
and JDK 1.4
* NetRexxJe 0.07: added source code navigator; removed toolbar; switched
to standard icons; other GUI changes; classpaths stored in jEdit
history file instead of property file; requires jEdit 4.0final and JDK
1.3
* TaskList 0.4.0: added ability to specify which modes to parse tasks
for; option to 'single-click' select tasks in TaskList; added a couple
new task types (XXX and FIXME:); bug fixes (case-insensitive task
types didn't work, new task names weren't saved); added some
validation when adding/editing new task types; new icons; updated to
work with jEdit 4.1 in CVS (Token parsing changes); now requires jEdit
4.1 to compile; requires jEdit 4.0pre5 and JDK 1.3
This will be the last update for a couple weeks at least, as I am
currently surrounded by boxes and am moving in two days. Have fun.
-md
jEdit 4.0.2 is now available from <>. This is bug
fix release; everyone using 4.0 should upgrade as soon as possible.
Note that jEdit 4.0.1 was never released.
+ Enhancements
- Documentation is now generated with DocBook-XSL 1.51.1 stylesheets.
+ Bug Fixes
- Fixed silly windows backup saving bug.
- Fixed minor problem when Control-clicking characters in the text area.
- A few printing bugs fixed, including the notorious print output
spacing problem.
--
Slava Pestov
Salutations-
This evening, I added five new releases to Plugin Central. Two of these
are updates of tried-and-true plugins: FTP 0.4 and TextTools 1.8.1; the
other three are new to Plugin Central: FastOpen 0.4.1 by Jiger Patel,
JalopyPlugin 0.3.1 by Marco Hunsicker, and MementoPlugin 0.5.0 by Greg
Cooper and Michael Taft. All require a 4.0-series version of jEdit.
* FastOpen 0.4.1: initial Plugin Central release; FastOpen is a plugin
designed to quickly open any file in the currect project by just
typing in the first few characters of the filename you want to open;
added ProjectSwitcher to allow switching ProjectViewer projects from
FastOpen (Warren Nicholls); added Current Project to window title;
FastOpen window is no longer modal (Ken Wootton); ProjectViewer is now
kept open if it is already open (Ken Wootton); invalid filenames are
now highlighted in red; use with multiple views is now supported;
requires jEdit 4.0pre1, ProjectViewer 1.0.2, and JDK 1.3
* FTP 0.4: implements persistent connections; bug fixes; requires jEdit
4.0pre3 and JDK 1.3
* JalopyPlugin 0.3.1: initial Plugin Central release; Jalopy is a source
code formatter/beautifier/pretty printer for the Sun Java programming
language; requires jEdit 4.0final, ErrorList 1.2.1, and JDK 1.3;
includes Jalopy 1.0b7
* MementoPlugin 0.5.0: initial Plugin Central release; Memento is a task
manager/calendar application meant to function both as a stand-alone
and as a jEdit plugin; requires jEdit 4.0final and JDK 1.4
* TextTools 1.8.1: column insert now beeps when there is no selection
(Nathan Tenney); column inserts can now be undone in one step (Nathan
Tenney); requires jEdit 4.0pre1 and JDK 1.3
-md
I can’t see how ‘ blist[in_id].erase();’ could possibly compile. It should be ‘ blist.erase(in_id);’
No. The operand to ‘delete’ must be a pointer to the object you are trying to destroy, so dereferencing it first makes no sense.
However, take a good look at what you’re doing. You first destroy the object, and after it’s destruction, you tell it to remove itself from the map. How can you tell an object something if it’s already destroyed?
So you have to reverse the order of things. Or you could simply use the erase() method on the map using the index of the bullet.
@Kenneth Gorking
I can’t see how ‘ blist[in_id].erase();’ could possibly compile. It should be ‘ blist.erase(in_id);’
Oh, I actually assumed it was blist[in_id]->erase().
A note to the topic starter: copy/paste (the isolated parts of) your code as-is, to avoid this kind of confusion.
Thanks for all your input!
So… store the pointer in another pointer, erase it from the map, THEN delete the memory. Got it! I’ll try it when I get home.
Sorry about not copying the code as-is. I figured it was more theorycraft than anything. I’m actually at work where I have no access to my code. So thanks for bearing with me :)
Tried copying to pointer first, still crashes with fatal error. I now have the code in front of me. Please take a look if you can.
void BulletCollection::update()
{   // ***************************** update() ******************************
    // update the collection and everything in it
    std::map<int, Bullet *>::iterator iter;  // temp iterator to go through the map blist
    for ( iter = blist.begin(); iter != blist.end(); iter++ )
    {
        iter->second->update();
    }
}

void Bullet::update()
{   // ********************** Bullet Update() ******************************
    if ( ttl-- < 0 )
    {
        BC->killBullet( id );  // tell the collection to delete, id is private member of Bullet
    }
}

void BulletCollection::killBullet( int bulletid )
{   // ************** kill a bullet *************
    Bullet *btemp = blist[bulletid];  // temporary pointer to hold item to be erased
    blist.erase( bulletid );          // remove it from the list
    delete btemp;                     // delete the memory
}
I believe something wonky is happening when I do blist.erase( bulletid )… When the crash happens, some file called xtree is opened up in my Visual Studio. Please help, I’m pulling hairs out over here.
BTW, even when I comment out // delete btemp; it still crashes, so it’s probably blist.erase
Thanks,
Flamesilver
Ah, classic mistake :)
You’re iterating over the map, but while you’re iterating, you’re deleting items. The ‘iter’ variable in BulletCollection::update() is still referencing an element that is already removed from the map, and incrementing that invalid iterator and dereferencing it will result in undefined behavior.
To circumvent the problem, you could increment the iterator before you call Bullet::update(), so you know which iterator to use in the next iteration of the loop.
Be aware of the iterator invalidation rules of the various containers. For a std::map it is defined that only iterators to deleted items become invalidated. For a std::vector, however, any iterator that points to or past the deleted element is invalidated.
Thanks for the help oisyn. The iterator range problem is fixed and I’m no longer crashing with Fatal Error. But now I’m getting a heap corruption error.
#include "vect2.h" #include "GameForm.h" #include "DarkGDK.h" #include "FulcrumPhy.h" #include <map> #define BULLETSTARTNUM 100 #define BULLETENDNUM 1000 class Bullet { // **************************** Bullet Class ***************************** private: int id; int ownerid; // id of creator int ttl; // number of cycles of life remaining BulletCollection *BC; // Collection so I know where to kill public: Bullet( int in_id, int in_ownerid, BulletCollection *in_bc, int in_ttl = DEFAULTTTL ); // CTor ~Bullet(); // DTor int getid() { return id; } // accessor for id int getttl() { return ttl; } // accessor for ttl int getownerid() { return ownerid; } // accessor for ttl void update(); }; class BulletCollection { // ************************** Hack BulletCollection Class (Precursor to GameSpace) ***************************** // * // * private: TicketSystem *TS; // TicketSystem used to assigning object id's std::map< int, Bullet *> blist; // bullet list - all currently live bullets public: FulcrumPhy *FP; // Fulcrum Physics Pointer BulletCollection( FulcrumPhy *in_Physics ); // CTor - allocate TicketSystem TS ~BulletCollection(); // DTor - free memory for TS and all void createBullet( int creatorid ); // make a bullet void killBullet( int bulletid ); // kill a bullet void update(); // update the collection and everything in it }; BulletCollection::BulletCollection( FulcrumPhy *in_Physics ) { // ********************** BulletCollection::CTor ****************************** // * allocate TicketSystem TS FP = in_Physics; TS = new TicketSystem(BULLETSTARTNUM, BULLETENDNUM); } BulletCollection::~BulletCollection() { // ********************** BulletCollection::DTor ****************************** // * // DTor - free memory for TS and all // * // * // delete the collection and everything in it std::map < int, Bullet *>::iterator iter; // temp iterator to go through the map blist for ( iter = blist.begin(); iter != blist.end(); iter++ ) { killBullet ( iter->first ); } delete TS; } void 
BulletCollection::createBullet( int creatorid ) { // ********************** createBullet ****************************** // * // * make a Bullet and add it to blist // * int creatorid - the guy who made the bullet // * Bullet *b = new Bullet ( TS->getTicket(), creatorid, this ); // allocate a new Bullet object blist[b->getid()] = b; // ** add b to the map of all bullets so it can get updated ** // Create a vector that's pointing away from the ship vect2 vPadding; vPadding.setXYFromAM( dbObjectAngleZ( creatorid ), 20 ); // ** Set the Bullet position to the player's, Move the bullet according to the player but faster by scale ** FP->setPosition ( b->getid(), dbObjectPositionX( creatorid ) + vPadding.x, dbObjectPositionY( creatorid ) + vPadding.y, 0 ); FP->setLinearVelocity ( b->getid(), vPadding.x * 5 + FP->getLinearVelocityX ( creatorid ), vPadding.y * 5 + FP->getLinearVelocityY ( creatorid ), 0 ); /*FP->setLinearVelocity ( b->getid(), FP->getLinearVelocityX ( creatorid ) * 3, FP->getLinearVelocityY ( creatorid ) * 3, 0 );*/ } void BulletCollection::killBullet( int bulletid ) { // ************** kill a bullet ************* // remove it from the list Bullet *btemp = blist[bulletid]; // temporary pointer to hold item to be erased blist.erase( bulletid ); delete btemp; } void BulletCollection::update() { // ***************************** update() ****************************** // update the collection and everything in it std::map < int, Bullet *>::iterator iter = blist.begin(); // temp iterator to go through the map blist Bullet *tb; // temp bullet pointer while ( iter != blist.end() ) { tb = iter->second; iter++; // iterate BEFORE possibly calling killBullet() and invalidating the whole thing tb->update(); // now call update } /*for ( iter = blist.begin(); iter != blist.end(); iter++ ) { iter->second->update(); }*/ } Bullet::Bullet( int in_id, int in_ownerid, BulletCollection *in_bc, int in_ttl ) { id = in_id; ownerid = in_ownerid; BC = in_bc; ttl = in_ttl; 
dbMakeObjectSphere ( id, 3 ); // create physical object BC->FP->makeSphere ( id, true ); // add to physics program } Bullet::~Bullet () { // *********** DTOR *********** // * Release the object created BC->FP->releaseActor(id); // release Actor (delete Object) from Physics through BulletCollection dbDelete(id); // delete the object from DGDK } void Bullet::update() { // ********************** Bullet Update() ****************************** if ( ttl < 0 ) { BC->killBullet( id ); // tell the collection to delete, id is private member of Bullet } ttl--; }
I did look up Heap Corruption, and now have the idea that I must be trying to free the same resource twice… but void BulletCollection::killBullet( int id ) is the only place I have delete!
Please advise.
Thanks,
Flamesilver
PS: Yes, I’m aware that ~BulletCollection() DTor still has that iterator problem. That’s an easy fix. But alas, it’s still crashing when a ttl runs to 0 on a bullet, I think.
Heap corruption can be due to a number of issues.
Edit: are you sure GetTicket() never returns the same int twice?
The only reason why I don’t think the TicketSystem class is at fault is because as soon as the first bullet dies (ttl < 0) I get the heap corruption error.
I’m really going to have to think long and hard and test a few things to see what’s really bugging the system…
FOUND IT!
void Bullet::update()
{   // ********************** Bullet Update() ******************************
    ttl--;   // decrement BEFORE so we're not writing to already deleted memory
    if ( ttl <= 0 )
    {
        BC->killBullet( id );  // tell the collection to delete, id is private member of Bullet
    }
}
Previously ttl--; was after the killBullet call, so it would try to write to a variable that had already been deleted.
Thanks for telling me about the rules of Heap. Devmaster.net is so awesome!
PS: Without you guys, I wouldn’t’ve gotten very far. So far I’ve got a 3D space game with overhead fixed camera, overhead chase, and 3rd person camera view, full physics implemented via fulcrum, and I can fire bullets that interact with other ships (and give knockback). Soon I’ll have a working game! Thanks guys!
Need some help on dynamic memory allocation (actually, de-allocation) as I’m re-navigating the learning waters for C++.
Say I have 2 classes -
class BulletCollection;
class Bullet;
BulletCollection is responsible for instantiating / destroying Bullet objects by allocating new Bullet instances and storing the pointer in its std::map<int, Bullet*> so it can be iterated through, etc. Looks something like:
My problem is my BulletCollection::killBullet() is giving me weird issues (crashes).
And in my BulletCollection::killBullet( in_id ) I have something like:
…?
Run selected text only
Hi all,
Sorry for the newbie question, how can I run selected text (code) only instead of the full script?
Is there a keyboard shortcut?
Thanks a lot!
Eduardo
You can use the
editor module to get the selected text range and then split the lines into a list. Then you should be able to iterate the list and use
exec to run each line. Alternatively, you could create a temp script at run time, write the selected code to that script, then run the script. Just make sure to save the stream and close it.
If you need an example I can write one up for you, but this should get you in the right direction.
@etalamo, what @stephen said, except even simpler:
import editor

start, end = editor.get_selection()
exec(editor.get_text()[start:end])
With this open, go to wrench menu, Edit it and add (+) this script as something you can then run from the wrench menu on your selected piece of code.
If you use an external keyboard, it is possible to tie this to a key combo.
Thank you very much, it worked perfectly!
@mikael yes I’m using a bluetooth keyboard, do you know how can I tie it to a keyboard shortcut?
Again, thanks a lot!
@etalamo, I think the simplest way right now would be to install Black Mamba with the 1-liner here.
Then you create script like this (sets the magic key to be ⌘E):
import editor
from blackmamba.uikit.keyboard import (
    register_key_command, UIKeyModifier
)

def exec_selected():
    start, end = editor.get_selection()
    exec(editor.get_text()[start:end])

register_key_command(
    'e',
    UIKeyModifier.COMMAND,
    exec_selected,
    'Execute selected code'  # Optional discoverability title (hold down Cmd)
)
If that works as it should, place the code above in a file called
pythonista_startup.py in the
site-packages-3 directory (add to it if you already have something there).
Unfortunately this last step didn’t work (I tried with different shortcut options). Nevertheless, Pythonista automatically adds a shortcut (I discovered it while holding the Command button down).
It’s not the simplest shortcut but at least I have one :)
Anyway, thank you for your help!!
Solution 1
def neutralize_uppercase(stringy):
    count = 0
    while count < len(stringy):
        if stringy[count] == stringy[count].upper():
            stringy = ""
            break
        count += 1
    return stringy
No Uppercase Allowed
Complete the function
neutralize_uppercase so that it returns
stringy the same as it was received if there are no capital letters in it, or an empty string "" if there were. You need to use a while loop to go through each letter of the string one at a time to check for uppercase letters.
Examples:
# String has no uppercase letters so returns input string unchanged
>>> neutralize_uppercase("snitch")
'snitch'

# String has uppercase letters so returns empty string
>>> neutralize_uppercase("eXpelliarMus")
''
Hint: You'll probably need to use len() to figure out your condition to stop looping through the string. You'll also need to use upper().
The question is to find the number of average elements in an array.
I know that this can be made efficient but I wanted to try to code the NxNxN solution as well. However, it’s giving a wrong answer. What am I missing?
#include <iostream>
#include <algorithm>
using namespace std;

int main()
{
    int n, a[10001];
    int count = 0;
    cin >> n;
    for (int i = 0; i < n; i++)
        cin >> a[i];
    for (int i = 0; i < n; i++)
    {
        for (int j = 0; j < n; j++)
        {
            for (int k = j + 1; k < n; k++)
            {
                if ((a[k] + a[j]) == 2 * a[i])
                {
                    count++;
                }
            }
        }
    }
    cout << count << endl;
    return 0;
}
|
collective.zodbbrowser 0.1
A zodb browser for Zope2
Introduction
Collective.ZodbBrowser is a web application to browse, inspect, and introspect Zope's ZODB objects. It is inspired by Smalltalk's class browser and by ZODB browsers for Zope 3.
There is a demo video available at YouTube's menttes channel.
Using ZodbBrowser with your buildout
If you already have a buildout for Plone 3 or Plone 4 running, edit buildout.cfg to add collective.zodbbrowser to the eggs and zcml sections of the buildout and instance parts respectively.
[buildout] ... eggs = collective.zodbbrowser ...
Autoinclude is automatically configured to work in Plone but if you are using any other systems, make sure to add a zcml slug:
... [instance] ... zcml = collective.zodbbrowser
Then run bin/buildout to make the changes effective.
Changelog
collective.zodbbrowser 0.1
- Changed namespace to avoid conflicts on Windows platforms. Thanks to Laurence Rowe.
- Added Smartfilter for Plone folders. Thanks to Elizabeth Leddy and
- Added annotations list
- Switch from View to ViewManagementScreens Permissions. Thanks to Elizabeth Leddy.
- Added alternate view that starts to filter out objects that are unlikely to be navigated. Thanks to Elizabeth Leddy.
- Rename views to not conflict with certain built in items. Thanks to Elizabeth Leddy.
- Render sites with broken interfaces. Thanks to Elizabeth Leddy.
- Bumped the jquery dynatree version, fixed missing dependencies for jq libs. Thanks to Elizabeth Leddy.
- Fix setup.py so that autoinclude works with plone without zcml. Thanks to Elizabeth Leddy.
- Make all browser views extend DoomedBrowserView, which makes sure the utility can't accidentally commit a transaction. Thanks to David Glick.
- Remove unused five:registerPackage directive and initialize method. Thanks to David Glick.
zope2.zodbbrowser 0.2 experimental version
- Added ui.layout for better layout and resizable panels. Thanks to Quimera.
- Updated jquery from 1.4.2 to 1.4.4.
- Added pretty-printing format to show properties' values. Thanks to Laurence Rowe and Emanuel Sartor.
- Added support for older Pythons 2.4, 2.5. Thanks to Laurence Rowe.
- Included module and file path for source code. Thanks to davidjb.
- Added z3c.autoinclude.plugin support to remove the zcml entry on buildout. Thanks to aclark.
zope2.zodbbrowser 0.1 experimental version
- Initial release includes: Class and Ancestors, Properties, Callables and Interfaces provided.
- Support for Zope 2.13.x
- Support for Firefox 3.6 and Chrome 5.0. No support Internet Explorer yet.
- Author: Roberto Allende - Menttes SRL
- Keywords: zope2 zope zodb python plone
- License: GPL
- Categories
- Package Index Owner: menttes
- DOAP record: collective.zodbbrowser-0.1.xml
|
Mainly due to the fact my book barely explains array's and doesn't really tell how to manually take user-input and feed it into an array, I am having some issues as of now.
I am trying to take three user defined inputs and store them into my array int real[2].
I get no compile errors as it is, and I am more or less wondering if someone can help me fix up whatever it is that needs fixing. I've checked online for how to manually put values into an array, however it seems everything I see does it via a counter, which doesn't help me at all. Any help would be appreciated, and I'll post my code below.
(Not finished, just using a printf statement to try and get it so the output is actually real and I know it works properly.)
Code:
/* Loan Calculator
Name: Tyler Sinclair-Day
Purpose: To create a calculator which will use a loan calculation to display a table showing how long a loan will take
to pay off, as well as how much will be paid each month, along with other information as shown in the program
*/
#include <stdio.h>
#include <math.h>
int menu(void); //Prototype for int menu(void) function
int main(void)
{
int real[2];
//Printf showing options to user as what each number represents for entering data//
printf("--------------------\nEnter in the Numeric \nCharacter next to your choice"
"\n--------------------\n\n");
printf("||1. Enter the Principal ||\n"
"||2. Enter the annual interest rate ||\n"
"||3. Enter duration of loan in months ||\n"
"||4. Calculate your loan payments. ||\n"
"||5. Show Loan Table. ||\n"
"||0. Exit program ||\n");
real[2] = menu(); //Calling menu function
//printf("Month::Old Balance::Payment::Interest::Principal::New Balance");
printf("%d and %d and %d", real[0], real[1], real[2]);
getchar();
getchar();
return 0;
}
int menu(void)//Menu function for gathering information from user.
{
//int principal;
//int rate;
//int months;
int real[2];
int choice;
while ((scanf("%d", &choice)) != 0)//Runs until it receives '0' then quits
{
switch (choice)//Case used for gathering data
{
case 1://Gets data for principal
printf("What is the principal you wish to enter?:\n");
scanf("%d", &real[0]);
break;
case 2://Gets data interest rate
printf("What is the annual interest rate?:\n");
scanf("%d", &real[1]);
break;
case 3://Gets data for months
printf("How many months in which does the loan need to be paid?:\n");
scanf("%d", &real[2]);
break;
case 4://Calculates table
printf("Your payment chart is calculated.\n");
break;
case 5://Prints table out
printf("Below is your loan table.\n\n\n");
break;
case 0:
return 0;
default://Will display message if number >5 or <1 is entered.
printf("You entered a value outside of 1 to 5.\nPlease only use 1, 2, 3, 4, 5 or 0 to exit.\n\n");
}
}
return real[2];
}
/*for(int i = 0;i < months; ++i)
{
printf(" new balance: %d ", newbalance);
principal = principal * (1 + (rate / 12));
oldbalance=oldbalance-principal;
newbalance=oldbalance;
}*/
|
Accessing data using Language Integrated
Query (LINQ) in ASP.NET WebPages – Part 1
This article comprises of two parts; Part 1
deals with the introduction to LINQ and LinqDataSource control in ASP.NET and
describes how to define and retrieve an in-memory data collection and display
data in a web page. Part 2 explains how to create entity classes to represent
SQL Server database and tables using Object Relational Designer and display data
in a web page using LinqDataSource control.
Introduction to LINQ
Language Integrated Query (LINQ) is a query
syntax that defines a set of query operators that allow traversal, filter, and
projection operations to be expressed in a declarative way in any .NET-based
programming language.
With the advent of LINQ, a
ground-breaking, new concept of a query has been introduced as a
first-class language construct in C# and Visual Basic. LINQ simplifies the way
you work with data queries. LINQ offers you a unified, declarative syntax model
to query any data source including an XML document, a SQL database, an ADO.NET
DataSet, an in-memory collection, or any other remote or local data source that
chooses to support LINQ. Language Integrated Queries are strongly typed, and
designer tools can be used to create object-relational mappings. It is now easier
for developers to catch many errors at compile time; LINQ also supports
IntelliSense and debugging.
Sample Query
A query is an expression that retrieves data
from a data source. All LINQ query operations consist of three essential
actions: obtain the data source, create the query, and execute the
query.
In LINQ, the execution of the query is
isolated from the query itself and hence data cannot be retrieved just by
creating a query.
In the following sample, a query retrieves
even numbers from an array of integers.
// Data source.
int[] numbers = new int[10] { 0, 1, 2, 3, 4, 5, 6, 8, 9, 10 };

// Query creation.
IEnumerable<int> numQuery =
    from num in numbers
    where (num % 2) == 0
    select num;

// Query execution.
foreach (int j in numQuery)
{
    Console.Write("{0,1} ", j);
}
The output will be:
0 2 4 6 8 10
LinqDataSource Control in
ASP.NET
As many of you are familiar with the various
DataSource controls in ASP.NET 2.0, the 2008 version of ASP.NET includes a new
DataSource control called the LinqDataSource control. The LinqDataSource control
enables us to retrieve and update data using Language Integrated Queries (LINQ)
from an in-memory collection of data or SQL Server database tables. It
automatically generates data commands for select, update, delete and insert
operations, so you do not have to create them manually.
This control has two main
properties:
1. ContextTypeName property that represents the name of the type object
that contains data collection
2. TableName property that represents the name of the public field or
property that returns data collection, or table name in case of database
access.
Walkthrough 1: Connecting an
in-memory data collection using the LinqDataSource control and performing queries to
display data in an ASP.NET page
Open Visual Studio 2008 and create a New
WebSite Project.
Add a
new class file to the App_Code folder of the project. Define classes that
supply data to the control and write LINQ queries to retrieve data from
them.
public class Student
{
    public string First { get; set; }
    public string Last { get; set; }
    public int ID { get; set; }
    public List<int> Scores;
}

public class StudentData
{
    public static List<Student> students = new List<Student>
    {
        // ... initialized list of Student objects ...
    };
}
You have
just added a Student Class and an initialized list of students in the class
file. The data source for the queries is a simple list of Student objects. Each
Student record has a first name, last name, and an array of integers that
represents their test scores in the class.
Creating Queries
You can
create a number of queries to retrieve data from the data source, and in this
article I have created a few queries for demonstration purposes. Each query is
assigned to a public property or field so that it can be used by the
LinqDataSource controls in the web page.
1. Queries of IEnumerable<StudentAverage>
type
public class StudentAverage
{
    public int ID;
    public string First;
    public string Last;
    public double ScoreAverage;
}

public IEnumerable<StudentAverage> studentAverageMarks =
    from student in StudentData.students
    where student.Scores[0] > 90
    select new StudentAverage
    {
        ID = student.ID,
        First = student.First,
        Last = student.Last,
        ScoreAverage = student.Scores.Average()
    };
Declared a new type StudentAverage that consists of ID, First, Last, and ScoreAverage fields. This query returns the list of students along with their average marks.
2. Queries of IEnumerable<int> type
public static int studentID;

public IEnumerable<int> GetStudentTotal =
    from student in StudentData.students
    where student.ID == StudentData.studentID
    select student.Scores.Sum();
Declared another public property that returns an int type of collection that holds the result from the query. This query retrieves the sum total of all marks for a particular student, and it uses a condition in the where clause. The user will supply data to the parameter studentID at runtime.
public IEnumerable<int> GetStudentList =
    from student in StudentData.students
    select student.ID;
GetStudentList is another property added to the class that stores the
result of a query. This query returns a collection of Student.ID.
3. Queries of Implicit Type (Anonymous Type)
public IEnumerable GetData
{
    get
    {
        var studentMarks = from student in StudentData.students
                           where student.ID > 110
                           select new
                           {
                               ID = student.ID,
                               First = student.First,
                               Last = student.Last,
                               ScoreTotal = student.Scores.Sum()
                           };

        foreach (var s in studentMarks)
        {
            yield return s.ID + ", " + s.Last + ", " + s.First + ", " + s.ScoreTotal;
        }
    }
}
Now you
have just added a public property that returns a query result as a collection of
students' marks data. Note that the query involves an anonymous type that
contains fields such as ID, First, Last and ScoreTotal. The query returns the
implicit type var and it uses iterators to return the elements in the
collection.
By this time, you have your data sources and
queries ready for access by the data-bound controls of a web page.
Designing the Webpage to view
data
To demonstrate the usage of Linq data
sources and to view data on a web page, the layout of the page has been designed
in three sections.
Section 1: View Student
Details such as Student ID, First Name, Last Name and Average Score. This view
executes the Query No. 1 studentAverageMarks defined earlier in this
article.
Drag a LinqDataSource
control(ID=LinqDataSource1) into the page from the data tab in the toolbox and
configure it to access the data sources and queries you have just created in the
project as below in the Configure Data Source Wizard.
1. Choose “StudentData” as a Context object. (Pic1)
2. Choose “studentAverageMarks” as Table in the Data Selection and Check
all the fields appear in the Select list.(Pic2)
3. Click the “Finish” button to complete the configuration of
LinqDataSource control.
Drag
a GridView control(ID=GridView1) into the page from the data tab in the tool box
and set Data Source property to LinqDataSource1.
Section 2: View Total
Score for a selected student ID listed in the drop-down list. This section
executes the Query No. 2 GetStudentTotal defined earlier in this
article.
Drag a LinqDataSource
control(ID=LinqDataSource2) into the page from the data tab in the toolbox and
configure it to access the data sources and queries you have just created in the
project as below in the Configure Data Source Wizard.
1. Choose “StudentData” as a Context object.
2. Choose “GetStudentList” as Table in the Data Selection and Check the
field appear in the Select list. (Pic3)
Drag a Drop-down list control
(ID=DropDownList1) into the page and set the DataSource Property to
LinqDataSource2. Remember to check the box that enables AutoPostBack behaviour
to this control.
Place a TextBox and Label controls in the
page and add the following code to SelectedIndexChanged Event handler of the
DropDownList1.
protected void DropDownList1_SelectedIndexChanged(object sender, EventArgs e)
{
    StudentData sd = new StudentData();
    StudentData.studentID = Convert.ToInt32(DropDownList1.SelectedItem.Text);
    TextBox2.Text = Convert.ToString(sd.GetStudentTotal.ToList<int>()[0]);
}
Section 3: View Student
Details such as Student ID, First Name, Last Name and Total Score. This view
executes the Query No. 3 studentMarks returned from the public property GetData
defined earlier in this article.
Drag a LinqDataSource
control(ID=LinqDataSource3) into the page from the data tab in the toolbox and
configure it to access the data sources and queries you have just created in the
project as below in the Configure Data Source Wizard.
1. Choose “StudentData” as a Context object. Refer Pic1.
2. Choose “GetData” as Table in the Data Selection and Check all the
fields appear in the Select list.
Drag a
ListBox control (ID=ListBox1) into the page from the data tab in the tool box
and set Data Source property to LinqDataSource3.
That's it. Now run the web page and see the
output as displayed below. (pic4)
Download Source
Summary
The Part-I of this article has demonstrated
how to define and connect to in-memory data sources and explained various ways
of defining and executing queries on that data collection. You are now invited
to walk-through the Part-II of this article that defines entity classes to
database objects and write and execute Linq queries on database table
objects.
|
Man Page
Manual Section... (3) - page: globfree
NAME
glob, globfree - find pathnames matching a pattern, free memory from glob()
SYNOPSIS
#include <glob.h>

int glob(const char *pattern, int flags,
         int (*errfunc) (const char *epath, int eerrno),
         glob_t *pglob);

void globfree(glob_t *pglob);
DESCRIPTION
The glob() function searches for all the pathnames matching pattern according to the rules used by the shell. The results of a glob() call are stored in the structure pointed to by pglob. This structure is of type glob_t (declared in <glob.h>) and includes the following elements defined by POSIX.2 (more may be present as an extension):
typedef struct {
    size_t   gl_pathc;    /* Count of paths matched so far */
    char   **gl_pathv;    /* List of matched pathnames. */
    size_t   gl_offs;     /* Slots to reserve in gl_pathv. */
} glob_t;
Results are stored in dynamically allocated storage.
The errfunc argument, if not NULL, is called when glob() encounters a path that causes an error; it is passed the failing path (epath) and the errno value (eerrno) as set by functions such as opendir(3), readdir(3), or stat(2). If errfunc returns nonzero, or if GLOB_ERR is set, glob() will terminate after the call to errfunc.
Upon successful return, pglob->gl_pathc contains the number of matched pathnames and pglob->gl_pathv contains a pointer to the list of pointers to matched pathnames. The list of pointers is terminated by a NULL pointer.

RETURN VALUE
On successful completion, glob() returns zero. Other possible returns are:
- GLOB_NOSPACE
- for running out of memory,
- GLOB_ABORTED
- for a read error, and
- GLOB_NOMATCH
- for no found matches.
CONFORMING TO
POSIX.2, POSIX.1-2001.
NOTES
The structure elements gl_pathc and gl_offs are declared as size_t in glibc 2.1, as they should be according to POSIX.2, but are declared as int in libc4, libc5 and glibc 2.0.
BUGS
The glob() function may fail due to failure of underlying function calls, such as malloc(3) or opendir(3). These will store their error code in errno.
EXAMPLE
One example of use is the following code, which simulates typing
ls -l *.c ../*.c
in the shell:
glob_t globbuf;

globbuf.gl_offs = 2;
glob("*.c", GLOB_DOOFFS, NULL, &globbuf);
glob("../*.c", GLOB_DOOFFS | GLOB_APPEND, NULL, &globbuf);
globbuf.gl_pathv[0] = "ls";
globbuf.gl_pathv[1] = "-l";
execvp("ls", &globbuf.gl_pathv[0]);
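The example above omits error handling and never releases the results. A small self-contained helper (the function name is mine, not from this page) that checks glob()'s return value and frees the storage with globfree() might look like this:

```c
#include <glob.h>
#include <stdio.h>

/* Return the number of pathnames matching `pattern`; returns 0 on
 * no match or error. Always releases the glob_t with globfree(). */
size_t count_matches(const char *pattern)
{
    glob_t globbuf;
    size_t n = 0;
    int rc = glob(pattern, 0, NULL, &globbuf);

    if (rc == 0)
        n = globbuf.gl_pathc;
    else if (rc != GLOB_NOMATCH)
        fprintf(stderr, "glob(\"%s\") failed with code %d\n", pattern, rc);

    globfree(&globbuf);  /* safe after any glob() call on this struct */
    return n;
}
```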
SEE ALSO
ls(1), sh(1), stat(2), exec(3), fnmatch(3), malloc(3), opendir(3), readdir(3), wordexp(3), glob(7)
|
http://linux.co.uk/documentation/man-pages/subroutines-3/man-page/?section=3&page=globfree
|
CC-MAIN-2013-48
|
refinedweb
| 326
| 69.07
|
Targeting multiple tiles
In the previous example, the code targeted only a
single tile. Most real applications will instead target two or more tiles. It is
not possible to target multiple tiles using a single C application as the tiles
are complete, separate processors. Instead each tile runs its own application,
and each tile’s application can communicate with other tiles using the channels
documented previously. To aid development of multi-tile applications, the XMOS
tools allow the use of a special ‘mapping file’ which can be used to specify an
entry-point for each tile in a network. This is instead of specifying a
main
function in C - which is not allowed when a mapping file is used.
Warning
For historical reasons, the format of the mapping is C-like. However, this format should not be treated as C-source and is likely to be deprecated in future versions of the tools and replaced with a purely declarative format. Developers are therefore recommended to avoid any procedural code within a mapfile.
Using a mapfile
To map code onto both of the tiles on an XCORE-200-EXPLORER, it is necessary
to describe that mapping in a file which we will call
mapfile.xc. An
example is shown below:
#include <platform.h>

extern "C" {
    void main_tile0();
    void main_tile1();
}

int main(void) {
    par {
        on tile[0]: main_tile0();
        on tile[1]: main_tile1();
    }
    return 0;
}
This mapfile references the two functions in
main.c:
#include <stdio.h>

void main_tile0() {
    printf("Hello from tile 0\n");
}

void main_tile1() {
    printf("Hello from tile 1\n");
}
Now build and execute this multi-tile application on real hardware to see the printed output:
$ xcc -target=XCORE-200-EXPLORER mapfile.xc main.c
$ xrun --io a.xe
Hello from tile 0
Hello from tile 1
Summary
In this example, you have written a mapfile using the declarative components of XC language to deploy two C functions onto the two tiles of an XCORE-200-EXPLORER.
See also
At this point, you might proceed to the next topic, or you might choose to explore this example further:
|
https://www.xmos.ai/documentation/XM-014363-PC-4/html/tools-guide/quick-start/multi-tile.html
|
CC-MAIN-2022-21
|
refinedweb
| 348
| 52.29
|
As for general topic of backwards compatibility I think going “fully open” might be the
best longterm solution.
Once in a while the topic of changing metadata keeps reappearing and there is no guarantee
it will not strike back again. Opening up metadata will release ourselves from burden of producing
migration tools and shipping them with the new version of the binaries with revised catalog.
The performance (mainly storage) impacts of that solution will be tolerable especially considering
how much data is usually stored in metadata.
Moreover, being big proponents of semi-structured data, it does make perfect sense for us
to eat our own dog food here.
> On Dec 14, 2015, at 18:04, Ildar Absalyamov <ildar.absalyamov@gmail.com> wrote:
>
> I guess the main argument for 2 would be eliminating broken metadata records prior to
backwards compatibility cutoff.
> The last thing what we want to do is to be stuck with wrong implementation for compatibility
reasons. Once the functionality needed for 3 is there we can again introduce those indexes
without building sophisticated migration subsystem.
>
>> On Dec 14, 2015, at 17:55, Mike Carey <dtabass@gmail.com> wrote:
>>
>> SO - it seems like 3 is the right long-term answer, but not doable now?
>> (If it was doable now, it would obviously be the ideal choice of the three.)
>> What would be the argument for doing 2 as opposed to 1 for now?
>> As for the question of backwards compatibility, I actually didn't sense a consensus
yet.
>> I would tentatively lean towards "right" over "backwards compatible" for this change.
>> What are others thoughts on that?
>> (Soon we won't have that luxury, but right now maybe we do?)
>>
>> On 12/14/15 3:43 PM, Steven Jacobs wrote:
>>> We just had a UCR discussion on this topic. The issue is really with the
>>> third "index" here. The code now is using one "index" to go in two
>>> directions:
>>> 1) To find datatypes that use datatype A
>>> 2) To find datatypes that are used by datatype A.
>>>
>>> The way that it works now is hacked together, but designed for performance.
>>> So we have three choices here:
>>>
>>> 1) Stick to the status quo, and leave the "indexes" as they are
>>> 2) Remove the Metadata secondary indexes, which will eliminate the hack but
>>> cost some performance on Metadata
>>> 3) Implement the Metadata secondary indexes correctly as Asterix indexes.
>>> For this solution to work with our dataset designs, we will need to have
>>> the ability to index homogeneous lists. In addition, we will have reverse
>>> compatibility issues unless we plan things out for the transition.
>>>
>>> What are the thoughts?
>>>
>>>
>>> Orthogonally, it seems that the consensus for storing the datatype
>>> dataverse in the dataset Metadata is to just add it as an open field at
>>> least for now. Is that correct?
>>>
>>> Steven
>>>
>>>
>>> On Mon, Dec 14, 2015 at 1:23 PM, Mike Carey <dtabass@gmail.com> wrote:
>>>
>>>> Thoughts inlined:
>>>>
>>>> On 12/14/15 11:12 AM, Steven Jacobs wrote:
>>>>
>>>>> Here are the conclusions that Ildar and I have drawn from looking at
the
>>>>> secondary indexes:
>>>>>
>>>>> First of all it seems that datasets are local to node groups, but
>>>>> dataverses can span node groups, which seems a little odd to me.
>>>>>
>>>> Node groups are an undocumented but to-be-exploited-someday feature that
>>>> allows datasets to be stored on less than all nodes in a given cluster.
As
>>>> we face bigger clusters, we'll want to open up that possibility. We will
>>>> hopefully use them inside w/o having to make users manage them manually
>>>> like parallel DB2 did/does. Dataverses are really just a namespace thing,
>>>> not a storage thing at all, so they are orthogonal to (and unrelated to)
>>>> node groups.
>>>>
>>>>> There are three Metadata secondary indexes: GROUPNAME_ON_DATASET_INDEX,
>>>>> DATATYPENAME_ON_DATASET_INDEX, DATATYPENAME_ON_DATATYPE_INDEX
>>>>>
>>>>> The first is used in only one case:
>>>>> When dropping a node group, check if there are any datasets using this
>>>>> node
>>>>> group. If so, don't allow the drop
>>>>> BUT, this index has a field called "dataverse" which is not used at all.
>>>>>
>>>> This one seems like a waste of space since we do this almost never. (Not
>>>> much space, but unnecessary.) If we keep it it should become a proper
>>>> index.
>>>>
>>>>> The second is used when dropping a datatype. If there is a dataset using
>>>>> this datatype, don't allow the drop.
>>>>> Similarly, this index has a "dataverse" which is never used.
>>>>>
>>>> You're about to use the dataverse part, right? :-) This index seems like
>>>> it will be useful but should be a proper index.
>>>>
>>>>> The third index is used to go in two cases, using two different ideas
of
>>>>> "keys"
>>>>> It seems like this should actually be two different indexes.
>>>>>
>>>> I don't think I understood this comment....
>>>>
>>>>
>>>>> This is my understanding so far. It would be good to discuss what the
>>>>> "correct" version should be.
>>>>> Ste
>
Best regards,
Ildar
|
http://mail-archives.us.apache.org/mod_mbox/asterixdb-dev/201512.mbox/%3C98E6F7EC-988F-43F2-BC65-A47F6CB28AAF@gmail.com%3E
|
CC-MAIN-2020-45
|
refinedweb
| 811
| 63.39
|
NumPy loadtxt is a method for loading text files faster than parsing them with plain Python. You can load simple text files and manipulate them easily. In this tutorial, you will learn how to use the NumPy loadtxt method.
Before going to the coding part, let's first look at its syntax.
numpy.loadtxt(fname, dtype=<class 'float'>, comments='#', delimiter=None, converters=None, skiprows=0, usecols=None, unpack=False, ndmin=0, encoding='bytes', max_rows=None)
The most commonly used parameters are explained below.
fname: File, filename, or generator to read. If the filename extension is .gz or .bz2, the file is first decompressed.
dtype: Data-type of the resulting array; default is of float type.
delimiter : The string used to separate values. The default is whitespace.
converters: A dictionary mapping column number to a function that will parse the column string into the desired value. E.g., if column 0 is a date string: converters = {0: datestr2num}. The default value is None.
skiprows: Allows you to skip lines of text at the start of the file. The default is 0.
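To make the converters parameter concrete, here is a small hedged sketch (the data and the strip_percent helper are invented for this example) that strips a percent sign from column 0 before conversion:

```python
import numpy as np
from io import StringIO

def strip_percent(field):
    # Depending on the NumPy version and the encoding argument,
    # loadtxt may pass the converter bytes or str, so handle both.
    if isinstance(field, bytes):
        field = field.decode()
    return float(field.rstrip("%"))

data = StringIO("45% 1.5\n80% 2.5")
arr = np.loadtxt(data, converters={0: strip_percent})
print(arr)  # column 0 parsed as 45.0 and 80.0
```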
Step by Step to implement numpy loadtxt
Follow the steps given below.
Step 1: Import all the necessary libraries.
In our example, we are importing two Python modules: NumPy, and StringIO from the io module. Anything wrapped in StringIO acts as a file object. Let's import both of them.
import numpy as np
from io import StringIO
Step 2: Execute the following Examples.
Example 1: Directly loading File Object inside the numpy.loadtxt() method.
Let’s create a text file. Note that the number of columns before and after the newline character (\n) must be the same.
str = StringIO("10 11 12 13 \n14 15 16 17")
np.loadtxt(str)
Output
Example 2: Using the delimiter as an argument.
In this example, We will create a String file separated with comma (,). Then I will load the text file with the delimiter = ‘,’ as an argument.
str2 = StringIO("10, 11, 12, 13, \n14, 15, 16, 17")
np.loadtxt(str2, delimiter=", ", usecols=(0, 1, 2, 3), unpack=True)
usecols=(0, 1, 2, 3) selects columns 0 to 3 from each row, before and after the "\n" character. To retrieve each column as a separate array, we set unpack to True.
The output of the above code is below.
Example 3: Categorizing the text file.
Suppose you have text data you want to categorize. Then you can pass dtype as an argument. For example, I have a text file with M and F as abbreviations for Male and Female, respectively.
str3 = StringIO("M 100 200\nF 300 400")
np.loadtxt(str3, dtype={'names': ('gender', 'age', 'weight'),
                        'formats': ('S1', 'i4', 'f4')})
You will get the following output
Example 4: Retrieving data from a text file.
In this example, I will load a text file into the loadtxt() method and then print out the values. The text file has two columns, RollNo and the corresponding Age.
RollNo Age 1 26 2 25 3 39 4 27
roll, age = np.loadtxt('text.data.txt',  # name of file
                       skiprows=1,       # skip first row
                       unpack=True)      # make each column an array
print(roll)
print(age)
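If you want to try the same example without creating text.data.txt on disk, an in-memory buffer works identically (a sketch mirroring the file contents shown above):

```python
import numpy as np
from io import StringIO

# Same contents as the text.data.txt example above
buffer = StringIO("RollNo Age\n1 26\n2 25\n3 39\n4 27")

roll, age = np.loadtxt(buffer,
                       skiprows=1,   # skip the header row
                       unpack=True)  # return each column as its own array
print(roll)  # [1. 2. 3. 4.]
print(age)   # [26. 25. 39. 27.]
```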
Output
Conclusion
You will often use the NumPy loadtxt method for loading text files. It is not only fast but also makes data manipulation easy. The techniques described here are the ones you will apply most often, so make sure you understand them well.
If you have any queries regarding this post, then you can contact us for more information.
Source:
Official Numpy loadtxt Documentation
Join our list
Subscribe to our mailing list and get interesting stuff and updates to your email inbox.
|
https://www.datasciencelearner.com/numpy-loadtxt-implementation-examples/
|
CC-MAIN-2021-39
|
refinedweb
| 615
| 68.06
|
Create a Script Plugin
The process to create a script plugin is outlined on Setting up a Development Environment. For a basic plugin, it is only necessary to follow the guide up until the Advanced IntelliJ IDEA Configurations section.
For a more advanced plugin, read on.
The scriptrunner.yaml file should be at the root of the plugin’s jar file. Keeping it in
src/main/resources will make sure this is always the case. The following example shows a script that uses a helper class:
<jira-home>/scripts/foo.groovy
groovy
import util.Bollo

log.debug("Hello from the script")
Bollo.sayHello()
<jira-home>/scripts/util/Bollo.groovy
groovy
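The contents of util/Bollo.groovy are not shown on this page; a minimal sketch consistent with the import and the Bollo.sayHello() call above might be:

```groovy
package util

class Bollo {
    static void sayHello() {
        // hypothetical body - the original helper's implementation is not shown
        println "Hello from Bollo"
    }
}
```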
If there are too many results, simply keep typing characters to refine the search. As you add characters to your search phrase, the results will be refined.
When you navigate into a directory, the results will only show the scripts in that directory. If you want to search ALL scripts, just remove all characters from the input and start again.
You can use tab completion to navigate the directory structure. If you go too far, simply remove the characters to the previous slash and then the search should come back with the correct results.
Result Types
This table demonstrates the different types of result that the search could return.
Table 1. Result Types
|
https://docs.adaptavist.com/sr4js/6.25.0/best-practices/write-code/create-a-script-plugin
|
CC-MAIN-2021-39
|
refinedweb
| 228
| 65.62
|
Modification of the mbed-src library only for the STM32F030F4, a very cheap microcontroller in a 20-pin TSSOP package, with 16 Kbytes of flash and 4 Kbytes of RAM. **Target for the online compiler must be Nucleo 32F030R8.**
Dependents: STM32F031_blink_LED_2
Information
For programming similar chips in TSSOP20 package, but with 32kB flash: STM32F031F6 and STM32F050F6 (virtually identical to STM32F031F6 and no longer produced but still on sale), it is recommended to use NUCLEO-F031K6 as compiler platform, and the mbed library without the need for any modifications.
Just remember that the only significant difference between these chips and NUCLEO-F031K6 is the lack of pins: PB_0, PB_3, PB_4, PB_5, PB_6, PB_7, PA_11, PA_12, PA_15 in TSSOP-20.
STM32F030F4 pinout (pin functions supported in mbed library).
other pins:
- Remove jumpers CN2 on Nucleo when CN4 is connected to STM32F030F4
- NRST connection is not necessarily needed, but in this case, after programming it is necessary to manually reset the target processor
STM32F030F4 programming using Nucleo (any type):
Notes:
- When programming using the Nucleo virtual disk (drag and drop), first turn on the power to the STM32F030F4, and then connect the Nucleo to USB. When programming with "ST-LINK Utility", it does not matter.
STM32F030F4 programming using Arduino (as a simple USB-Serial converter) and the FlyMcu program:
Notes:
- For the USART in the STM32F030F4, the only 5V-tolerant TX/RX pins are pins 17 and 18. These are exactly the pins used by the internal serial bootloader, so you can use an Arduino or another USB-serial converter operating at the standard 5V.
- When FlyMcu is used, the binary file from the mbed online compiler needs to be converted to an Intel hex file, adding the data offset 0x08000000 during the conversion (or, if the offset is 0, manually add/edit the first line of the .hex file to ":020000040800F2").
- During the programming procedure, pin 1 (BOOT0) should be connected to 3.3 V. Before contacting the bootloader from the loader program, temporarily short pin 4 (NRST) to GND to reset the chip. After programming, BOOT0 is connected to GND.
- In this setup with an Arduino Uno, the "Flash loader demonstrator" from STM does not work (it does not recognize the response from the chip at the initial stage of connection). But with an Arduino Duemilanove, the "STM Flash loader demonstrator" program (ver. 2.7.0) works perfectly. No additional file conversion is needed there (unlike with FlyMcu); you can use the binary file directly from the mbed online compiler.
Warning.
Because of the small flash size of the STM32F030F4, for programs that use the UART it is suggested not to use the Serial class but to use the <cstdio> (stdio.h) functions that directly use stdout and stdin (e.g. printf(), putchar(), getchar(), vprintf(), scanf()).
Example:
version with serial class
#include "mbed.h" Serial pc(USBTX, USBRX); // tx, rx int main() { pc.printf("Hello World!\n"); }
consuming 13.7kB FLASH and 1.5kB RAM
but this:
version without serial class
#include "mbed.h" int main() { printf("Hello World!\n"); }
consuming only 8.7kB FLASH and 0.4kB RAM
A difference of 5kB of used flash (with 16kB total size)!!!
However, if you need UART settings for stdin and stdout other than the defaults (that is, 9600 baud, pins PA_2, PA_3), you can do it as in this example:
change uart pins and speed
#include "mbed.h" // declarations needed to change here the parameters of stdio UART extern int stdio_uart_inited; extern serial_t stdio_uart; int main() { // for change pins serial_init(&stdio_uart, PA_9,PA_10); stdio_uart_inited=1; // for change baud rate serial_baud(&stdio_uart, 115000); printf("Hello World!\n"); }
uVision users
When compiling a program online with this library using Keil, set "One ELF Section per Function" and Optimisation: Level 2 in the project options to prevent linker errors.
Additional information (and inspiration for this modification):
|
https://os.mbed.com/users/mega64/code/mbed-STM32F030F4/shortlog/
|
CC-MAIN-2020-05
|
refinedweb
| 610
| 53.92
|
Has anyone encountered the same issue?
SQL> show user
USER is "SCOTT"
SQL> select * from pets;

NAME
-----------------------------------
PLUTO

SQL> conn / as sysdba
Connected.
SQL> create user GLS_DEV identified by test1234 default tablespace TSTDATA;

User created.

SQL> alter user GLS_DEV quota 25m on TSTDATA;

User altered.

SQL> grant create session, resource to GLS_DEV;

Grant succeeded.

--- Granting SELECT privilege on scott.pets to tstrole and then granting this role to GLS_DEV.

SQL> conn / as sysdba
Connected.
SQL> create role tstrole;

Role created.

SQL> grant select on scott.pets to tstrole;

Grant succeeded.

SQL> grant tstrole to GLS_DEV;

Grant succeeded.

SQL> conn GLS_DEV/test1234
Connected.
SQL> select * from scott.pets;

NAME
-----------------------------------
PLUTO

---- All fine till here. From the SQL engine, the GLS_DEV user can SELECT the scott.pets table.
--- Now I am going to create a PL/SQL object in GLS_DEV which tries to refer to scott.pets

SQL> show user
USER is "GLS_DEV"
SQL> create or replace procedure my_proc is
       myvariable varchar2(35);
     begin
       select name into myvariable from scott.pets;
       dbms_output.put_line(myvariable);
     end my_proc;
     /

Warning: Procedure created with compilation errors.

SQL> show errors
Errors for PROCEDURE MY_PROC:

LINE/COL ERROR
-------- -----------------------------------------------------------------
6/2      PL/SQL: SQL Statement ignored
6/41     PL/SQL: ORA-01031: insufficient privileges

  6* select name into myvariable from scott.pets;

--- The PL/SQL engine doesn't seem to know that GLS_DEV has the select privilege on scott.pets granted indirectly through a role
--- Fix
--- Instead of granting the privilege through a role, I am granting the SELECT privilege on scott.pets to GLS_DEV directly.
--- The error goes away, and I can compile and execute the procedure !!

SQL> conn / as sysdba
Connected.
SQL> grant select on scott.pets to GLS_DEV;

Grant succeeded.

SQL> conn GLS_DEV/test1234
Connected.
SQL> create or replace procedure my_proc is
       myvariable varchar2(35);
     begin
       select name into myvariable from scott.pets;
       dbms_output.put_line(myvariable);
     end my_proc;
     /

Procedure created.

SQL> set serveroutput on
SQL> exec my_proc;
PLUTO

PL/SQL procedure successfully completed.
N.Page wrote:
Yes, using AUTHID (invoker vs. definer rights) can make a difference.
Ok. Thanks everyone.
Is there any workaround for this, like using the AUTHID clause or something?

SQL> create role tstrole;

Role created.

SQL> grant create session, create procedure, create table to tstrole;

Grant succeeded.

SQL> create user slater identified by test123;

User created.

SQL> select privilege from dba_sys_privs where grantee = 'TEST123';

no rows selected

SQL> grant tstrole to slater;

Grant succeeded.

SQL> conn slater/test123
Connected.
SQL> select privilege from user_sys_privs;

PRIVILEGE
----------------------------------------
DEBUG CONNECT SESSION  -------------------------------> Never knew that any user will get these privileges by default.
DEBUG ANY PROCEDURE

SQL> show user
USER is "SLATER"
SQL> create or replace function random
  2  (dummy in varchar2)
  3  return number is
  4  x number;
  5  y number;
  6  begin
  7  execute Immediate 'alter session enable parallel ddl';
  8  select 4 into x from dual;
  9  y :=x;
 10  return (y);
 11  end random;
 12  /

Function created.

SQL> variable myvar number
SQL> exec :myvar:= random('a');

PL/SQL procedure successfully completed.

SQL> print myvar

MYVAR
----------
4
John
SQL> create role t_role;

Role created.

SQL> grant select on ops$oracle.t to t_role;

Grant succeeded.

SQL> create user a identified by a default tablespace users;

User created.

SQL> grant create session, create procedure to a;

Grant succeeded.

SQL> grant t_role to a;

Grant succeeded.

SQL> connect a/a
Connected.
SQL> select * from ops$oracle.t;

        ID DESCR
---------- ----------
         1 One
         1 Un

SQL> create function f (p_descr in varchar2) return number as
  2    l_num number;
  3  begin
  4    select id into l_num
  5    from ops$oracle.t
  6    where descr = p_descr;
  7    return l_num;
  8  end;
  9  /

Warning: Function created with compilation errors.

SQL> show error
Errors for FUNCTION F:

LINE/COL ERROR
-------- -----------------------------------------------------------------
4/4      PL/SQL: SQL Statement ignored
5/20     PL/SQL: ORA-00942: table or view does not exist

SQL> create or replace function f (p_descr in varchar2) return number as
  2    l_num number;
  3  begin
  4    execute immediate 'select id from ops$oracle.t where descr = :b1'
  5      into l_num using p_descr;
  6    return l_num;
  7  end;
  8  /

Function created.

SQL> select f('One') from dual;
select f('One') from dual
       *
ERROR at line 1:
ORA-00942: table or view does not exist
ORA-06512: at "A.F", line 4
N.Page wrote:
The principle is the same for both. Privileges acquired through roles are not valid in definer-rights (the default) stored procedures, for the reasons explained earlier in this thread. In the case of the simple select in my first version of the function, the visibility of the table was checked at compile time, since the compiler "knew" that the table would be accessed by the function. In the execute immediate version, the compiler only sees the string being passed to execute immediate but does not attempt to interpret it. The parsing of the string only happens when execute immediate passes it on to the SQL engine at run time. At that point, no roles are enabled, so the statement fails in the parse phase in SQL.
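As a hedged illustration (not part of the original thread): the invoker-rights workaround discussed above can be combined with dynamic SQL, so the statement is parsed at run time when the caller's roles are enabled. Static SQL would still be checked at compile time against the owner's directly granted privileges, so the dynamic form is used here:

```sql
-- Sketch: AUTHID CURRENT_USER makes the procedure run with the
-- privileges (including enabled roles) of the invoking user.
create or replace procedure my_proc authid current_user is
  myvariable varchar2(35);
begin
  -- parsed at run time, so privileges granted via roles are honored
  execute immediate 'select name from scott.pets where rownum = 1'
    into myvariable;
  dbms_output.put_line(myvariable);
end my_proc;
/
```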
THANK YOU John.
My question was specific to the System Privilege ALTER SESSION . Your example is using an Object privilege.
So, parallel ddl is one of those privileges that anyone can set without alter session privileges.
SQL> connect a/a
Connected.
SQL> alter session enable parallel ddl;

Session altered.
|
https://community.oracle.com/message/10624529
|
CC-MAIN-2014-15
|
refinedweb
| 921
| 58.99
|
import "github.com/op/go-logging"
Package logging implements a logging infrastructure for Go. It supports different logging backends like syslog, file and memory. Multiple backends can be utilized with different log levels per backend and logger.
Code:
// This call is for testing purposes and will set the time to unix epoch.
InitForTesting(DEBUG)

var log = MustGetLogger("example")

// For demo purposes, create two backend for os.Stdout.
//
// os.Stderr should most likely be used in the real world but then the
// "Output:" check in this example would not work.
backend1 := NewLogBackend(os.Stdout, "", 0)
backend2 := NewLogBackend(os.Stdout, "", 0)

// For messages written to backend2 we want to add some additional
// information to the output, including the used log level and the name of
// the function.
var format = MustStringFormatter(
	`%{time:15:04:05.000} %{shortfunc} %{level:.1s} %{message}`,
)
backend2Formatter := NewBackendFormatter(backend2, format)

// Only errors and more severe messages should be sent to backend2
backend2Leveled := AddModuleLevel(backend2Formatter)
backend2Leveled.SetLevel(ERROR, "")

// Set the backends to be used and the default level.
SetBackend(backend1, backend2Leveled)

log.Debugf("debug %s", "arg")
log.Error("error")
Output:
debug arg
error
00:00:00.000 Example E error
backend.go format.go level.go log_nix.go logger.go memory.go multi.go syslog.go
const (
	ColorBlack = iota + 30
	ColorRed
	ColorGreen
	ColorYellow
	ColorBlue
	ColorMagenta
	ColorCyan
	ColorWhite
)
var (
	// DefaultFormatter is the default formatter used and is only the message.
	DefaultFormatter = MustStringFormatter("%{message}")

	// GlogFormatter mimics the glog format
	GlogFormatter = MustStringFormatter("%{level:.1s}%{time:0102 15:04:05.999999} %{pid} %{shortfile}] %{message}")
)
ErrInvalidLogLevel is used when an invalid log level has been used.
ConvertColors takes a list of ints representing colors for log levels and converts them into strings for ANSI color formatting
Redact returns a string of * having the same length as s.
Reset restores the internal state of the logging library.
SetFormatter sets the default formatter for all new backends. A backend will fetch this value once it is needed to format a record. Note that backends will cache the formatter after the first point. For now, make sure to set the formatter before logging.
SetLevel sets the logging level for the specified module. The module corresponds to the string specified in GetLogger.
Backend is the interface which a log backend need to implement to be able to be used as a logging backend.
NewBackendFormatter creates a new backend which formats every record that passes through it with the given formatter.
ChannelMemoryBackend is very similar to the MemoryBackend, except that it internally utilizes a channel.
func NewChannelMemoryBackend(size int) *ChannelMemoryBackend
NewChannelMemoryBackend creates a simple in-memory logging backend which utilizes a go channel for communication.
Start will automatically be called by this function.
func (b *ChannelMemoryBackend) Flush()
Flush waits until all records in the buffered channel have been processed.
func (b *ChannelMemory.
func (b *ChannelMemoryBackend) Start()
Start launches the internal goroutine which starts processing data from the input channel.
func (b *ChannelMemoryBackend) Stop()
Stop signals the internal goroutine to exit and waits until it has done so.
Formatter is the required interface for a custom log record formatter.
MustStringFormatter is equivalent to NewStringFormatter with a call to panic on error.
NewStringFormatter returns a new Formatter which outputs the log record as a string based on the 'verbs' specified in the format string.
The verbs:
General:
%{id}        Sequence number for log message (uint64).
%{pid}       Process id (int)
%{time}      Time when log occurred (time.Time)
%{level}     Log level (Level)
%{module}    Module (string)
%{program}   Basename of os.Args[0] (string)
%{message}   Message (string)
%{longfile}  Full file name and line number: /a/b/c/d.go:23
%{shortfile} Final file name element and line number: d.go:23
%{callpath}  Callpath like main.a.b.c...c "..." meaning recursive call ~. meaning truncated path
%{color}     ANSI color based on log level
For normal types, the output can be customized by using the 'verbs' defined in the fmt package, eg. '%{id:04d}' to make the id output be '%04d' as the format string.
For time.Time, use the same layout as time.Format to change the time format when output, eg "2006-01-02T15:04:05.999Z-07:00".
For the 'color' verb, the output can be adjusted to either use bold colors, i.e., '%{color:bold}' or to reset the ANSI attributes, i.e., '%{color:reset}' Note that if you use the color verb explicitly, be sure to reset it or else the color state will persist past your log message. e.g., "%{color:bold}%{time:15:04:05} %{level:-8s}%{color:reset} %{message}" will just colorize the time and level, leaving the message uncolored.
For the 'callpath' verb, the output can be adjusted to limit the printing the stack depth. i.e. '%{callpath:3}' will print '~.a.b.c'
Colors on Windows is unfortunately not supported right now and is currently a no-op.
There's also a couple of experimental 'verbs'. These are exposed to get feedback and needs a bit of tinkering. Hence, they might change in the future.
Experimental:
%{longpkg}   Full package path, eg. github.com/go-logging
%{shortpkg}  Base package path, eg. go-logging
%{longfunc}  Full function name, eg. littleEndian.PutUint32
%{shortfunc} Base function name, eg. PutUint32
%{callpath}  Call function path, eg. main.a.b.c
Level defines all available log levels for log messages.
Log levels.
GetLevel returns the logging level for the specified module.
LogLevel returns the log level from a string representation.
String returns the string representation of a logging level.
type Leveled interface {
	GetLevel(string) Level
	SetLevel(Level, string)
	IsEnabledFor(Level, string) bool
}
Leveled interface is the interface required to be able to add leveled logging.
LeveledBackend is a log backend with additional knobs for setting levels on individual modules to different levels.
func AddModuleLevel(backend Backend) LeveledBackend
AddModuleLevel wraps a log backend with knobs to have different log levels for different modules.
func MultiLogger(backends ...Backend) LeveledBackend
MultiLogger creates a logger which contain multiple loggers.
func SetBackend(backends ...Backend) LeveledBackend
SetBackend replaces the backend currently set with the given new logging backend.
LogBackend utilizes the standard log module.
NewLogBackend creates a new LogBackend.
Log implements the Backend interface.
type Logger struct {
	Module string

	// ExtraCallDepth can be used to add additional call depth when getting the
	// calling function. This is normally used when wrapping a logger.
	ExtraCalldepth int
	// contains filtered or unexported fields
}
Logger is the actual logger which creates log records based on the functions called and passes them to the underlying logging backend.
GetLogger creates and returns a Logger object based on the module name.
MustGetLogger is like GetLogger but panics if the logger can't be created. It simplifies safe initialization of a global logger for eg. a package.
Critical logs a message using CRITICAL as log level.
Criticalf logs a message using CRITICAL as log level.
Debug logs a message using DEBUG as log level.
Debugf logs a message using DEBUG as log level.
Error logs a message using ERROR as log level.
Errorf logs a message using ERROR as log level.
Fatal is equivalent to l.Critical(fmt.Sprint()) followed by a call to os.Exit(1).
Fatalf is equivalent to l.Critical followed by a call to os.Exit(1).
Info logs a message using INFO as log level.
Infof logs a message using INFO as log level.
IsEnabledFor returns true if the logger is enabled for the given level.
Notice logs a message using NOTICE as log level.
Noticef logs a message using NOTICE as log level.
Panic is equivalent to l.Critical(fmt.Sprint()) followed by a call to panic().
Panicf is equivalent to l.Critical followed by a call to panic().
func (l *Logger) SetBackend(backend LeveledBackend)
SetBackend overrides any previously defined backend for this logger.
Warning logs a message using WARNING as log level.
Warningf logs a message using WARNING as log level.
MemoryBackend is a simple memory based logging backend that will not produce any output but merly keep records, up to the given size, in memory.
func InitForTesting(level Level) *MemoryBackend
InitForTesting is a convenient method when using logging in a test. Once called, the time will be frozen to January 1, 1970 UTC.
func NewMemoryBackend(size int) *MemoryBackend
NewMemoryBackend creates a simple in-memory logging backend.
func (b *Memory.
type Record struct {
    ID     uint64
    Time   time.Time
    Module string
    Level  Level
    Args   []interface{}
    // contains filtered or unexported fields
}
Record represents a log record and contains the timestamp when the record was created, an increasing id, filename and line and finally the actual formatted log line.
Formatted returns the formatted log record string.
Message returns the log record message.
Redactor is an interface for types that may contain sensitive information (like passwords), which shouldn't be printed to the log. The idea was found in relog as part of the vitess project.
SyslogBackend is a simple logger to syslog backend. It automatically maps the internal log levels to appropriate syslog log levels.
func NewSyslogBackend(prefix string) (b *SyslogBackend, err error)
NewSyslogBackend connects to the syslog daemon using UNIX sockets with the given prefix. If prefix is not given, the prefix will be derived from the launched command.
func NewSyslogBackendPriority(prefix string, priority syslog.Priority) (b *SyslogBackend, err error)
NewSyslogBackendPriority is the same as NewSyslogBackend, but with custom syslog priority, like syslog.LOG_LOCAL3|syslog.LOG_DEBUG etc.
Log implements the Backend interface.
Package logging imports 17 packages and is imported by 2367 packages. Updated 2018-10-01.
https://godoc.org/github.com/op/go-logging
If you
have read some of the .NET-related articles on exception handling (including
one of mine, "Documenting
Exceptional Developers") then you probably have a pretty good
idea that the rules of the game in .NET are twofold:
1) Unhandled exceptions (when they are thrown) are expensive
and burn up a lot of extra CPU cycles.
2) Do everything you can to prevent
exceptions from happening. Avoid using exceptions
to handle business logic conditions in your code.
Having said this, the next step is "what to do with
exceptions". All developers know that when an application is deployed,
it's almost
never perfect. Sooner or later, some user is going to find a way to break
your app, correct? (After all, isn't that what users are for!) Users
create inputs and other business use conditions that we developers could
never have anticipated, and, uh -- well, "bad things" happen.
More often than not,
users see that nasty looking red and yellow default ASP.NET error page
because we extra-cool developers have never even taken
the time to create a custom error page! What the heck good is a stack
trace to a user? Heck, they can't even use the app correctly - they just
caused an exception with it, right?
Of
course you can easily set up your web.config (and even IIS) to redirect
to a more friendly and Really Useful (a
la "Thomas the Tank Engine") error page using the <customErrors>
element in web.config. I encourage you to do this, and there is plenty
of good documentation on it. However, that is not what this article is
about, and so I don't cover the subject here. The problem with exceptions
for the developer usually still exists: often the error
page doesn't provide enough
information for us to thoroughly and easily nail down a problem in a
production web application. And so we are forced to spend much
more time than we should attempting to figure out "why" it
broke in production when it seems to work just fine in debug mode on
our local development
box. This doesn't have to be.
This article presents some concrete and "really
useful"
steps you can take (including a Registry trick with the Event Log viewer
that you won't find anywhere else) that will enable you to easily zero
in on the exact error along with all the details right
down to the page and exact line of code where your "perfect app" well
-- BLEW UP!
Not only that, but you can easily wire up any of your
web apps to use this little framework simply by adding a few items to
the web.config and the global.asax
and dropping in a little 16K DLL into the /bin folder. The
Really Useful Exception Engine works fine with codebehind
as well as script-only ASP.NET apps.
First, understand that in ASP.NET, the Global Application_Error event
handler receives all
unhandled exceptions (including those you choose to
throw on your own). And while I'm ranting on the subject, the following
is NOT my idea of a legitimately "handled" exception:
Try
' your buggy spaghetti code here
Catch
End Try
-- I really hope, if you have
digested the message correctly, that the above VB.NET snippet doesn't
look familiar! If it does, go back and read (at the least) my other
article linked at the top of this article.
The GetLastError() method of the Server
object returns a reference to a generic HttpException wrapping the original
exception
that was passed from your ASP.NET page to the Application_Error event.
Gain access to the original exception by calling its GetBaseException() method.
This will provide the original exception instance, regardless of how
many layers have been added
to the exception stack. Once Application_Error has
completed, it automatically performs a redirect to your custom error
page that you can set up in web.config. See the MSDN documentation
on how to set this up if you want it.
NOTE: If you are re-throwing an exception
in your catch block after doing some business logic with it, you don't
want
to do this:
catch(Exception e){
// your stuff here
throw(e);
}
Instead, you want to do this:
catch(Exception e){
// your stuff here
throw;
}
If you use the first model, you may be throwing away the
Stack Trace information. In the first example above, you are not rethrowing
e, you are beginning a new exception flow using the same exception object
instance. The C# throw syntax is "throw expr", where expr is
an expression that evaluates to an exception object. It doesn't need
to be a new object, any Exception-derived object can be used, even if
it has been thrown a number of times before.
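The same distinction exists in other languages. As a hedged illustration (Python rather than C#, with invented function names), a bare `raise` inside an `except` block re-raises the active exception with its original traceback intact, the moral equivalent of C#'s `throw;`:

```python
import traceback

def boom():
    raise ValueError("division went wrong")

def middle():
    try:
        boom()
    except ValueError:
        # Bare 'raise' re-raises the active exception and keeps the
        # original traceback -- the analogue of C#'s 'throw;'.
        raise

try:
    middle()
except ValueError as e:
    frames = [f.name for f in traceback.extract_tb(e.__traceback__)]

# The frame where the exception originated is still visible.
print(frames)  # ['<module>', 'middle', 'boom']
```

Bare `raise` is the idiomatic re-throw in Python for exactly the reason described above: it preserves the stack information pointing at where things actually broke.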
What's more important, and what
we'll be dealing with here, is the fact that you can invoke whatever
logging and additional exception-handling code you wish from within
Application_Error. You don't need to use fancy HttpModules. All you
need to do is set a "using" or "Imports" reference
to your ExceptionHandler class library in your Global.asax and call
your method(s).
In writing this ExceptionLogging class library, my
initial objectives were:
1) It should be simple to write and easy to understand.
2) It should be portable (e.g., easy to set up with a minimum of configuration
steps)
3) It should be extensible - that is, you can easily improve on or add
to it.
4) Its initial scope should be to effectively log exceptions and notify
developers that an exception occurred.
5) It should be very easy to look in the event log and get details (e.g.
go to a web page report).
6) It should use a database, not text files. Who wants to look through
text files? Jeesh!
7) You should be able to easily deploy it in a production app, and only
"turn it on" when you need it.
8) ALL exceptions from ALL apps in the enterprise should be logged to
a central database so that admins and other responsible individuals can
access
this information easily.
As mentioned above, I do NOT like "log files". They
are hard to parse, difficult to find things in, and they can get big
really fast and start to slow down the whole works. They are also local
to a specific machine and thus somewhat counterproductive to a development
/ production environment where numbers of dev and admin types are required
by the overall business model to be able to work well together in a team
environment. Databases, on the other hand, provide fast access throughout
the enterprise, are easier to control vis-a-vis security, and the RDBMS
model makes it much easier
to sort, select and report on information. Additionally, scheduled jobs
can perform cleanup or needed replication. So, if you want to write exception
stuff to log files, you can choose to either modify my code, or roll
your own.
The overall simple concept of my exception reporting
engine is as follows:
1) We definitely want to log unhandled exceptions including
basic details to the local event log. If possible, we also want to
identify each exception with a custom ID so that we can create an HTTP
URL link to a custom report page that will query our detailed exception
information out of our database and present it in a nice web - page report,
accessible by anyone with the proper credentials, throughout the enterprise.
2) We want optional email notification to a short list of people who
should know about the problem and are capable of taking fast action to
fix it, preferably (again) with a link to the report page URL.
With this in mind, we need to look at a little and very
much undocumented Registry hack. If you've ever noticed, some of your
event log entries have an actual clickable hyperlink in them that takes
you to the "help and support engine" and if the exception has
a description there, you get to read up on it. Big deal, right? I have "Help
and support"
service turned off on all my boxes, cause all it does is take up extra
memory and resources. And besides, since we as professional developers
already know everything, why would we need Help and Support?
Now open up REGEDIT and look at:
HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows NT\CurrentVersion\Event Viewer
You'll see entries for:
MicrosoftRedirectionURL,
MicrosoftRedirectionProgram, and
MicrosoftRedirectionProgramCommandLineParameters
By simply changing
some of these values, you can tell the Event Viewer to stop using the
custom help and support executable and simply go to a URL of your choosing.
It will append the %s variable, which contains a whole bunch of querystring
information, onto the end of your custom URL, and if you have set a
custom EventId such as the Identity value of a newly inserted database
record, it will happily supply this on the querystring to your reporting
URL so that it can go find the newly inserted database record! (Don't
look for this trick in MSDN, you won't find anything beyond a vague
description of the Registry entries- I had to figure it out through
personal toil and pain). Of course what this means is
that your Event Log entries just became about 10 times more useful,
because now, if you think you found the right exception, all you need
to do is click on the hyperlink and you get the whole report in a nice
web page! And, as we'll see shortly, you can also turn on email notification
- which will provide much the same information, including the useful
hyperlink that zeroes in on the exact exception record from your database!
Oh, and don't worry
- I've included a handy registry script with the downloadable zipfile
below. All you need to do is double-click on it and the modifications
will be made for you automatically.
First, before we
delve into some code, let's take a quick look at what it takes to
configure an ASP.NET web application to use the engine:
1) web.config needs
to have an <appSettings> section with the following entries, which
should be pretty much self-explanatory:
</system.web>
<appSettings>
  <add key="LogErrors" value="true" />
  <add key="dbConnString" value="server=(local);database=WebAppLogs;User id=sa;password=;" />
  <add key="emailAddresses" value="you@yourdomain.com|friend@yourdomain.com" />
  <add key="smtpServer" value="mail.yourdomain.com" />
  <add key="fromEmail" value="yourEmailFrom@yourdomain.com" />
  <add key="detailURL" value="" />
</appSettings>
</configuration>
-- That's it for
the web.config! Easy!
Next, we move to
your Global.asax:
using ExceptionHandler;
. . .
protected void Application_Error(Object
sender, EventArgs e)
{
ExceptionHandler.LogException exc = new ExceptionHandler.LogException();
exc.HandleException(Server.GetLastError().GetBaseException());
}
-- Pretty simple!
Aside from the Registry entries and the database setup (a sample SQL
script is provided) we are now 100% set up! Note that the "LogErrors"
key can be set to "false" -- and it's as if the exception engine didn't
even exist.
So now, let's cruise through my spaghetti
code and see what this thing really does, under the hood:
using System;
using System.Web;
using System.Diagnostics;
using System.Data;
using System.Data.SqlClient;
using System.Web.Mail;

namespace ExceptionHandler
{
    public class LogException
    {
        public LogException() // ctor
        {
        }

        public void HandleException(Exception ex)
        {
            HttpContext ctx = HttpContext.Current;
            string strData = String.Empty;
            int evtId = 0;
            bool logIt = Convert.ToBoolean(
                System.Configuration.ConfigurationSettings.AppSettings["LogErrors"]);
            if (logIt)
            {
                string dbConnString =
                    System.Configuration.ConfigurationSettings.AppSettings["dbConnString"];
                string sForm = (ctx.Request.Form != null) ?
                    ctx.Request.Form.ToString() : String.Empty;
                string sQuery = (ctx.Request.QueryString != null) ?
                    ctx.Request.QueryString.ToString() : String.Empty;
                string referer = (ctx.Request.UrlReferrer != null) ?
                    ctx.Request.UrlReferrer.ToString() : String.Empty;

                strData = "\nSOURCE: " + ex.Source +
                    "\nMESSAGE: " + ex.Message +
                    "\nFORM: " + sForm +
                    "\nQUERYSTRING: " + sQuery +
                    "\nTARGETSITE: " + ex.TargetSite +
                    "\nSTACKTRACE: " + ex.StackTrace +
                    "\nREFERER: " + referer;

                SqlConnection cn = new SqlConnection(dbConnString);
                // The stored procedure is created by the ExceptionLogger.sql
                // script in the download (the name used here is illustrative).
                SqlCommand cmd = new SqlCommand("InsertLogItem", cn);
                cmd.CommandType = CommandType.StoredProcedure;
                cmd.Parameters.Add(new SqlParameter("@Source", ex.Source));
                cmd.Parameters.Add(new SqlParameter("@Message", ex.Message));
                cmd.Parameters.Add(new SqlParameter("@Form", sForm));
                cmd.Parameters.Add(new SqlParameter("@QueryString", sQuery));
                cmd.Parameters.Add(new SqlParameter("@TargetSite", ex.TargetSite.ToString()));
                cmd.Parameters.Add(new SqlParameter("@StackTrace", ex.StackTrace));
                cmd.Parameters.Add(new SqlParameter("@Referer", referer));
                SqlParameter outParm = new SqlParameter("@EventId", SqlDbType.Int);
                outParm.Direction = ParameterDirection.Output;
                cmd.Parameters.Add(outParm);

                try
                {
                    cn.Open();
                    cmd.ExecuteNonQuery();
                    // The output parameter holds the Identity of the new row.
                    evtId = Convert.ToInt32(cmd.Parameters[7].Value);
                }
                catch (Exception exc)
                {
                    EventLog.WriteEntry(ex.Source,
                        "Database Error From Exception Log! " + exc.Message,
                        EventLogEntryType.Error, 65535);
                }
                finally
                {
                    cmd.Dispose();
                    cn.Close();
                }

                // The custom EventId makes the Event Viewer hyperlink
                // point at the newly inserted database record.
                EventLog.WriteEntry(ex.Source, strData,
                    EventLogEntryType.Error, evtId);

                string strEmails =
                    System.Configuration.ConfigurationSettings.AppSettings["emailAddresses"].ToString();
                if (strEmails.Length > 0)
                {
                    string[] emails = strEmails.Split(Convert.ToChar("|"));
                    MailMessage msg = new MailMessage();
                    msg.BodyFormat = MailFormat.Text;
                    msg.To = emails[0];
                    // Cc in System.Web.Mail is a single semicolon-separated list.
                    if (emails.Length > 1)
                        msg.Cc = String.Join(";", emails, 1, emails.Length - 1);
                    msg.From =
                        System.Configuration.ConfigurationSettings.AppSettings["fromEmail"].ToString();
                    msg.Subject = "Web application error!";
                    string detailURL =
                        System.Configuration.ConfigurationSettings.AppSettings["detailURL"].ToString();
                    msg.Body = strData + detailURL + "?EvtId=" + evtId.ToString();
                    SmtpMail.SmtpServer =
                        System.Configuration.ConfigurationSettings.AppSettings["smtpServer"].ToString();
                    SmtpMail.Send(msg);
                }
            }
        }
    }
}
If you walk through the above code, which is very "linear" in
nature, you should have no difficulty understanding what is going on.
Now here are
the steps to set up the sample app in the download below, so that you
can begin to use and / or customize it: The download zipfile contains
code that adds an additional "LogDateTime" field in the table to make
searching and sorting easier for any reporting pages you may wish to
create.
1) Unzip the files into a new folder under your wwwroot called "ExceptionLogger".
2) Go Into IIS Manager and set this to be a new Virtual Directory and
make it an IIS Application. (The actual ExceptionHandler class project
is in a subfolder).
3) In Windows Explorer double-click the EventLogEntries.reg Registry
script to make the Registry modifications described above (You can
export the existing keys first if you want so you can restore them). If
you want a custom IIS app for your reporting page(s), you'll need to
make those changes in the Registry script before you
execute it.
4) Open Up Sql Query Analyzer and connect to your SQL Server database.
Load the ExceptionLogger.sql script and execute it.
5) Make any custom modifications to the web.config that pertain to
your email and other settings. Make sure that your IIS SMTP Server
is running.
NOTE: If your app is running under the default "machine" account in
machine.config, that's the weak ASP.NET account, which may not have write
permissions to the Event log. Either grant it the permission, change
the userName in the processModel section of machine.config to "system",
or run your code under impersonation of a stronger account with the correct
setting in your web.config.
Your Really Useful Exception Engine is now ready. If you request the
Webform1.aspx test page, you'll see that it attempts a division by zero
and creates
an unhandled
exception.
If
you bring up your Event Viewer and look in the Application log, you'll
see the event log entry for this. If you scroll to the bottom of the
entry, clicking on the help URL will bring up the Report.aspx page and
display your custom exception database record information! You should
also get an email with a link to the same page and info.
Your goal when you put your app through testing and QA should be to
turn on the Really Useful Exception Engine, and over an extended testing
period, see NO ENTRIES. Then, turn it off when you deploy your app into
production. If there is ever a user report of a "BTH" (Bad thing happened),
you can turn it back on temporarily to help diagnose the issue so it
can be fixed quickly.
Finally, in the interest of completeness, there is one issue you should
be aware of. The eventlog classes wrap the OS API and the eventID is
an integer with a maximum value of 65535. Once your table gets over
that number of rows, you will need to drop and recreate the LogItems
table, or at least delete the records and re-seed the EventId column
back to
"1". The most elegant way to do this is to test in the stored
procedure itself, and issue a TRUNCATE TABLE command which reseeds the
Primary Key back to start at "1". I've included code in the sproc that
will do this in the download below. Of course, only developers with
Obsessive Compulsive Disorder are ever likely to rack up that many
exception records in their lifetimes, but - you never know, do ya'?
Download the code accompanying this article
http://www.eggheadcafe.com/articles/20030816.asp
A twitter discussion on build times and source-file sizes got me interested in doing some analysis of Chromium build times. I had some ideas about what I would find (lots of small source files causing much of the build time) but I inevitably found some other quirks as well, and I’m landing some improvements. I learned how to use d3.js to create pretty pictures and animations, and I have some great new tools.
As always, this blog is mine and I do not speak for Google. These are my opinions. I am grateful to my many brilliant coworkers for creating everything which made this possible.
The Chromium build tools make it fairly easy to do these investigations (much easier than my last build-times post), and since it’s open source anybody can replicate them. My test builds took up to 6.2 hours on my four-core laptop but I only had to do that a few times and could then just analyze the results.
I did my tests on an October 2019 version of Chromium’s code because that gave me the most flexibility about which build options to use. I used a 32-bit, debug, component (multi-DLL, for faster linking) build, with NACL disabled, full debug information, with reduced debug information for blink (Chromium’s rendering engine) as my base build. In other words, these are the build arguments I used:
target_cpu = "x86"        # 32-bit build, maybe faster?
is_debug = true           # Extra checks, minimal optimizations
is_component_build = true # Many different DLLs, default with debug
enable_nacl = false       # Disable NACL
symbol_level = 2          # Full debug information, default on Windows
blink_symbol_level = 1    # Reduced symbol information for blink
This is a good set of options for developing Chromium as it gives full debuggability together with a fast turnaround on incremental builds (when just a few source files are modified between builds).
Enough talk, let’s see some data
Let’s start with some pretty pictures. The graph below shows the relationship between the number of lines of code in source files and how long it took to compile them. I clamped the really-big and really-slow files and zoomed in to make it easier to see the patterns and… there aren’t any (live diagrams here):
In this diagram and the ones that follow the colors represent when in the build a file was compiled, with blue files happening first, then green, and red files happening last.
The .csv files I generated from my builds have a wealth of data, easily explorable with the supplied scripts:
python ..\count_costs.py windows-default.csv
30137 files took 21.453 hrs to build. 11.856 M lines, 3611.6 M dependent lines
A bit of analysis shows that it makes sense that there is no correlation between source-file length and compile time. The 30,137 compile steps that I tracked consumed a total of 11.6 million lines of source code in the primary source files. However the header files included by these source files added an additional 3.6 billion lines of source code to be processed. That is, the main source files represent just 0.32% of the lines of code being processed.
It’s not that Chromium has 3.6 billion lines of source code. It’s that header files that are included from multiple source files get processed multiple times, and this redundant processing is where the vast majority of C++ compilation time comes from. It’s not disk I/O, it’s repeated processing (preprocessing, parsing, lexing, code-gen, etc.) of millions of lines of code (note the 100% CPU usage, maintained for most of the build, due to ninja’s excellent job scheduling).
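As a sanity check, the headline percentage falls straight out of the count_costs.py totals quoted above (the figures below are copied from that summary; the division gives ~0.33%, matching the quoted 0.32% to rounding):

```python
# Figures from the count_costs.py summary of the default build.
main_lines = 11.856e6       # lines in the primary source files
dependent_lines = 3611.6e6  # lines pulled in via header files
steps = 30137               # compile steps tracked

# Share of all processed lines that come from the main source files.
share = 100.0 * main_lines / (main_lines + dependent_lines)
print(f"main source files: {share:.2f}% of lines processed")  # 0.33%

# Average redundancy: each compile step drags in ~120K lines of headers.
per_step = dependent_lines / steps / 1000
print(f"{per_step:.1f}K dependent lines per compile step")  # 119.8K
```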
But what about precompiled header files? Couldn’t they be used to reduce the overhead of these header files? Well, it turns out that Chromium does use precompiled header files in some areas and the 3.6 billion line number is after those savings have been factored in. More on that later.
Patterns? You want patterns?
Let’s create another chart, this one showing the relationship between the lines of code in all of the include-file dependencies and the compile times. Now we’ve got some patterns (live diagrams here):
Is it just me or does that look like a distorted version of the classic Hertzsprung-Russell diagram of stellar luminosities versus temperatures? No? Just me?
Moving on…
Looking at this graph I can see at least four patterns, numbered here:
- The bottom of the chart shows a thick band of points curving up and to the right (mostly red, but all colors appear) – the main sequence. This area clearly shows that more lines in dependencies generally leads to longer compile times. The bottom of this band curves up gradually, looking like it’s almost an O(n^2) equation, at least for the minimum cost, although the average and maximum costs look closer to linear.
- Starting at about the 4.0 second mark on the y-axis (in green) there is a significant set of files that have a minimum compile time of about four seconds, regardless of include size. There are many files in this region whose source-files plus all of their includes are less than 200 lines and yet take 4.0 seconds or longer to compile.
- There is an odd vertical structure from 200,000 to 240,000 lines of includes (in greenish blue).
- There is also a perfectly straight vertical line of points at 343,473 lines of includes and around 12 to 13 seconds of compile time (below the digit ‘4’, in green).
The colorful graphs of compile times (live versions available here) are interactive. This means that identifying which files are associated with each of the patterns is as simple as moving the mouse around in that area.
Structures 2 and 4 – Precompiled headers
It turns out that structures 2 and 4 are both related to precompiled header files. Structure 2 is a set of files that use precompiled header files. This leads to a minimum compile cost of about 4.0 seconds (loading precompiled header files in clang-cl is fairly expensive) and a very low dependencies line count – apparently the headers that are precompiled don’t count as dependencies in this context.
The vertical line in structure four is from creating precompiled header (.pch) files. It is perfectly vertical because every one of the precompilations is compiling exactly the same set of headers, all from precompile_core.cc, which includes (through command-line magic) precompile_core.h. For some reason this file gets compiled 59 times, each time creating a 76.9 MB .pch file. This got even worse for a while but has been mitigated – see below.
In short, precompiled headers can be very helpful, but they come with their own costs. In this case there is the 900+ seconds to redundantly build the .pch files, and then the ~16,000 seconds to load the large .pch file more than 4,000 times, plus the additional dependencies.
Note: if we disable precompiled headers entirely the build gets slightly slower. And the main source files go from 0.32% of the lines of code being processed to just 0.125%! (previously reported as 0.22% due to omission of system header files), with the includes adding up to 9.3 billion lines of code.
By March 2020 the number of copies of the blink .pch file had grown to 67 with each one now 90 MB. I ran some experiments with shrinking the blink precompiled header files and found that if I reduced precompile_core.h dramatically I could:
- Cut out almost 90% of the cost of creating the .pch files
- Cut out almost 95% of the 5.5 GB size of the .pch files
- Slightly lower the average compile time for Blink source files
- Reduce accidental dependencies – translation units depending on headers that they don’t need
Those improvements were good enough that I was able to get a change landed to reduce what precompile_core.h includes.
I got the cost-reduction numbers above from a build-summarizing script I wrote, but when I created the graph-creation tools for this blog post it made sense to apply them to this change, to better visualize the improvements, so I patched the precompiled header change into my old Chromium build. And, I realized that I could animate the compile-times and the number of include lines for each target, to make a movie showing the transition from old to new. In reality the change was a jump from one state to the other, but showing it as motion makes it easier to see patterns.
In this video (which just shows source files in Blink, to reduce the noise) you can see the files which create the precompiled-header files (pattern 4) moving down and to the left, because they are including fewer files and compiling faster. You can also see the many files which consume the precompiled header files (pattern 2) moving to the right because they now have more header files to consume – recall that precompiled headers aren’t counted – and moving both up and down (both longer and shorter compiles).
The animation makes it crystal clear that this change didn’t help all blink source files. Some got slower, so I might have to try using the original precompile_core.h for some files – this blog post sure triggers a lot of work!
In addition to visualizing the savings I (and you) can use one of my scripts to measure, for instance, the before/after costs of creating the precompiled header files:
python ..\count_costs.py windows-default.csv *precompile_core.cc
59 files took 0.193 hrs to build. 0.000 M lines, 20.3 M dependent lines
python ..\count_costs.py windows-pchfix.csv *precompile_core.cc
59 files took 0.021 hrs to build. 0.000 M lines, 2.6 M dependent lines
Our full results with this change patched in to the old build now look like this (live diagrams here):
The two precompile-related patterns are now gone which just leaves us with the main sequence and the bluish-green tower (pattern #3) at just past 200,000 include lines. A bit of spelunking shows that the tower is mostly v8 files, especially source files generated by the build, which all include expensive-to-compile header files. In general generated source files can consume much compilation time. I hope to make some improvements there, but that will have to wait until after the blog post:
python ..\count_costs.py R710480-default.csv gen\*.cc
5662 files took 4.065 hrs to build. 3.027 M lines, 836.7 M dependent lines
What if we had fewer source files…
My original belief was that having fewer source files would improve build times, by reducing redundant processing of header files. Wouldn’t it be nice if there was some way to test this? It turns out that there is. A classic technique for doing this is to treat your .cpp files like include files. That is, instead of compiling ten .cpp files individually, generate a .cpp file that includes them, and compile those generated files. This technique is sometimes called a unity build. The generated files look something like this:
#include "navigator_permissions.cc"
#include "permissions.cc"
#include "permission_status.cc"
#include "permission_utils.cc"
#include "worker_navigator_permissions.cc"
If the included C++ files share a significant number of header files – usually the case if the files are related – then a significant amount of work can be avoided.
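As a sketch (this is not Chromium's actual jumbo generator, and the file names are illustrative), a unity-file generator can be as simple as batching source files and emitting an #include line for each:

```python
def make_unity_units(sources, merge_limit=50):
    """Group .cc files into generated unity files, merge_limit per unit."""
    units = []
    for i in range(0, len(sources), merge_limit):
        batch = sources[i:i + merge_limit]
        body = "\n".join('#include "%s"' % s for s in batch) + "\n"
        units.append(("unity_%d.cc" % len(units), body))
    return units

sources = ["navigator_permissions.cc", "permissions.cc",
           "permission_status.cc", "permission_utils.cc",
           "worker_navigator_permissions.cc"]
for name, body in make_unity_units(sources, merge_limit=3):
    print("//", name)
    print(body)
```

The real system also has to honor constraints (files excluded from merging, per-target grouping), but the core idea is just this batching.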
Some people have argued against this by pointing out that if you #include all of your source files into one then your incremental build times get worse. Well. Yeah. So don’t do that. There’s a lot of middle ground between compiling everything separately and compiling everything in one translation unit.
For a while Chromium had an option to do this, called jumbo builds, created by Daniel Bratell. This system defaulted to trying to #include 50 source files in each generated file (subject to constraints) and this was configurable. These jumbo builds significantly reduced the time to do full rebuilds of Chromium on machines with few processors. For incremental builds and massively parallel builds the benefits of jumbo builds were lower.
I decided to do three different jumbo builds with jumbo_file_merge_limit set to three different values. I would have done more small numbers but 2-4 failed due to command-line limits that I didn’t feel like addressing.
The graph below shows four points which are, left to right, merge amounts of 50, 15, 5, and the default build. The graph shows how the total number of hours of compile time goes down as the number of translation units compiled is reduced.
The downwards curve of the graph suggests that if we can reduce the number of translation units to 10,000 then the compile times will hit zero but I would advise readers not to trust that extrapolation.
One of the common complaints about jumbo/unity builds is that, by glomming many files together, they make individual compiles take longer. Let’s examine that:
python ..\count_costs.py R710480-default.csv
30137 files took 21.453 hrs to build. 11.856 M lines, 3611.6 M dependent lines
Averages: 393 lines, 2.56 seconds, 119.8 K dependent lines per file compiled
So, our default compile takes an average of 2.56 seconds per source file. What of our jumbo build with up to five C++ files per compilation:
python ..\count_costs.py R710480-jumbo05.csv
15880 files took 10.744 hrs to build. 5.925 M lines, 1905.0 M dependent lines
Averages: 373 lines, 2.44 seconds, 120.0 K dependent lines per file compiled
Uh. Wait a minute. We’ve almost cut the number of files compiled in half, and the average compile time has… dropped?
No, this isn’t an error. This happens because jumbo was applied mostly to expensive-to-compile files. Since most of their cost is header files the combining of them barely increased their compile cost. Since there are now fewer expensive files the average cost drops. It’s a miracle! It turns out you have to go to the 84th percentile before jumbo-5 files take longer to compile, and even the most expensive file only takes 42% longer to compile. There is such a thing as a free lunch. In addition jumbo linking may be faster due to reduced redundant debug information.
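Those per-file averages can be cross-checked directly from the totals (numbers copied from the two count_costs.py summaries above):

```python
# Totals from the count_costs.py output for the two builds.
default_hours, default_files = 21.453, 30137
jumbo5_hours, jumbo5_files = 10.744, 15880

# Average seconds of compile time per translation unit.
default_avg = default_hours * 3600 / default_files
jumbo5_avg = jumbo5_hours * 3600 / jumbo5_files
print(f"default: {default_avg:.2f} s/file")  # 2.56 s
print(f"jumbo-5: {jumbo5_avg:.2f} s/file")   # 2.44 s
```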
This all looks great, but, there are problems.
The main one is that when you #include a bunch of source files together then the compiler treats them as one translation unit, with one anonymous namespace, but the programmers writing the code see each source file as independent. This caused obscure compiler errors in Chromium as identically named “local” functions and variables conflicted, but only in the jumbo configuration. The jumbo build effectively meant we were programming in an odd dialect of C++ with surprising rules around global namespaces.
Jumbo builds (if taken to excess) can also make incremental builds slower – because the compile times for a large batch of source files can be non-trivial – and more tweaks are needed to work around this. Even though the average is better there are a bunch of 99th-percentile files which take longer to compile, especially if jumbo_file_merge_limit is set too high.
Jumbo builds also create additional coupling at link time as “unused source files” get linked in and then require all of their dependencies as well. Without care this can lead to shipping binaries that are bloated with test code, or surprising link errors.
Additionally, Google’s massively parallel goma build never benefited from jumbo. So, jumbo builds were deemed too much of a hack and were turned off. I was only able to use it for this post by syncing to the commit just before it was disabled – and I had to fix two name conflicts in anonymous namespaces to get Chromium to compile with it.
Reductio ad absurdum
The logical extension of source files with little code and lots of header files would be source files with zero code and lots of header files. But surely such files would never exist. Right?
Well…
While working on this I noticed that compiling the source files generated by mojo (Chromium’s IPC system) took quite a while. While poking at these I found that 680 of the generated C++ files (21% of the total) contained no code. Due to the size of the header files these no-code C++ files were collectively taking about twenty minutes of CPU time to build! I landed a change that detected this situation and removed the #includes when no code was detected. This was a simple change that reduces Chromium’s build time significantly in absolute terms (four to five minutes of elapsed time on a four-core laptop) but as a percentage (~1.6%) barely moves the needle.
Now that I have the scripts needed to create videos to show compile-time improvements I figured I might as well animate this one as well – you can see some of the source-files racing towards the origin (tested on a March 2020 repo):
Why so long?
My initial twitter guess was that build times grow roughly in proportion to the square of the number of translation units. If we assume a large project with N classes, each with a separate .cc file and .h file then the number of compile steps is N. If 10% of the header files are used (often indirectly) by each .cc file then our compile cost is N * 0.10 * N, which is O(N^2). QED.
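Written out as a sketch (the 10% inclusion fraction is the assumption above, not a measured value), the model looks like this:

```python
# Sketch of the quadratic build-cost model: N translation units, each
# (indirectly) including 10% of the project's N header files, so the
# total number of header files parsed grows as O(N^2).
INCLUDE_FRACTION = 0.10

def total_headers_parsed(n_translation_units):
    return n_translation_units * INCLUDE_FRACTION * n_translation_units

for n in (1_000, 10_000, 30_000):
    print(n, total_headers_parsed(n))

# Tripling N (10,000 -> 30,000) makes the build ~9x as expensive.
print(total_headers_parsed(30_000) / total_headers_parsed(10_000))
```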
Other work in this area
I am far from the first person to try their hand at investigating Chromium’s build times. This is hardly surprising because Chromium’s build times have always been daunting, and different visualizations often reveal different opportunities.
The color coding scheme (and the basis of my graphing code) that I used in my plots comes from this excellent 2012 post by thakis.
Much more recently an alternate denominator – tokens instead of lines of code – was proposed in this January 2020 document. I chose to stick to lines of code because everyone knows how to measure them and the correlations in Chromium are similar.
For understanding why individual source files take a long time to compile we now have the excellent -ftime-trace option for clang which emits flame charts showing where time went. This flag can be set in Chromium’s build by setting compiler_timing = true in the gn args. When this is done a .json file will be created for each file compiled, and these can be loaded into chrome://tracing, or programmatically analyzed. In particular the author of -ftime-trace has created ClangBuildAnalyzer which can do bulk analysis of the results.
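As a sketch of the programmatic-analysis option, the snippet below aggregates many per-file traces to find the most expensive headers. The “Source” event name and args layout are assumptions based on clang’s Chrome-trace output, so verify them against your own trace files:

```python
# Hedged sketch: aggregate clang -ftime-trace output by summing the
# duration of "Source" (header/source parse) events across the per-file
# .json traces in a build output directory.
import json
from collections import Counter
from pathlib import Path

def tally(trace_dir):
    totals = Counter()
    for path in Path(trace_dir).glob("**/*.json"):
        events = json.loads(path.read_text())["traceEvents"]
        for e in events:
            if e.get("name") == "Source":          # parse event (assumed name)
                header = e.get("args", {}).get("detail", "?")
                totals[header] += e.get("dur", 0)  # microseconds
    return totals

# Usage (hypothetical output directory):
#   for header, us in tally("out/Default").most_common(20):
#       print(f"{us / 1e6:8.2f} s  {header}")
```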
This Chromium script looks for expensive header files in a Chromium repo. It could potentially be combined with -ftime-trace for more precise measurements.
ninjatracing converts the last ninja build’s build-step timing into a .json file suitable for loading into chrome://tracing to visualize the parallelism of the build.
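The core of that conversion is small; here is a hedged sketch assuming the tab-separated v5 .ninja_log format (start ms, end ms, mtime, output, command hash):

```python
# Sketch of the idea behind ninjatracing: parse .ninja_log (v5 format
# assumed: start_ms, end_ms, mtime, output, hash, tab-separated) and
# emit chrome://tracing "complete" events for each build step.
import json

def ninja_log_to_trace(log_text):
    events = []
    for line in log_text.splitlines():
        if line.startswith("#"):            # "# ninja log v5" header
            continue
        start_ms, end_ms, _mtime, output, _cmd_hash = line.split("\t")
        events.append({
            "name": output,
            "ph": "X",                       # complete event
            "ts": int(start_ms) * 1000,      # tracing wants microseconds
            "dur": (int(end_ms) - int(start_ms)) * 1000,
            "pid": 0, "tid": 0,
        })
    return json.dumps({"traceEvents": events})
```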
post_build_ninja_summary.py is designed to give a twenty line summary of where time went in a ninja build. If the NINJA_SUMMARIZE_BUILD environment variable is set to 1 then autoninja will automatically run the script after each build, as well as displaying other build-performance diagnostics. This script was enhanced recently to allow summarizing by arbitrary output patterns.
There is an ongoing project to improve Chromium build times by making more use of mojom forward declarations instead of full definitions wherever possible. This is tracked by crbug.com/1001360.
There is an ACM paper (temporarily freely accessible) discussing unity/jumbo builds in WebKit in great detail.
Reproducing my work
The scripts and data that I created for this blog post are all on github. They simply leverage ninja and gn’s results to create .csv files, and allow easy queries of the .csv files. The web page source is available, with the live page itself here. Click on any point on a graph to save it as a .png, and hover over it to see details on individual source files.
But it’s not science unless other people can reproduce the results. In order to follow along you need to get a Chromium repo (not a trivial task, but possible, just follow these instructions). Then, you need to run one or both of the supplied batch files (consider running them manually since some steps may be error prone, take 5+ hours, and consume dozens of GB of disk space):
- test_old_build.bat – this batch file checks out an old version of Chromium’s source, patches it so that jumbo works, and then builds Chromium with five different settings. Expect this to take about 20 hours on a four-core machine with lots of memory and a fast drive.
- test_new_build.bat – this batch file assumes that you are synced to a recent version of Chromium’s source, it patches in a change so that system headers are tracked and then builds Chromium, and then does incremental builds with and without the mojo empty-file fix. Expect this to take about 13 hours on a four-core machine with lots of memory and a fast drive.
All of my test builds use the "-j 4" option to ninja, for four-way build parallelism. My laptop has four cores and eight logical processors, and ninja would default to ten-way parallelism, but I wanted each compile process to have a core to itself, to minimize interference. You should adjust this setting based on your particular machine and scientific interests. Using all logical processors with "-j 10" instead of "-j 4" makes my builds run about 1.28 times as fast.
If you don’t want to spend hours getting a Chromium repo and building multiple variants you can still do some Chromium build-time analysis. Just clone the repo and you can run count_costs.py on various .csv files with varying filters to see which files are the worst. If you find anything interesting, let me know.
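For example, a percentile summary like the one count_costs.py prints can be recomputed from a costs .csv with a few lines of standard-library Python (the column name "t_sec" is an assumption; adjust it to match the actual .csv header):

```python
# Sketch: compute a min/percentile/max summary of per-file compile
# times from a costs .csv. The time column name is an assumption.
import csv

def summarize(csv_path, column="t_sec"):
    with open(csv_path, newline="") as f:
        times = sorted(float(row[column]) for row in csv.DictReader(f))
    def pct(p):
        # Nearest-rank style percentile, clamped to the last element.
        return times[min(len(times) - 1, int(p / 100 * len(times)))]
    return {"min": times[0], "50%ile": pct(50), "90%ile": pct(90),
            "99%ile": pct(99), "max": times[-1]}
```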
And, if you want to fix any of the issues, Chromium is open source.
Issues found while writing this
- Empty files generated by mojo – crbug.com/1054626, fixed in this change
- Precompiled files are too big, causing wasted time creating them, wasted time loading them, and 5+ GB of wasted disk space – crbug.com/1061326, fixed in this change
- Precompiled files generated redundantly – crbug.com/1060840. This was reported a long time ago. Since then the number of redundant builds has increased, but the change to shrink precompile_core.h has made this redundant building less critical.
- Clang loads precompiled files slowly – this has been known for a while and is discussed on crbug.com/672115 and there is ongoing work on a patch to improve this.
- Windows Photos can’t correctly display images with alpha. I tweeted about that here and included a link to the bad photo here.
Conclusions
- Precompiled header files have tricky tradeoffs that are difficult to assess.
- Chromium’s average source-file size is smaller (393 lines) than I expected.
- If you are creating a large project and build times are important to you then the most important thing you can do is to prohibit small source files.
Reddit discussion is here and (mostly) here.
Twitter discussions are here, here, here, and various other places.
Consider looking at ClangBuildAnalyzer as well. It will aggregate the .json files from -ftime-trace to report other statistics. For example, I discovered that almost all of my unit test compile time is spent on std::tuple compilations.
Ah – excellent. I added a link to that and I’ll have to try it.
This blog post really has made too many to-do items for me…
On Minecraft, we created a unity build to experiment with build times (though ours are much lower than Chromium’s anyway). In addition to the gotchas you mention above, we ran into issues with #defines being on and off in unexpected places, causing TUs to behave very strangely.
In my experience, if you offer both a unity build and non-unity build, then your unity build is going to be constantly broken, and in unexpected ways, as devs who use the ‘normal’ build accidentally break the unity build. For that reason, our shipping builds never were built with the unity flag enabled (except for one platform that had some special limitations), and since we didn’t ship that way, and saw differences in behavior constantly between unity and non-unity builds, the unity builds fell out of favor pretty quickly.
I’m sure a happy medium could be found, but I suspect it would take some non-trivial investment in the build pipeline, along with an ongoing maintenance tax keeping the two flavors of builds in ‘sync’
Unreal Engine supports toggling unity build whenever you want for individual modules or the entire engine, and it works very well. Maybe worth looking at how they’ve done it?
Mozilla avoided breaking Unified builds by making all the automation/integration builds that happen on code checkin be unified builds, and all our test coverage and manual testing, and release builds are and have been for a very long time unified. We also have a habit of avoiding “generic” defines like “DEFAULT” or “PREFIX”. In some cases we add some #undefs at the start of a file. And we only combine files in the same source directory.
Some imported libraries (zlib for example) have unified turned off because they don’t have include-guards.
Unity builds have one hidden gotcha – they can lead to pretty funny ODR violations which are detected only at runtime… Oops, the wrong function got called; sorry, don’t give functions the same names even in different files… The real solution is switching to a language designed to work on modern machines and optimized for programmer productivity… and it is not Rust, sorry… Unfortunately very few language designers and language developers care about these things…
Mozilla uses unified builds exclusively now (based on directory); for a long time we maintained both, but for most of that time unified builds were the standard.
Occasionally we’d break something. Typically someone using unified didn’t realize some other file in the directory had already included foo.h and so didn’t include it – which is fine, until the directory structure changes/new files are added/etc, and the unify list changes and suddenly it’s not in a unit that already included foo.h. Note that this can and does happen regardless of whether you support both. Similar things can happen for defines, as you note (and conflicts). Generally these are easy to resolve, and happen when someone lands a patch that adds/removes/moves files.
To resolve conflicts, we typically have a list of files in each moz.build file that shouldn’t be unified; something like:
webrtc_non_unified_sources = [
    'trunk/webrtc/common_audio/vad/vad_core.c', # Because of name clash in the kInitCheck variable
    'trunk/webrtc/common_audio/vad/webrtc_vad.c', # Because of name clash in the kInitCheck variable
    'trunk/webrtc/modules/audio_coding/acm2/codec_manager.cc', # Because of duplicate IsCodecRED/etc
]
Note that most directories/moz.build files don’t have any.
I’m trying a -j 4 build to compare; I imagine our source trees are roughly comparable in size and complexity. Normally a build on my desktop here (14×2 core intel i9 7940 IIRC) takes around 15 minutes; and a fair bit of that is gated by single-threaded rust library link optimization (several minutes of it). Touching a typical c++ file and rebuilding (when you know there are no js/buildconfig/test changes) takes perhaps 15-30 seconds, most of that link time. We have a lot of Rust code now. In theory, 4 cores should be ~1/7th as fast (modulo hyperthread overhead), so you’d think circa maybe 1.25-1.5 hours (since the single-threaded rust stuff and linking aren’t affected by cores). Also note it’s not a laptop and has higher clockrates and better cooling.
Ok, with -j 4 a build took 44:18, so faster than I expected (assuming the -j 4 worked 100% correctly – it was WAY less than 100% CPU every time I looked). OS reported avg CPU utilization was 388%, which matches -j 4 pretty well.
Great post, thanks. I remember reading somewhere that unity builds were introduced in Chromium to ease the pain of outside contributors without any access to goma. Guess there are too many conflicts.
I wonder what happened with that.
Great post!
A few more resources on build performance:
A very interesting analysis, and thanks for making the data available.
Is a list of the top-level includes for each source file available? If so, a regression analysis can be used to estimate the contribution of each header to compile time.
One I did earlier:
That would be an interesting project. The -ftime-trace data could be used to make that more accurate and more detailed. I hope to spend more time on that sort of work. Ultimately I want to focus on investigations that are most likely to lead to improvements, but that is difficult to predict.
Thanks for the post and the scripts! Much appreciated.
I successfully hacked your scripts to work on our open source project (and to run on macOS). I am just starting to dig into the data a bit and have already addressed a small issue with one file in our code.
My first thought – because I know this is an issue with our project – was an analysis of headers & includes as well. It’s a bit of a mess – many, many includes – and I’d like to target the ones that cost the most to compile and/or are processed most frequently.
Is there a way to do this with the ninja data or would we need to get it from the compiler like you mention? (I’m trying to use Apple’s clang and avoid installing new compilers…)
The ninja data just tells you what files were included, not how long they (and their descendants) took to process. For that you should look at -ftime-trace and ClangBuildAnalyzer (the aggregator for -ftime-trace).
Even that has limitations because if Foo.h includes Bar.h then it will be “charged” for that cost, unless Bar.h was previously included independently. Which is to say that assigning meaning to the numbers can be challenging. But, I am certain it will help.
I’m glad you found the scripts useful. I’d be curious as to what the issue you found was, and if you create any visualizations (or animations!) please share.
Thanks Bruce! That makes sense.
I have not created any visualizations yet – just skimming the CSV.
I sorted the CSV by time and looked at the top few.
One of them (#14) was taking 7.3 seconds and had 1264 deps.
I took a look at the file and removed a bunch of unnecessary includes. After re-compiling, it dropped to #90, taking 3.2 seconds with 489 deps. A quick win for a few minutes work and I haven’t even dug into the data yet.
I think, because of the nature of the code base, there are probably a handful of headers that are killing us – and one cpp file that dominates that we can’t do much about because it’s 3rd party (see max below).
count_costs (I have 8 cores):
442 files took 0.335 hrs to build. 0.295 M lines, 64.9 M dependent lines
Averages: 668 lines, 2.73 seconds, 146.7 K dependent lines per file compiled
min: 0.0, 50%ile: 2.1, 90%ile: 4.3, 99%ile: 12.4, max: 131.2 (seconds)
Thanks for the detailed post, and the view into tools to help manage truly large projects.
I’m the one on our team who, when build times creep up, sifts through our code base removing redundant #includes, converting to forward declarations, etc.
Selfishly, the next time someone complains to me that our builds are too slow, perhaps I will point them to this post to show them what long build times really are!
Oh, and “Hi, Randell!” Great to see your name pop up here.
If you just clean up after people they’ll never start keeping the place tidy themselves.
I don’t just clean up. Each round is a teaching opportunity. Some of our developers are fairly new to C++.
The primary rule is, “Always think twice before #including a header in another header.”
Next stop: C++ modules 🙂
Would be interesting to see numbers.
Regarding clang PCH performance, there seems to be another patch on top of the already mentioned one:
All the projects I’m involved with exclusively use Unity builds, which eliminates any problems with maintaining two configurations and also means we build the same way that we ship.
To get around issues with iteration time with large unity files, we compile locally modified files separately. We use FASTBuild (which I am the author of) to manage the unity creation (automatically based on files on disk) and also to automatically manage the exclusion of locally modified files (writable when checked out from source control).
All this means we can have fairly large Unity files (30 seconds+ each to compile) while still having great iteration times.
The setup we have now is literally over 10x faster than before, and I had a similar experiences (with all the same stuff in place) at my previous employers.
It seems for such large projects there is no way around a build cluster, especially if you are pulling from upstream regularly.
WebKit also uses unified builds when using CMake or Xcode; without them WebCore takes forever to compile. (With it enabled, it just takes a really long time 😉 ) I think there’s a general ban on “using namespace” at the global level to prevent issues when running the build. There’s a relevant paper, too: (some of the authors work directly on WebKit).
CS::SndSys::SoundCyclicBuffer Class Reference
An implementation of a cyclic buffer oriented for sound functionality. More...
#include <csplugincommon/sndsys/cyclicbuf.h>
Detailed Description
An implementation of a cyclic buffer oriented for sound functionality.
Definition at line 34 of file cyclicbuf.h.
Constructor & Destructor Documentation
Construct the cycling buffer with a specific maximum storage size.
Member Function Documentation
Add bytes to the cyclic buffer.
The bytes must fit. Use GetFreeBytes() to check for available space and AdvanceStartValue() to make more space available.
Advance the first byte pointer of the cyclic buffer so that data below this value can be overwritten.
Clear the buffer and reset the start and end values to the provided value.
Get data pointers to copy data out of the cyclic buffer.
The positional value associated with one byte beyond the last byte in the buffer.
Return the number of free bytes in the cyclic buffer.
Get the buffer length of the cyclic buffer in bytes.
Definition at line 81 of file cyclicbuf.h.
The positional value associated with the first byte of data in the buffer.
The documentation for this class was generated from the following file:
- csplugincommon/sndsys/cyclicbuf.h
Generated for Crystal Space 1.0.2 by doxygen 1.4.7
- 31 Oct, 2017 8 commits
- 30 Oct, 2017 32 commits
- Dmitriy Zaporozhets authored
This reverts merge request !15009
- Annabel Dunstone Gray authored
Enable NestingDepth (level 6) on scss-lint Closes #39582 See merge request gitlab-org/gitlab-ce!15073
- Douwe Maan authored
Fix 500 error for old (somewhat) MRs Closes #36540 See merge request gitlab-org/gitlab-ce!14945
- AlexWayfer authored
- Pawel Chojnacki authored
Upgrade Gitaly to v0.50.0 See merge request gitlab-org/gitlab-ce!15050
- Yorick Peterse authored
Merge branch '39054-activerecord-statementinvalid-pg-querycanceled-error-canceling-statement-due-to-statement-timeout' into 'master' Resolve "ActiveRecord::StatementInvalid: PG::QueryCanceled: ERROR: canceling statement due to statement timeout" Closes #39054 See merge request gitlab-org/gitlab-ce!15063
Add docs for backing up to Google Cloud Storage See merge request gitlab-org/gitlab-ce!15074
Add missing circuitbreaker metrics to prometheus documentation See merge request gitlab-org/gitlab-ce!15062
* master: (51 commits) Move locked check to a guard-clause Ci::Build tag is a trait instead of an own factory [CE backport] Saved configuration for issue board Use the correct project visibility in system hooks Add changelog more readable changelog Make merge_jid handling less stateful in MergeService Fetch the merged branches at once remove extra whitespace use a delegate for `username` to be more future friendly Merging EE doc into CE add changelog entry Avoid using Rugged in Gitlab::Git::Wiki#preview_slug Cache commits on the repository model Remove groups_select from global namespace & simplifies the code Change default disabled merge request widget message to "Merge is not allowed yet" Semi-linear history merge is now available in CE. Remove repetitive karma spec Improve spec to check hidden component Rename to shouldShowUsername ...
- Grzegorz Bizon authored
Ci::Build tag is a trait instead of an own factory See merge request gitlab-org/gitlab-ce!15077
Fix widget of locked merge requests not being presented See merge request gitlab-org/gitlab-ce!15069
- Zeger-Jan van de Weg authored
Minor annoyance of mine, and there were a couple of things wrong, for example: 1. Switching on a property is just a trait 2. It didn't inherrit from its parent Find and replace through the code based fixed all occurances.
[CE backport] Saved configuration for issue board See merge request gitlab-org/gitlab-ce!15009
Use the correct project visibility in system hooks Closes #39496 See merge request gitlab-org/gitlab-ce!15065
Announcing ML.NET 1.4 Preview and Model Builder updates (Machine Learning for .NET)
Cesar, Image Classification and more!
Following are some of the key highlights in this update:
ML.NET Updates
ML.NET 1.4 Preview is a backwards compatible release with no breaking changes so please update to get the latest changes.
In addition to bug fixes described here, in ML.NET 1.4 Preview we have released some exciting new features that are described in the following sections.
Database Loader (Preview)
This feature introduces a native database loader that enables training directly against relational databases. This loader supports any relational database provider supported by
System.Data in .NET Core or .NET Framework, meaning that you can use any RDBMS such as SQL Server, Azure SQL Database, Oracle, SQLite, PostgreSQL, MySQL, Progress, IBM DB2, etc.
In previous ML.NET releases, since ML.NET 1.0, […]

DatabaseSource dbSource = new DatabaseSource(SqlClientFactory.Instance, connectionString, commandText);
IDataView trainingDataView = loader.Load(dbSource);
// ML.NET model training code using the training IDataView
//...

public class SentimentData
{
    public string FeedbackText;
    public string Label;
}
This feature is in preview and can be accessed via the
Microsoft.ML.Experimental v0.16-Preview nuget package available here.
For further learning see this complete sample app using the new DatabaseLoader.
Image classification with deep neural networks retraining (Preview)
This new feature enables native DNN transfer learning with ML.NET, targeting image classification as our first high level scenario.
For instance, with this feature you can create your own custom image classifier model by natively training a TensorFlow model from ML.NET API with your own images.
Image classifier scenario – Train your own custom deep learning model with ML.NET
In order to use TensorFlow, ML.NET is internally taking dependency on the Tensorflow.NET library.
The Tensorflow.NET library is an open source and low level API library that provides the .NET Standard bindings for TensorFlow. That library is part of the SciSharp stack libraries.
Microsoft (the ML.NET team) is closely working with the TensorFlow.NET library team not just for providing higher level APIs for the users in ML.NET (such as our new ImageClassification API) but also helping to improve and evolve the Tensorflow.NET library as an open source project.
We would like to acknowledge the effort and say thank you to the Tensorflow.NET library team for their agility and great collaboration with us.
The stack diagram below shows how ML.NET implements these new DNN training features. Although we currently only support training TensorFlow models, PyTorch support is in the roadmap.
As the first main scenario for high level APIs, we are currently focusing on image classification. The goal of these new high-level APIs is to provide powerful and easy to use interfaces for DNN training scenarios like image classification, object detection and text classification.
The below API code example shows how easily you can train a new TensorFlow model which under the covers is based on transfer learning from a selected architecture (pre-trained model) such as Inception v3 or Resnet.
Image classifier high level API code using transfer learning from Inceptionv3 pre-trained model
var pipeline = mlContext.Transforms.Conversion.MapValueToKey(outputColumnName: "LabelAsKey", inputColumnName: "Label")
    .Append(mlContext.Model.ImageClassification("ImagePath", "LabelAsKey",
        arch: ImageClassificationEstimator.Architecture.InceptionV3)); // Can also use ResnetV2101

// Train the model
ITransformer trainedModel = pipeline.Fit(trainDataView);
The important line in the above code is the one using the
mlContext.Model.ImageClassification classifier trainer which as you can see is a high level API where you just need to select the base pre-trained model to derive from, in this case Inception v3, but you could also select other pre-trained models such as Resnet v2101. Inception v3 is a widely used image recognition model trained on the ImageNet dataset. Those pre-trained models or architectures are the culmination of many ideas developed by multiple researchers over the years and you can easily take advantage of it now.
The DNN Image Classification training API is still in early preview and we hope to get feedback from you that we can incorporate in the next upcoming releases.
For further learning see this sample app training a custom TensorFlow model with provided images.
Enhanced for .NET Core 3.0
ML.NET is now building for .NET Core 3.0.
Model Builder in VS and CLI updated to latest GA version
The Model Builder tool in Visual Studio and the ML.NET CLI (both in preview) have been updated to use the latest ML.NET GA version (1.3) and addresses lots of customer feedback. Learn more about the changes here.
Model Builder updated to latest ML.NET GA version
Model Builder uses the latest GA version of ML.NET (1.3) and therefore the generated C# code also references ML.NET 1.3.
Improved support for other OS cultures
This addresses many frequently reported issues where developers want to use their own local culture OS settings to train a model in Model Builder. Please read this issue for more details.
Customer feedback addressed for Model Builder
There were many issues fixed in this release. Learn more in the release notes.
New sample apps
Coinciding with this new release, we’re also announcing new interesting sample apps covering additional scenarios:
New ML.NET video playlist at YouTube
We have created an ML.NET YouTube playlist at the .NET Foundation channel with a list of selected videos, each focusing on a single ML.NET feature, so it is great for learning purposes.
Access the ML.NET YouTube playlist here.
Happy coding!
This blog was authored by Cesar de la Torre and Eric Erhardt plus additional contributions of the ML.NET team.
Acknowledgements
- As mentioned above, we would like to acknowledge the effort and say thank you to the Tensorflow.NET library team for their agility and great collaboration with us. Special kudos to Haiping (Oceania2018).
- Special thanks for Jon Wood (@JWood) for his many and great YouTube videos on ML.NET that we’re also pointing from our ML.NET YouTube playlist mentioned in the blog post. Also, thanks for being an early adopter and tester for the new DatabaseLoader.
I really find it hard to step into the world of ML.NET because although I have a clear idea of what I want, I have no idea how to achieve it. There are sample applications that do work, but the syntax is too difficult and too little explained for me to understand how to make this work for my purposes.
Let’s say I want to add face recognition. I have a bunch of pictures of myself and of random people and I want to find myself in pictures. Where do I start? Do I create a folder called “me” and put all images in there? OK, and how do I train a machine knowing that it should identify my face in these pictures in a folder called “me”? And how do I run it against other pictures, telling the machine to focus on faces?
There is an “Image Classification” example. But the explanation is terrible:

mlContext.Transforms.Conversion.MapValueToKey(outputColumnName: "LabelAsKey",
    inputColumnName: "Label",
    keyOrdinality: ValueToKeyMappingEstimator.KeyOrdinality.ByValue)
    .Append(mlContext.Model.ImageClassification("ImagePath", "LabelAsKey",
        arch: ImageClassificationEstimator.Architecture.InceptionV3,
        epoch: 100,
        batchSize: 30,
        metricsCallback: (metrics) => Console.WriteLine(metrics)));

This is explained as “you define the model’s training pipeline where you can see how easily you can train a new TensorFlow model”. Well, for someone who already knows what these lines do it might be “EASY” to see, but I just ask myself what the hell outputColumnName is supposed to be. arch? epoch? batchSize? metricsCallback? I would love to see a basic walkthrough of ML.NET for complete beginners, taking a real-world scenario and explaining it from the ground up: what you need first, how the data should be stored so ML.NET can read it, then the syntax, and so on. As of now all of these samples seem to be only useful for those who understand ML.NET already.
Hi Max. Thanks a lot for your feedback. I completely agree. However, take into account that this feature (the new Image Classification API natively training a TensorFlow model under the covers) is still in Preview, and we’re actually working on further documentation and tutorials/walkthroughs to be published when the API is GA. But today we’re simply making the Preview available for early adopters.
In addition to that, that API might get further simplifications and improvements, that’s why it is still in Preview. 🙂
In the meantime, if you want to go ahead and try to create your own image classifier model with it, feel free to send me an email to cesardl at microsoft.com so I can explain any question you might have about that code and help you while you also provide additional feedback.
Has the Tensorflow .net library replaced the TensorFlowSharp version of the Tensorflow bindings?
Not sure what you mean by ‘replaced’. In general? Within ML.NET?
In any case, for ML.NET, it does. Originally we were using code from TensorFlowSharp, but since release 1.3.1, ML.NET takes a dependency on TensorFlow.NET, including when simply scoring a TensorFlow model.
Great work! How about support for Azure Storage Tables or CosmosDb?
If those data sources (Azure Storage Tables, Azure Cosmos DB, or any other) are important for your business scenarios, in addition to relational databases, please submit an issue at the ML.NET GitHub repo requesting them and explaining why those sources would work better for your scenarios, ok?
That would help us prioritize our backlog depending on the needs of users like you. 🙂
There is a big problem with Model Builder: it does not support vector columns. I have tried to export the data to a text file directly from ML.NET with SaveAsText, and Model Builder cannot load the vector columns and cannot identify the columns. The same text file can be loaded without any problem with ML.NET.
Also, with the SQL database I don’t know how to create the data with vectors in order to load it in Model Builder.
@Mario M. – For that problem with Model Builder, can you submit an issue at the Model Builder repo here?:
I know Model Builder in VS lacks support for that feature (vector columns) as of today, but the more feedback we get from users requesting it, the higher priority it’ll have in our backlog. 🙂
It’s a pity about
Console.WriteLine($”Predicted Label: {clickPrediction.PredictedLabel} – Score:{Sigmoid(clickPrediction.Score)}”, Color.YellowGreen);
in the DatabaseLoader example. You must have meant to use the ColorfulConsole NuGet package. OK, but ConsoleHelper uses System.ConsoleColor. It is remarkable that one can see scores at all 🙂
It would also be good to know what kind of results you expect the program to produce. This also applies to practically every other sample program.
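As an aside on the Sigmoid(clickPrediction.Score) call above: a binary classifier’s raw score is unbounded, and the sigmoid squashes it into (0, 1) so it reads as a probability. A minimal Python sketch of that helper (just the standard formula, not ML.NET code):

```python
import math

def sigmoid(score):
    # Maps an unbounded raw classifier score into (0, 1).
    return 1.0 / (1.0 + math.exp(-score))

print(sigmoid(0.0))            # 0.5 -- a zero score means "no idea either way"
print(round(sigmoid(4.0), 3))  # strongly positive score, so close to 1
```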
I want to create a bot using ML.NET. Can you tell me what features could be useful for this?
@Ashkan – Great question! – You can run an ML.NET model to make predictions within a bot very easily. Basically, a bot built with the Microsoft Bot Framework () is nothing more than an ASP.NET Core WebAPI, and since you can run ML.NET in any .NET application as long as it is .NET Core or .NET Framework running on x64 or x86 (see details on ML.NET support here:), you can do it very easily.
Steps:
– 1. Decide on your machine learning problem to solve and model type (ML Task). You can see many types here:
– 2. Once you have the model working okay in a test/console app, you need to ‘deploy’ it into the ASP.NET Core WebAPI which is the bot. For doing that (using an ML.NET model in an ASP.NET Core WebAPI), review this sample and related tutorial so you use it properly with the PredictionEnginePool, specially made for scalable apps such as a WebAPI (the bot in this case):
—
—
Hope it helps! 🙂
Hi i, great article, is it possible to use the model in a Xamarin app ?
Thanks
Hi Hernando. Good point. So, currently, even though ML.NET is cross-platform (Windows, Linux, macOS) thanks to .NET Core, there are many internal ML algorithms implemented natively in C/C++. Those ‘native’ areas don’t currently support ARM processors. That means that running ML.NET on ARM-based devices such as iOS or Android (Xamarin targets), and also on IoT ARM-based devices, is not supported by ML.NET, since in terms of processors we just support x64 and x86 (see our current OS and processor support here:).
Currently, there’s a clear workaround for you, which is to run the ML.NET models in HTTP services (or any remote solution running on a server) that would be consumed by the Xamarin apps.
However, support for ARM processors (and hence support for Xamarin) is in our roadmap and backlog.
Important: if you want to influence our roadmap/backlog priorities, please do provide your feedback/requests in the ML.NET repo as an issue, or write your feedback in existing issues.
For this topic, please, write your feedback and why this scenario is important for you in the following issue (Feel free to re-open or create a new issue and link to this one):
Thanks for your feedback,
Thanks Cesar,
Yes I’ll definitely give feedback to support ARM processors .
I hope you guys understand what you are doing when taking a dependency on TensorFlow.NET
At least TensorFlowSharp was properly auto-generated from public C interface.
TensorFlow.NET team is quite misleading. They advertise on their NuGet package, that they support full TensorFlow API. And you can find most classes in the package, but when you look at the source code, they are basically empty even with no NotImplementedException being thrown. For instance:
And here’s the real one: for comparison.
I know because I am making a competing product, and pointed this problem out to them before; since then they have implemented AdamOptimizer by hand-converting source code from Python. But will they maintain that converted code? I highly doubt it.
So just beware, that if you expose functionality in ML.NET through TensorFlow.NET some pieces might just silently do nothing.
@LOST_ Yes, we know what we are doing and we also know what you are doing 🙂 TensorFlowSharp is also a Microsoft product and its author himself has endorsed TF .NET, please see. Let’s not bash people like that; there is merit to the work TF .NET has done and we really appreciate our collaboration with them. We also appreciate the quick fixes they have made for us to make the product more reliable, and we are committed to making TF .NET even better. At the end of the day, our high-level DNN APIs are benchmarked against official Python TensorFlow APIs and we ensure the performance and accuracy numbers match. Which C# TensorFlow bindings we use is an implementation detail; it is not important if some bindings are missing as long as we don’t need them. That being said, whenever we have needed any bindings, the TF .NET team has provided them very quickly, and we really appreciate their effort.
The new version is great! Especially new features to work with database.
How to train and generate the model for Word2Vec ? And how to use it for words vectors like: king+woman-man=queen?
I have build this for NET implementation of Word2Vec.
I could not find how to do it in ML.NET. May be there are any samples?
@Win Pooh – Thanks for your feedback. About featurizing text into numeric vectors: the way you do it in ML.NET is with the transformer-estimator called ‘FeaturizeText’, which you can use from ‘mlContext.Transforms.Text.FeaturizeText’.
You can see an example here (Sentiment Analysis):
However, simply featurizing text into numeric vectors might not be exactly what you mean? If so, can you elaborate further on the reasons why you think a specific Word2Vec DNN integration implementation would be important for ML.NET?
Thanks for your feedback.
@Cesar De la Torre Thank you for the answer.
That is my idea:
I have a book store database, one table contains books title, amount etc etc and description.
User has found a book by description and wants to find top N similar (by description) books.
Yes, it is not easy to implement, but I’d like to try to find some solutions. In any case it is interesting task :-).
It may be Word2Vec based solution like Doc2Vec or something other.
I have solved one subtask: an ML.NET-based lib which can identify the language of a given piece of text.
The model trained to recognize 7 languages: en, de, es, it, ro, ru, uk, but it can be extended.
By the way, TensorFlowNET contains the example:
We have customers using an approach similar to the one we use for the ‘GitHub issues automatic labeling sample’ (), but for detecting/recognizing multiple languages (en, de, it, etc…) or many other business scenarios.
It all depends on how the data is labeled.
Or, if the data is not labeled, then you could featurize the text then do a “books segmentation” by using a clustering algorithm?
Could you try an approach similar to the ‘GitHub issues automatic labeling sample’ I mentioned, and if that is not enough for you, send me feedback? – Feel free to send further details by email to cesardl at microsoft.com.
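To make the “featurize text into numeric vectors, then compare” idea concrete, here is a tiny hand-rolled sketch in plain Python — a bag-of-words count vector and cosine similarity, not ML.NET’s FeaturizeText, and the book descriptions are invented:

```python
import math
from collections import Counter

def featurize(text):
    # Bag-of-words term counts stand in for a real text featurizer.
    return Counter(text.lower().split())

def cosine(a, b):
    # Cosine similarity between two sparse count vectors.
    dot = sum(a[t] * b[t] for t in a if t in b)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

books = {
    "A": "space opera about a galactic empire at war",
    "B": "cookbook of quick vegetarian recipes",
    "C": "war story set in a distant galactic empire",
}

# Rank every other book by similarity of description to book "A".
query = featurize(books["A"])
ranked = sorted(
    ((cosine(query, featurize(d)), title) for title, d in books.items() if title != "A"),
    reverse=True,
)
print(ranked[0][1])  # "C" -- the other galactic-empire book
```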
Ok, I will check these ideas and let you know, thank you.
I ran the DatabaseIntegration demo; there is this code in Program.cs:
…
Console.WriteLine(“Training model…”);
var model = pipeline.Fit(trainTestData.TrainSet);
…
i.e. every time, the model is trained and created.
The question: is it possible to save the model in a database after it has been trained and created once, and then reuse the saved one?
It can be saved to zip file and loaded. But how to save/load it to database?
An ML.NET model can only be serialized and saved as a .ZIP file which can be stored in any place you can store files.
The most common approach is to store it as a file as part of the assets/resources of an application but it could also be stored as a file in a Blob in Azure Storage blobs, or in any Http repository.
As for storing the .zip file in a database: it could work if you store it as a blob/binary type in a table, but you’d be responsible for the code reading/saving that blob/binary column. We currently don’t have any ad hoc API for that.
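As a sketch of that blob-in-a-table approach (plain Python with sqlite3 standing in for the real database; the table name, column names, and byte payload are invented — an ML.NET model’s .zip bytes would take the payload’s place):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE models (name TEXT PRIMARY KEY, blob BLOB)")

# Pretend these bytes are a serialized model .zip read from disk.
model_bytes = b"PK\x03\x04...fake zip payload..."

# Save: store the raw bytes in a BLOB column (upsert on name).
conn.execute(
    "INSERT OR REPLACE INTO models (name, blob) VALUES (?, ?)",
    ("demo-model", model_bytes),
)

# Load: read the bytes back; every client querying the table
# sees the latest version of the model.
(restored,) = conn.execute(
    "SELECT blob FROM models WHERE name = ?", ("demo-model",)
).fetchone()
assert restored == model_bytes
print(len(restored))
```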
It is an interesting scenario, though. What are your motivations for doing this? Could you provide further feedback?
Hi,
Ok, for example, my scenario is:
I provide a client->server system to customers.
The client app contains some ML methods to find similar texts, to predict something etc.
And the trained model is provided in the db.
I.e. all applications work with this db and use the same model, instead of a zip file on every client machine.
Say I decide to retrain the model for some reason and refresh it in the customer environment.
In the case of a database, I provide one script, it is updated once in one place, and the fresh model appears immediately.
In the case of a zip file, it has to be provided and distributed to every client machine, and I am not sure it would be refreshed on all machines.
Well, it depends on the scenario. If the app is a web app or you have services (such as an ASP.NET WebAPI service), you can also load the ML.NET model from a remote HTTP endpoint like in the link below, but using FromUri() instead of .FromFile():
But I agree that if you want to run code “within SQL Server” (such as a C# SQL Server function or stored procedure), it makes sense to have everything you need secured and available in the database server. Not just for scoring, but also for saving after training, close to the database.
Another scenario would be for traditional client/server apps with the client apps directly accessing a database…
I’ve created this issue to continue the discussion in the open:
Thanks for the feedback! 🙂
Thank you, great! I have added comment to the issue.
Hi Cesar,
Is this feature in your plans? When can we expect it? Thanks! 🙂
Attached to this post is a version of
XDSP.H that uses DirectXMath and compiles for x86, x64, and ARM. It can be used with either Visual Studio 2012 (aka VS 11) or VS 2010 with the Windows 8.0 SDK, and compiles for both Win32 desktop applications and Windows Store apps.
Full documentation for XDSP can be found in the offline help for the DirectX SDK (June 2010) release. These functions are in the
XDSP C++ namespace, and memory buffers provided to these functions must be 16-byte aligned.
Note that
vmulComplex,
ButterflyDIT4_1, and
ButterflyDIT4_4 are internal implementation functions and not intended for direct use.
Note: There is a Windows Store app sample on MSDN Code Gallery that makes use of this header.
Update: This project is now available on GitHub under the MIT license.
Does it work in Windows Phone 8?
When I call FFT() in WinPRT, the assert() makes my app terminate.
XDSP.h compiles fine for Windows Store apps on x86, x64, and ARM as well as Win32 desktop x86 / x64. It should therefore work fine for Windows phone 8 as well as DirectXMath is supported on all of these platforms.
I suspect the problem is you aren't respecting the alignment requirement of the memory buffer (XMVECTOR requires 16-byte aligned memory). On x86 and ARM, the default malloc or new is only 4-byte aligned. It is 16-byte aligned by default only on x64 native. You need to use _aligned_malloc( size, 16 ) to ensure the memory buffer you allocate is 16-byte aligned.
I recommend you take the time to read through the DirectXMath Programmer's Guide. It's not long, and gives you a lot of useful information for working with SIMD math.
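To see why the assert fires, it helps to picture what an aligned allocator does. Below is a Python illustration of the over-allocate-and-round-up technique behind _aligned_malloc (names and sizes here are arbitrary; real code would simply call _aligned_malloc in C/C++):

```python
import ctypes

ALIGN = 16  # XMVECTOR requires 16-byte alignment

def aligned_buffer(size, align=ALIGN):
    """Allocate `size` bytes whose start address is a multiple of `align`.

    Mimics the over-allocate-and-round-up trick: grab size + align - 1
    bytes, then step forward to the next aligned address in that block.
    """
    raw = ctypes.create_string_buffer(size + align - 1)
    base = ctypes.addressof(raw)
    offset = (-base) % align          # bytes needed to reach the next multiple
    aligned_addr = base + offset
    # Keep `raw` alive alongside the view; freeing raw invalidates the view.
    view = (ctypes.c_char * size).from_address(aligned_addr)
    return raw, view, aligned_addr

raw, buf, addr = aligned_buffer(1024)
print(addr % ALIGN)  # 0 -- an assert like the one in FFT() would now pass
```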
Hi Chuck, thanks for your quick reply.
Actually, I'm porting the Windows Store app sample on MSDN Code Gallery, as you said, to Windows Phone 8 (code.msdn.microsoft.com/…/Audio-player-with-real-c4413a30). It works on Win8 (x64).
I'm sure it uses _aligned_malloc( size, 16 ) in FFTSampleAggregator::FFTSampleAggregator() of FFTSampleAggregator.cpp.
Then it calls XDSP::FFTInterleaved(); FFTInterleaved() calls FFT(), and in FFT() it hits the error in assert((uintptr_t)pImaginary % 16 == 0);
Could you help check it, or give me some hints?
thank you.
thank you.
Hi Chuck, I also use the Windows Store app sample on MSDN Code Gallery (code.msdn.microsoft.com/…/Audio-player-with-real-c4413a30) on Windows RT; it has the same problem.
screen shot as below.
imageshack.us/…/o3uk.jpg
DavitosanX replied to C's topic in For Beginners:
Take a look at Paint.net. I like it because Gimp and Photoshop have far too many features and I don't know how to use them very well. Paint.net is a lot slimmer but still very capable. It feels like MS Paint, but it has layers, transparency and other interesting features. Also, it's free!
DavitosanX posted a topic in 2D and 3D Art:
Just curious if anyone has ever used povray to develop graphical assets for their game. A friend of mine is currently enrolled in a povray course, using it for molecular models, and I looked into it. It's very interesting, and it creates great looking images. In my case, whenever I've tried to use Blender to create a 3d image, the results are often less than satisfactory, but since in povray you don't 'sculpt' your image, but rather program it in code, I almost always get the result I want. I'm still working with basic shapes and lighting, but so far I've made some interesting images. These might not be very impressive, but I made them a lot faster than I would've done in Blender. Also, I'm not sure about this, but I understand that since this is ray tracing, not 3d meshes, the spheres, cylinders and cones have real curves, and should look better. So, is anyone using povray?
DavitosanX replied to Sten9911's topic in Games Career Development:
I'll point you to Tom Sloper's FAQ about the videogame industry. At least for me it was a real eye opener. To get started on game programming, you don't even have to spend money. There are several free online tutorials that will help you become fluent in any language you choose, and to get started in game programming. You can check out:
- The official Python tutorial
- Cplusplus.com (for C++ programming)
- Lazy Foo's SDL tutorial
- The SFML tutorial
Those sites could keep you entertained for a couple of months :D Of course, if you have no interest in C++, Python, SDL or SFML, then you can look for learning material on your subject of choice. Do ask around in these and other forums for recommendations on which tutorial to use, because it's not always a matter of taste, and some tutorials are really better than others.
DavitosanX replied to Reksaida's topic in For Beginners:
You could always take a look into RPG Maker XV. I got it pretty cheap in a Humble Bundle sale, and the Steam Winter Sale is bound to start soon :D There are a few successful commercial games made with it, "To the Moon" being a good example. If you're a complete beginner, it may be helpful to start with a tool that allows you to focus on making a game, which RPG Maker does. It comes with a few basic assets (map tiles, character sprites, music, etc.) with which you could build an entire game if you wanted. I'd suggest you download the free Lite version, see if it would fit your project, and play around with it. Since you want to make an RPG, maybe it's not necessary to reinvent the wheel (programming-wise) when there are already engines that can do the heavy lifting for you.
DavitosanX replied to Shay Yizhak's topic in Writing for Games:
Looks great to me, but then again, English isn't my first language either. Since it's fantasy, you can turn this into an advantage, as the dialogue may sound faintly foreign, different from the player's everyday speech.
DavitosanX replied to RiyoHime's topic in For Beginners:
When I started out with visual games, I found both Lazy Foo's SDL tutorials and the official SFML tutorials to be very helpful. I think that maybe you should take a step back and forget for a second about making a game. You said that you are already capable of coding a decent game on the console, so maybe you don't need to worry about game-related programming right now. Instead, set these simple goals:
- Create a window.
- Keep that window running, and be able to close it.
- Fill that window with color.
- Display an image in this window.
- Move the image through code.
- Move the image through player input.
Each one would be a project by itself that you could complete rather quickly (a couple of hours in most cases). Once you are able to move a sprite around with the keyboard arrows or the mouse, you'll already be able to explore a huge number of possibilities in game making.
DavitosanX replied to hakura88's topic in GDNet Lounge:
If you set out to do something like this, and choose Python over PHP, remember to check if your webpage's host can handle Python scripts. Also, this link contains information on web frameworks for Python:
DavitosanX replied to Ben Stuart's topic in For Beginners:
If you go the C++ way, check out SFML and the tutorials on their webpage. If you know your way around general programming, you can build a very decent game in relatively short time. Still, you shouldn't entirely discard the idea of working with a game engine. If you use Unity, you still have to do some programming in C#, but mostly for game logic, whereas in a project made from scratch you'd have to deal with all kinds of technical issues.
DavitosanX replied to DavitosanX's topic in General and Gameplay Programming:
Thank you both for the encouragement! I will definitely take some time to really get into your answers. It may be evident now that my knowledge of vector math is poor, and I don't really know how to apply it. I'll read into that, as it seems it'll be far more useful than trigonometry for computer graphics. About encapsulation, I'm still trying to get it right. I've struggled to grasp OO concepts, but it appears I'm not so far off now. About optimization, well, I'll certainly take your suggestions into account. But maybe I'll wait a little before applying them, as it may turn a small concept program into a more complex, daunting task. Thank you again!
DavitosanX posted a topic in General and Gameplay Programming:
Hello! I'm not sure if this belongs here, or in the beginners' section, so excuse me if this code is too bad, or too basic. I had set a short term goal for myself as an amateur programmer: to implement a hexagonal grid, similar to the one found in the original Fallout. You should be able to move your mouse around, and the hexagon that contains the mouse pointer should be highlighted. I thought it would be a good exercise because, unlike a square grid, determining which hexagon contains the mouse pointer is trickier. I did finish the program, and it does exactly what I want, but I do tend to overcomplicate things and I would appreciate it if people with more experience took a look at it and gave me any tips. This was coded in Python with pygame.

import pygame
import math

INITIAL_HEXAGON_VERTICES = ((-40,-40),(40,-40),(45,0),(40,40),(-40,40),(-45,0))
GRID_HEIGHT = 10
GRID_WIDTH = 10
VERTEX_COUNT = 6
X_ELEMENT = 0
Y_ELEMENT = 1
FIXED_ANGLE = 0.122  # 7 degrees in radians
NOT_MOVING = (0,0)

def calculate_angle(fixed_point, var_point):
    opposite = math.fabs(fixed_point[X_ELEMENT] - var_point[X_ELEMENT])
    adjacent = math.fabs(fixed_point[Y_ELEMENT] - var_point[Y_ELEMENT])
    if adjacent == 0:
        adjacent = 0.1
    angle = math.atan(opposite / adjacent)
    return angle

class Hexagon:
    def __init__(self, num, ver):
        self.number = num
        self.vertices = ver

class InputManager:
    def check_events(self):
        for event in pygame.event.get():
            if event.type == pygame.QUIT:
                game.running = False

    def mouse_in_grid(self, mouse_pos, hexagons):
        result = 0
        for counter, hexagon in enumerate(hexagons):
            if (mouse_pos[X_ELEMENT] > hexagon.vertices[5][X_ELEMENT] and
                    mouse_pos[X_ELEMENT] < hexagon.vertices[2][X_ELEMENT] and
                    mouse_pos[Y_ELEMENT] >= hexagon.vertices[0][Y_ELEMENT] and
                    mouse_pos[Y_ELEMENT] < hexagon.vertices[3][Y_ELEMENT]):
                result = hexagon.number
            if (mouse_pos[X_ELEMENT] < hexagon.vertices[0][X_ELEMENT] and
                    mouse_pos[Y_ELEMENT] < hexagon.vertices[5][Y_ELEMENT]):
                angle = calculate_angle(hexagon.vertices[0], mouse_pos)
                if angle < FIXED_ANGLE:
                    result = hexagon.number
            if (mouse_pos[X_ELEMENT] > hexagon.vertices[1][X_ELEMENT] and
                    mouse_pos[Y_ELEMENT] < hexagon.vertices[2][Y_ELEMENT]):
                angle = calculate_angle(hexagon.vertices[1], mouse_pos)
                if angle < FIXED_ANGLE:
                    result = hexagon.number
            if (mouse_pos[X_ELEMENT] > hexagon.vertices[3][X_ELEMENT] and
                    mouse_pos[Y_ELEMENT] > hexagon.vertices[2][Y_ELEMENT]):
                angle = calculate_angle(hexagon.vertices[3], mouse_pos)
                if angle < FIXED_ANGLE:
                    result = hexagon.number
            if (mouse_pos[X_ELEMENT] < hexagon.vertices[4][X_ELEMENT] and
                    mouse_pos[Y_ELEMENT] > hexagon.vertices[5][Y_ELEMENT]):
                angle = calculate_angle(hexagon.vertices[4], mouse_pos)
                if angle < FIXED_ANGLE:
                    result = hexagon.number
        return result

class Game:
    def __init__(self, resolution, caption):
        self.screen = pygame.display.set_mode(resolution)
        pygame.display.set_caption(caption)
        self.clock = pygame.time.Clock()
        self.running = True
        self.gray = (220,220,220)
        self.green = (50,240,50)
        self.black = (0,0,0)
        self.hexagons = []
        self.current_hexagon = 0

    def draw_screen(self):
        self.screen.fill(self.gray)
        if pygame.mouse.get_rel() != NOT_MOVING:
            self.current_hexagon = input_manager.mouse_in_grid(pygame.mouse.get_pos(), self.hexagons)
        pygame.draw.polygon(self.screen, self.green, self.hexagons[self.current_hexagon].vertices, 3)
        pygame.display.flip()

    def calculate_grid_points(self):
        number = 0
        for column in range(GRID_WIDTH):
            for row in range(GRID_HEIGHT):
                points = []
                lift_hexagon = 0
                if column % 2 != 0:
                    lift_hexagon = 40
                for point in range(VERTEX_COUNT):
                    points.append(
                        (INITIAL_HEXAGON_VERTICES[point][X_ELEMENT] + (85 * column),
                         (INITIAL_HEXAGON_VERTICES[point][Y_ELEMENT] + (80 * row)) - lift_hexagon))
                new_hexagon = Hexagon(number, points)
                self.hexagons.append(new_hexagon)
                number += 1

    def main_loop(self, framerate):
        self.calculate_grid_points()
        while self.running:
            self.clock.tick(framerate)
            input_manager.check_events()
            self.draw_screen()
        pygame.quit()

input_manager = InputManager()
game = Game((800,600), "Game")
game.main_loop(60)

Thanks in advance!
DavitosanX replied to Dalphin's topic in For Beginners:
You can pick any engine you like, even if it doesn't support Python. I say that because whatever you have learned in Python so far will most likely be applicable in another language. Unity 3D, for example, uses either JavaScript or C# as its scripting languages, and those aren't that different from Python when it comes to the basic building blocks. Of course, there will be some learning involved, but if you already know how to solve problems through programming, then learning some new syntax and a slightly different paradigm isn't too hard.
DavitosanX replied to Dalphin's topic in For Beginners:
I found this site very helpful when I started learning C++. I hope it helps you too. Also, try out different IDEs. Choosing the one that feels most comfortable to you is crucial, because it makes the whole experience a lot less frustrating. Personally, I ended up using gedit and g++. Good luck on your endeavors!
- The point I'm trying to make is that these basic concepts are what a new programmer should be learning, and just like you said, they are very much alike between languages. It doesn't really matter if the OP starts out with Java or JavaScript; learning these building blocks, and most importantly the reasoning behind them, will make the transition from one to the other painless.
- Well, I meant to say things like:
  - How do statements end (semicolon or no semicolon)
  - How to write if's, for's and while's
  - Operators
  - Class and function definition

These are some differences between Python and C++.

Python:

#Statements need no semicolon
print("Hello world!") #Function call. Arguments go inside parentheses. Strings go between quotes.
x = 0 # Assignment operator
#Structure of an IF
if x == 0: # Comparison operators
    print("True") # Forced indentation for scope, no brackets.
else: # Colon after if, elif, else
    print("False")
list = [1,2,3,4]
# Structure of a for loop
for item in list:
    print(item)
# Structure of a while loop
while list[0] != 1:
    print("Looping...")
# Function definition
def foo(arg1, arg2): # keyword def, arguments between parentheses, colon
    -- code to run -- # Forced indentation
# Class definition
class Foo:
    def __init__(self):
        -- constructor code --
    -- define more attributes and methods --

C++:

// Needs main function
#include <iostream>
using namespace std;
// Statements need semicolons
int main()
{
    cout << "Hello World!" << endl; // Bitwise shift operator
    return 0; // Indentation is optional, brackets are necessary to indicate scope
}
int x = 0; // Assignment operator (strongly typed language, not syntax related, though)
// Structure of an IF
if (x == 0)
{
    cout << "True" << endl;
}
else
{
    cout << "False" << endl;
}
// For loop
for (int i = 0; i < 10; i++)
{
    // some code
}
// While loop
while (variable == value)
{
    // some code
}
// Function definition
void Foo(int arg1, bool arg2) // starts with type, arguments between parentheses, also with type
{
    // some code
}
// Class definition
class Foo
{
    int privateAttribute; // Attribute
public:
    Foo() // Constructor
    {
        privateAttribute = 0;
    }
};

The examples might not be 100% accurate, but my point is that if you know what you're doing, syntax shouldn't be too much of a problem. How long did it take you to see the differences between the two blocks of code?
And notice how syntax is consistent across the language, so the differences between a language and another tend to be consistent also. A different thing is the actual use and logic of each language which can be extremely different, and it can take weeks or months to make the switch from one language to another.
DavitosanX replied to ObscurityGamer's topic in For Beginners:
One of the benefits of using RPG Maker is that it has a Ruby-based script editor, though I'm not sure if it's also included in the Lite version. With it, you can learn some basic programming while developing an actual, playable game. Also, not trying to discourage you, but you'll realize just how massive a project an RPG truly is. Of course you can do it, even by yourself, but it's going to be hard work, and a long, long road before it's completed. If you'd like to start programming with Python, you should take a look at the tutorial on Python's website. Make sure you understand every concept fully before moving on, and do lots of exercises! There's nothing worse than speeding through tutorials, since you won't remember much, and won't learn much either.
|
Problem Statement
The problem “Maximum path sum in a triangle” states that you are given some integers. These integers are arranged in the form of a triangle. You are starting from the top of the triangle and need to reach the bottom row. For doing this, you move to the adjacent cells in the next row. So when you are moving down the triangle in the defined manner, what is the maximum sum you can achieve?
Example
1
2 3
5 8 1
12
Explanation
You can simply move down the path in the following manner: 1 -> 3 -> 8. This path gives you the maximum sum, which is 12.
Approach
So how do we solve the Maximum path sum in a triangle? By now, we are pretty familiar with these kinds of problems. The brute-force approach is always to first generate all the possible ways to reach the destination, then compute the sum for each path and keep updating the answer with the best result. But this approach is highly inefficient, because it requires us to generate the paths, and path generation has exponential time complexity, which is not good.
So, to solve this, we need to think of another approach. Dynamic programming comes to our rescue. Instead of generating the paths, suppose we somehow knew the maximum sum that can be achieved from each cell on the way to the bottom row. Then we could derive the result for any cell adjacent to it in the row above. So, we use DP to solve the smaller subproblems, and by combining the results of those subproblems we find the answer to the original problem.
First, we fill in the answers for the cells in the last row. We know that the maximum sum achievable starting from a cell in the bottom row is the number itself. After that, we move to the row above the bottom row. For each cell in the current row, we add its value to the larger of the DP values of the two cells adjacent to it in the row just below. We keep going upward in this way; once we reach the top row, the problem is solved.
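The bottom-up recurrence above can be condensed into a few lines of Python (an illustrative sketch; the triangle is given as a list of rows):

```python
def max_triangle_path(triangle):
    """Bottom-up DP: fold each row into the one above it.

    triangle is a list of rows, e.g. [[1], [2, 3], [5, 8, 1]].
    """
    # Each entry of `best` holds the maximum sum reachable from that
    # cell down to the bottom row; start with the bottom row itself.
    best = list(triangle[-1])
    for row in reversed(triangle[:-1]):
        best = [v + max(best[j], best[j + 1]) for j, v in enumerate(row)]
    return best[0]

print(max_triangle_path([[1], [2, 3], [5, 8, 1]]))  # 12 (path 1 -> 3 -> 8)
```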
C++ code to find Maximum path sum in a triangle
#include <bits/stdc++.h>
using namespace std;

int maximumPathSumInTriangle(vector<vector<int>> &input) {
    int n = input.size();
    // Bottom-up DP: fold each row into the one above it.
    for (int i = n - 2; i >= 0; i--)
        for (int j = 0; j <= i; j++)
            input[i][j] += max(input[i + 1][j], input[i + 1][j + 1]);
    return input[0][0];
}

int main() {
    int n; cin >> n; // number of rows
    vector<vector<int>> input(n, vector<int>(n, 0));
    for (int i = 0; i < n; i++) {
        for (int j = 0; j <= i; j++)
            cin >> input[i][j];
    }
    cout << maximumPathSumInTriangle(input);
}
3
1
2 3
5 8 1
12
Java code to find Maximum path sum in a triangle
import java.util.*;

class Main {
    static int maximumPathSumInTriangle(int input[][], int n) {
        // Bottom-up DP: fold each row into the one above it.
        for (int i = n - 2; i >= 0; i--)
            for (int j = 0; j <= i; j++)
                input[i][j] += Math.max(input[i + 1][j], input[i + 1][j + 1]);
        return input[0][0];
    }

    public static void main(String[] args) {
        Scanner sc = new Scanner(System.in);
        int n = sc.nextInt(); // number of rows
        int input[][] = new int[n][n];
        for (int i = 0; i < n; i++) {
            for (int j = 0; j <= i; j++)
                input[i][j] = sc.nextInt();
        }
        int answer = maximumPathSumInTriangle(input, n);
        System.out.print(answer);
    }
}
3
1
2 3
5 8 1
12
Complexity Analysis
Time Complexity
O(N^2), as we move across each row and each column, visiting every cell in the process. Since there are O(N^2) cells in the triangle and the DP transition takes only O(1) per cell, the time complexity is polynomial.
Space Complexity
O(N^2) since we created a 2D DP array. Thus the space complexity is also polynomial.
|
Mono for Android
Learn how to use the Fragments API in Mono for Android to create an application UI for multiple screens, such as a handset and tablet.
With the release of Android 3.0, Google added support for larger displays and attention-grabbing UI designs and layouts. On a tablet screen, UI components can present more information at once. How does Android do this? It has a technology called Fragments, and I'll look at its implementation in the currently shipping operating system, Android 4. (Let's get past all the jokes about Android and fragmentation on its device platform.)
What's a Fragment? Most people are familiar with an Activity, which is a piece of an application that provides a screen full of information to a user. A Fragment represents a portion of an Activity’s screen. Multiple fragments can be combined into a single activity. In addition, a Fragment is modular and may be reused across multiple Activities.
Figure 1 shows an Activity that contains the entire screen of information, and two Fragments. The Fragment on the left is a master view of data. When the user selects an item from the ListView, the detail view on the right is filled with information.
Fragment Types
Developers can opt to use several types of fragments within their applications. The Fragment class is the base class for the other fragment types. It's by deriving from this class that a piece of an application’s user interface or behavior can be placed within an Activity.
A DialogFragment is used to display a floating dialog. This class is an alternative to the dialog helper methods in the Activity class.
A ListFragment displays a list of items much like a ListView, and the items are managed by an adapter. Conceptually, this is similar to a ListActivity.
A PreferenceFragment displays a hierarchy of preference objects. This is similar to a PreferenceActivity.
A WebViewFragment contains a WebView and displays web content within it.
Fragments can be set up so that they are usable from both a handset and a tablet. In this example, when a user touches a Star Trek character in the ListView, the detail Fragment gets filled with an image of the character along with a quote from that character. Both the master and the detail Fragments are shown to the user at the same time on a tablet. On a handset, the ListView is shown within one screen of information. When the user selects a Star Trek character, the detail view is shown with the image and the quote.
Let’s look at the application. Figure 2 shows the application in a tablet emulator. The Star Trek character list is on the left-hand side, and the character image and quotes are on the right-hand side.
When run on an Android 4.03 (codenamed Ice Cream Sandwich) based handset, the result looks like Figure 3 and Figure 4. Figure 3 shows the master view of Star Trek characters.
When a character is selected in the ListView, a picture and a quote from that character are shown in the detail view, seen in Figure 4.
The Code
Let’s look at the code to make this happen. The first code that's run is the Activity that performs the initial load.
[Activity(Label = "ICS Fragment Sample", MainLauncher = true, Icon = "@drawable/Enterprise")]
public class Activity1 : Activity
{
protected override void OnCreate(Bundle bundle)
{
base.OnCreate(bundle);
SetContentView(Resource.Layout.Main);
}
}
In this code, the Resource.Layout.Main is loaded via the call to SetContentView. The next step is to look at the Fragments that are loaded. In the project, there are two folders. One is the layout folder, and the other is the layout-large folder as shown in Figure 5.
The layouts for a normal-sized screen are in the Layout directory, and the layouts for a large screen are in the Layout-Large directory; each contains its own Main.axml file. Which file is loaded depends on the screen size of the device.
Let’s look at the layout file for a tablet display. In this code, shown in Listing 1, a Fragment within the application is loaded. This Fragment is named the TitlesFragment. Notice that the namespace of the application is included, so the Fragment is referenced by its fully qualified name. My personal experience has been that the fully qualified entry is needed, or it won’t work; you may have better luck another way.
The next step is to look at the content of the TitlesFragment class in Listing 2. Several things are worth noting about the TitlesFragment class:
Now, let’s take a look at the Content of the DetailsFragment class, shown in Listing 3. A few things are worth noting in the code:
Fragment APIs
The previous section explained how to use a Fragment as part of the xml content contained within an .axml file in the Resource Layout directories. Fragments can also be loaded programmatically. This is done through the FragmentManager, which is found in Android 3.0 and later. It's available via an Activity’s .FragmentManager property.
In the case of the sample code, the application adds and removes Fragments in an Activity through FragmentTransactions obtained from the FragmentManager.
This operation isn’t all that can be done with a Fragment. Fragments can be located by calling the FindFragmentById() or FindFragmentByTag() methods of the FragmentManager.
Android is very stack-based regarding Activities, and Fragments can be a part of this as well. The FragmentTransaction object has an .AddToBackStack() method to add the existing transaction state to the back stack. The FragmentManager’s .PopBackStack() method allows an application to programmatically simulate the user pressing the back button, so that the previous state can be restored.
A Fragment can access the Activity that it's within by calling the Fragment's .Activity property. This is convenient when performing background operations that must then be synchronized with the main UI thread via the Activity's RunOnUiThread(…) method.
Fragments can handle their own lifecycle. This includes events to handle Resume, Pause, Start, Stop and several points in the destruction of the Fragment. Most of these events flow down from the parent Activity.
This is only an introduction to the Fragments API in Mono for Android. As I've shown, you can use Fragments to create an application UI for both a handset and a tablet. There's more to come on this subject very soon in this column.
Source: http://visualstudiomagazine.com/articles/2012/12/13/android-4-and-fragments.aspx
EJB - EJB
clients communicating with your server, there will not always be 100 session beans servicing the clients; instead the container instantiates a reasonable number of bean instances, say 25, to service the 100 clients simultaneously.
Now
EJB - EJB
.
*) Stateful session beans can be passivated, and the container reuses bean instances for many clients.
Stateless... to :
Thanks
use of ejb - EJB
use of ejb what is the use of EJB?? Hi friend,
Some points to remember when using EJB:
* Applications developed using enterprise beans can deal with a variety of clients, simply by writing a
few lines
difference - EJB
for many clients.
*)It maintain the state of the client with server Interfaces
EJB Interfaces
An interface in Java is a group of related methods with empty bodies.
EJB have generally 4... for creating Remote interface.
package ejb;
import
java bean - EJB
for the programmer, whereas in EJB these middleware services are built into the programs... accessed by remote clients, whereas business logic kept in EJB can be accessed by remote and local clients........... Hi Friend,
Difference
EJB - Java Interview Questions
state should not be retained, i.e. the EJB container
destroys a stateless... beans can support multiple clients, so they provide better scalability for
applications that require large numbers of clients.
# Stateful Session Beans 3.0
EJB 3.0
... are the Java EE server side components that run inside the ejb container... with the client, maintaining session for the clients retrieving and holding data
Chapter 1. EJB Overview
Chapter 1. EJB OverviewPrev Part I. Exam Objectives Next
Chapter 1. EJB...,
for version 2.0 of the EJB specification.
CharacteristicsEnterprise
What is EJB 3.0?
What is EJB 3.0
This lesson introduces you to EJB 3.0, which is being used extensively for
the development of robust, scalable and secure applications.
What is EJB
EJB Project
EJB Project How to create Enterprise Application project with EJB module in Eclipse Galileo?
Please visit the following link:
Chapter 5. EJB transactions
Chapter 5. EJB transactionsPrev Part I. Exam Objectives Next
Chapter 5. EJB...
EJB transactions are a set of concepts and a set.
EJB, Enterprise java bean- Why EJB (Enterprise Java Beans)?
Why EJB (Enterprise Java Beans)?
Enterprise Java Beans or EJB..., Enterprise
Edition (J2EE) platform. EJB technology enables rapid and simplified
Building a Simple EJB Application Tutorial
Building a Simple EJB Application - A Tutorial
... EJB and a client web application
using eclipse IDE along with Lomboz
plug... introduction to EJB development and some of the Web
development tools
Chapter 3. Develop clients that access the enterprise components
Chapter 3. Develop clients that access the enterprise...;
Chapter 3. Develop clients that access the enterprise components
Implement Java clients calling EJBs
Application client projects
Stateful Session Beans Example, EJB Tutorial
types of clients: an application client and a web
client.
here are following... to be destoryed by EJB
container before removing the enterprise bean
Source: http://www.roseindia.net/tutorialhelp/comment/92065
It's taken me forever to get back to this, but let me say first, wow, loads
of comments!
> David VomLehn wrote:
>
> Adds the C and header source files for the Cisco PowerTV platform.
>
> Out of curiosity, is there more information about this platform
> somewhere, like what the bootloader is, what the end user availability
> is, how to bootstrap it etc? The kernel support files on their own is
> only one part of the puzzle... If it turns out that this board is not
> something that you can buy/find/steal, then that may influence
> its chances of being merged too (see the voyageur thread between
> Ingo and James recently, as an example). That, and what the long
> term maintenance you foresee for the codebase may also have
> an impact on whether it makes sense to merge it or not.
This is for a series of cable settop boxes, which is one of those much
derided locked-down platforms. So, they are only sold in large quantities
directly to cable companies. On the other hand, there will be hundreds of
thousands of them, so they will be maintained for quite some time to come.
> Anyway, hope you find the feedback useful.
Yes, lots of useful things here. See replies in-line.
> Paul.
> ---
> arch/mips/include/asm/mach-powertv/asic.h | 124 +
> arch/mips/include/asm/mach-powertv/asic_regs.h | 136 +
> arch/mips/include/asm/mach-powertv/dma-coherence.h | 123 +
> arch/mips/include/asm/mach-powertv/interrupts.h | 234 ++
> arch/mips/include/asm/mach-powertv/war.h | 27 +
> arch/mips/powertv/Kconfig | 17 +
> arch/mips/powertv/Makefile | 37 +
> arch/mips/powertv/asic/Kconfig | 24 +
> arch/mips/powertv/asic/Makefile | 24 +
> arch/mips/powertv/asic/asic_devices.c | 2902
> +++++++++++++++++++
> arch/mips/powertv/asic/asic_int.c | 146 +
> arch/mips/powertv/asic/irq_asic.c | 115 +
> arch/mips/powertv/cevt-powertv.c | 247 ++
> arch/mips/powertv/cmdline.c | 51 +
> arch/mips/powertv/csrc-powertv.c | 84 +
> arch/mips/powertv/init.c | 127 +
> arch/mips/powertv/init.h | 10 +
> arch/mips/powertv/memory.c | 183 ++
> arch/mips/powertv/pci/Makefile | 26 +
> arch/mips/powertv/pci/fixup-powertv.c | 14 +
> arch/mips/powertv/pci/pci.c | 35 +
> arch/mips/powertv/pci/pciemod.c | 2921
> ++++++++++++++++++++
> arch/mips/powertv/pci/pcieregs.h | 333 +++
> arch/mips/powertv/pci/powertv-pci.h | 12 +
> arch/mips/powertv/powertv-clock.h | 10 +
> arch/mips/powertv/powertv_setup.c | 351 +++
> arch/mips/powertv/reset.c | 69 +
> arch/mips/powertv/reset.h | 8 +
> arch/mips/powertv/time.c | 47 +
>
> I don't know enough about this platform to know what are critical
> and non critical support items, but a general suggestion is to start
> with just the core board support plus serial console and ethernet
> (i.e. so you can boot and NFS root) and use that as your baseline
> commit, and then look to adding in additional features in separate
> commits (i.e. say the asic and pcie support in this case?)
I'm going to drop PCI support for this patchset since it's currently unused.
The asic stuff is important for setting up the system but I'm going to
break asic_devices.c up. I've been wanting to do this anyway, now I have
a reason to do it immediately.
> It helps to make things somewhat more digestible chunks, as when
> things come out in one giant chunk you probably loose out on
> review feedback (i.e. people get scared off).
I agree. Reviewing is hard enough as it is.
> diff --git a/arch/mips/include/asm/mach-powertv/asic.h
> b/arch/mips/include/asm/mach-powertv/asic.h
> new file mode 100644
> index 0000000..4240e4e
> --- /dev/null
> +++ b/arch/mips/include/asm/mach-powertv/asic.h
> @@ -0,0 +1,124 @@
> +#ifndef _ASM_ASIC_H
> +#define _ASM_ASIC_H
>
> _ASM_MACH_POWERTV_ASIC_H
>
> at least that seems to be the norm used by other platforms.
Sounds good.
> +#include <asm/mach-powertv/asic_regs.h>
> +
> +#define DVR_CAPABLE (1<<0)
> +#define PCIE_CAPABLE (1<<1)
> +#define FFS_CAPABLE (1<<2)
> +#define DISPLAY_CAPABLE (1<<3)
>
> align with tabs, not spaces?
Definitely. Much of this stuff was originally written to a very different
coding standard which uses spaces for lining things up. It's pretty hard to
do this automatically, so I've missed some things.
> +/* Platform Family types
> + * For compitability, the new value must be added in the end */
> +enum tFamilyType {
>
> Is the leading "t" prefix relevant? You'll probably get less grief
> from kernel community folks if you remove it.
Yeah, this is that other coding standard thing. Consider it gone.
> +extern enum tAsicType platform_get_asic(void);
> +extern enum tFamilyType platform_get_family(void);
>
> Aside from the magic "t" again, I think it would really be a win
> for you in terms of acceptance if you could get rid of all the
> CamelCase from the variable names. When people see that as
> they are reviewing, it puts them off on the wrong foot immediately.
It's outta here. This you can actually automate pretty well.
> +extern unsigned long gPhysToBusOffset;
>
> Leading "g" here like the "t" above?
Gone.
> +#ifdef CONFIG_HIGHMEM
> +/*
> + * TODO: We will use the hard code for conversion between physical and
> + * bus until the bootloader releases their device tree to us.
> + */
> +#define phys_to_bus(x) (((x) < 0x20000000) ? ((x) + gPhysToBusOffset) :
> (x))
> +#define bus_to_phys(x) (((x) < 0x60000000) ? ((x) - gPhysToBusOffset) :
> (x))
> +#else
> +#define phys_to_bus(x) ((x) + gPhysToBusOffset)
> +#define bus_to_phys(x) ((x) - gPhysToBusOffset)
> +#endif
> +
> +/*
> + * Determine whether the address we are given is for an ASIC device
> + * Params: addr Address to check
> + * Returns: Zero if the address is not for ASIC devices, non-zero
> + * if it is.
> + */
> +static inline int asic_is_device_addr(phys_t addr)
> +{
> + return !((phys_t)addr & (phys_t) ~0x1fffffffULL);
> +}
> +
> +/*
> + * Determine whether the address we are given is external RAM mappable
> + * into KSEG1.
> + * Params: addr Address to check
> + * Returns: Zero if the address is not for external RAM and
> + */
> +static inline int asic_is_lowmem_ram_addr(phys_t addr)
> +{
> + /*
> + * The RAM always starts at the following address in the
> processor's
> + * physical address space
> + */
> + static const phys_t phys_ram_base = 0x10000000;
>
> Could probably bury all the magic constants for the board in one
> header file. POWERTV_RAM_BASE or similar.
Agreed. This should help make the code more understandable.
> +#define ASIC_RESOURCE_GET_EXISTS 1
>
> Defined but never used?
This is used by loadable kernel modules to determine whether they can use
asic_resource_get(). It allows the same source to be used for 2.6.14,
which doesn't have this function, and 2.6.24, which does.
> +enum eSys_RebootType {
> + kSys_UnknownReboot = 0x00, /* Unknown reboot cause */
> + kSys_DavicChange = 0x01, /* Reboot due to change
> in DAVIC
...
> + * drivers may report as
> + * userReboot. */
> + kSys_WatchdogInterrupt = 0x0A /* Pre-watchdog interrupt
> */
> +};
>
> The "e" prefix and the "k" prefix (and the CamelCase) don't help
> the readability here either.
Gone.
> diff --git a/arch/mips/include/asm/mach-powertv/asic_regs.h
> b/arch/mips/include/asm/mach-powertv/asic_regs.h
> new file mode 100644
> index 0000000..8bdbec2
> --- /dev/null
> +++ b/arch/mips/include/asm/mach-powertv/asic_regs.h
> @@ -0,0 +1,136 @@
> +
> +#ifndef __ASIC_H_
> +#define __ASIC_H_
>
> _MACH_POWERTV_ASIC_REGS_H or similar
Yup.
> +/* ASIC register enumeration */
> +struct tRegisterMap { /* ==ZEUS== ==CALLIOPE== */
> + int EIC_SLOW0_STRT_ADD;
...
> + int UART1_DATA; /* 0x281818 0xA01818 */
> + int UART1_STATUS; /* 0x28181C 0xA0181C */
>
> This is kind of worse than the normal CamelCase; in that when I
> see "UART1_STATUS" written in code somewhere, I'm going to
> assume it is a macro and not a variable.
Makes sense.
> Also, since this is a register map, it would probably be better to
> explicitly specify/use the actual register size u32 or whatever
> instead of "int'.
Good suggestion.
> + int MIPS_PLL_SETUP; /* 0x1a0000 0x980000 */
> + int USB_FS; /* 0x1a0018 0x980030 */
> + int Test_Bus; /* 0x1a0238 0x9800CC */
> + int USB2_OHCI_IntMask; /* 0x1e000c 0x9A000c */
> + int USB2_Strap; /* 0x1e0014 0x9A0014 */
>
> Whitespace in this section seems mangled; best to have it all use
> hard tabs probably.
Actually, those comments are not amended for new versions of the box
and the values are assigned elsewhere, so it really doesn't make sense to
keep them. Out-of-date documentation can be worse than none at all.
> +extern enum tAsicType gAsic;
> +extern const struct tRegisterMap *gRegisterMap;
> +extern unsigned long gAsicPhyBase; /* Physical address of ASIC
> */
> +extern unsigned long pAsicBase; /* Virtual address of ASIC */
>
> Again, cosmetic, but it makes for better readability if you choose a style
> and stick with it. I'd not put tabs between the types and the variable
> names, but tab aligning the comments is common.
This is within the bounds of the kernel coding style, but I agree that tab
aligning the comments is helpful.
> +#define asic_reg_offset(x) (gRegisterMap->x)
> +#define asic_reg_phys_addr(x) (gAsicPhyBase + asic_reg_offset(x))
> +#define asic_reg_addr(x) ((unsigned int *) (pAsicBase +
> asic_reg_offset(x)))
>
> Semi random whitespace usage in these three lines too.
Yup. Better if aligned.
> +#define asic_read(x) _asic_read(asic_reg_addr(x))
> +#define asic_write(v, x) _asic_write(v, asic_reg_addr(x))
>
> Same for macros, I think you'll find mostly no tabs between the
> define and the name in existing kernel code, if you are aiming
> for consistency
Old habits die hard, especially when it looks the same whether you use a tab
or a space. But it should be consistent and everyone uses a space.
> +
> +static inline unsigned int _asic_read(unsigned int *addr)
> +{
> + return readl(addr);
> +}
> +
> +static inline void _asic_write(unsigned int value, unsigned int *addr)
> +{
> + writel(value, addr);
> +}
>
> if the asic_read and write ops will never need some sort of wrapper, then
>
> why the asic_read --> _asic_read --> readl indirection? Sure, the compiler
>
> will probably clean it up in the generated code, but if it isn't used....
There was, once upon a time, a reason for this, but now it's cleaner as you
suggest.
> diff --git a/arch/mips/include/asm/mach-powertv/dma-coherence.h
> b/arch/mips/include/asm/mach-powertv/dma-coherence.h
> new file mode 100644
> index 0000000..b39f945
> --- /dev/null
> +++ b/arch/mips/include/asm/mach-powertv/dma-coherence.h
> @@ -0,0 +1,123 @@
> +/*
> + * This file is subject to the terms and conditions of the GNU General
> Public
> + * License. See the file "COPYING" in the main directory of this archive
> + * for more details.
>
> You probably want to stick your name/copyright in here, and then...
>
>
> + *
> + * Version from mach-generic modified to support PowerTV port
>
> ..a comment here like "Original generic version was"
>
>
> + *
>
> ...otherwise it kind of appears like Ralf made *this file* back in
> 2006.
Legally, I think Ralf's copyright even extends to the parts of this file
that have not been changed, so it applies to both files. Only the new
stuff can be copyright by Cisco, so I need to leave Ralf's copyright
notice in. But, IANAL.
> +#ifndef __ASM_MACH_GENERIC_DMA_COHERENCE_H
> +#define __ASM_MACH_GENERIC_DMA_COHERENCE_H
>
> not MACH_GENERIC anymore, but MACH_POWERTV
Yup.
> +struct device;
>
> Probably should just go and suck in the proper include header for
> struct device rather than do that.
This file was just copied from mach-generic/dma-coherence.h and I never noticed
this. Perhaps we should change the mach-generic version, too, but that would
be another patch, another day.
> +static inline bool is_kseg2(void *addr)
> +{
> + return (unsigned long)addr >= KSEG2;
> +}
> +
> +static inline unsigned long virt_to_phys_from_pte(void *addr)
> +{
> + pgd_t *pgd;
> + pud_t *pud;
> + pmd_t *pmd;
> + pte_t *ptep, pte;
> +
> + unsigned long virt_addr = (unsigned long)addr;
> + unsigned long phys_addr = 0UL;
> +
> + /* get the page global directory. */
> + pgd = pgd_offset_k(virt_addr);
> +
> + if (!pgd_none(*pgd)) {
> + /* get the page upper directory */
> + pud = pud_offset(pgd, virt_addr);
> + if (!pud_none(*pud)) {
> + /* get the page middle directory */
> + pmd = pmd_offset(pud, virt_addr);
> + if (!pmd_none(*pmd)) {
> + /* get a pointer to the page table entry
> */
> + ptep = pte_offset(pmd, virt_addr);
> + pte = *ptep;
> + /* check for a valid page */
> + if (pte_present(pte)) {
> + /* get the physical address the
> page is
> + * refering to */
> + phys_addr = (unsigned long)
> +
> page_to_phys(pte_page(pte));
> + /* add the offset within the page
> */
> + phys_addr |= (virt_addr &
> ~PAGE_MASK);
> + }
> + }
> + }
> + }
>
> I'm not sure why something like the above (which doesn't really
> appear board specific in any way) needs to be in a board specific
> file. Maybe some comments about what is going on and what
> the board requirements are would help here. If nothing else,
> it might help people better in the know to suggest an alternate
> solution using existing kernel functionality.
This is definitely not board-specific and I would be quite interested in
knowing if there is a better way to do this.
> +static inline dma_addr_t plat_map_dma_mem(struct device *dev, void *addr,
> + size_t size)
> +{
> + if (is_kseg2(addr))
> + return phys_to_bus(virt_to_phys_from_pte(addr));
> + else
> + return phys_to_bus(virt_to_phys(addr));
> +}
This is where the function in question is called. We have a virtual address
in high memory and need to get the dma_addr_t for it. Is there any other
way than walking the page table?
> +#endif /* __ASM_MACH_GENERIC_DMA_COHERENCE_H */
>
> Fix this one too (!GENERIC anymore)
Yup.
> diff --git a/arch/mips/include/asm/mach-powertv/interrupts.h
> b/arch/mips/include/asm/mach-powertv/interrupts.h
> new file mode 100644
> index 0000000..c37df64
> --- /dev/null
> +++ b/arch/mips/include/asm/mach-powertv/interrupts.h
> @@ -0,0 +1,234 @@
> +#ifndef _INTERRUPTS_H_
> +#define _INTERRUPTS_H_
>
> _MACH_POWERTV_
Yup.
> +/*************************************************************
> + * \brief Defines for all of the interrupt lines
> + *************************************************************/
>
> The /**** may break kerneldoc ; it looks for /** to start embedded self
> documentation; also the \brief or any other similar tagging specific to
> something not used in the base kernel tree would have to go.
I don't know whether /**** will break kerneldoc, but it makes sense to
get rid of it. The doxygen stuff is now converted to standard kernel-doc
annotations.
> +#define kIBase 0
>
> Is this always zero? (I'll probably see myself later in the code). But
> maybe it is cleaner to just list the offsets here, and then add in the
> kibase to the offset in the few places it is used (vs. the hundreds
> of listings of it below). Oh, and if the "k" is a magic prefix with no
> external meaning, it should again go.
I have to say that I have no idea what ibase is about. Sometimes, stuff you
inherit is just odd. And "k" is gone.
> +/*------------- Register: Int_Stat_3 */
> +/* 126 unused (bit 31) */
> +#define kIrq_ASC2Video (kIBase+126) /* ASC 2 Video Interrupt
> */
...
> +#define kIrq_SdDVP2 (kIBase+96) /* SD DVP #2 Interrupt */
> +/*------------- Register: Int_Stat_2 */
> +#define kIrq_HdDVP (kIBase+95) /* HD DVP Interrupt */
...
> +#define kIrq_DTCP (kIBase+86) /* DTCP Interrupt */
> +#define kIrq_PCIExp1 (kIBase+85) /* PCI Express 1
> Interrupt */
> +/* 84 unused (bit 20) */
> +/* 83 unused (bit 19) */
> +/* 82 unused (bit 18) */
>
> I'm somewhat confused by the "bit" references in all the
> unused ones. In fact, if the int_stat registers are all independent,
> then why are they not listed in independent groups of 32 instead
> of one large linearly ascending list?
Tricky hardware folks. The int_stat registers are not independent. We have four
32-bit registers, i.e. 128 bits, for things like interrupt masking, status,
etc. The bit values refer to the bit within a given register. Things are
listed in descending order because they wanted everything to be big-endian.
Thus, the most significant bit of the 128-bit interrupt vector is bit 31 of
the int_stat register at the lowest address.
> +/*------------- Register: Int_Stat_1 */
> +/* 63 unused (bit 31) */
> +/* 62 unused (bit 30) */
> +/* 61 unused (bit 29) */
> +/* 60 unused (bit 28) */
> +/* 59 unused (bit 27) */
> +/* 58 unused (bit 26) */
> +/* 57 unused (bit 25) */
> +/* 56 unused (bit 24) */
>
> Something happened from here down; the whitespace went all to hell.
> I think the variable names became longer than the initial indent...
>
>
> +#define kIrq_BufDMA_Mem2Mem (kIBase+55) /* BufDMA Memory to Memory
> + * Interrupt */
> +#define kIrq_BufDMA_USBTransmit (kIBase+54) /* BufDMA USB
> Transmit
> + * Interrupt */
> +#define kIrq_BufDMA_QPSKPODTransmit (kIBase+53) /* BufDMA
> QPSK/POD
Fixed.
> diff --git a/arch/mips/include/asm/mach-powertv/war.h
> b/arch/mips/include/asm/mach-powertv/war.h
> new file mode 100644
> index 0000000..2f4a155
> --- /dev/null
> +++ b/arch/mips/include/asm/mach-powertv/war.h
> @@ -0,0 +1,27 @@
> +/*
> + * This file is subject to the terms and conditions of the GNU General
> Public
> + * License. See the file "COPYING" in the main directory of this archive
> + * for more details.
> + *
>
> Similar here as above, add your name in and distinguish it from the
> original.
Yup.
> +config BOOTLOADER_FAMILY
> + string "POWERTV Bootloader Family string"
> + default "85"
> + depends on POWERTV && !BOOTLOADER_DRIVER
> + This value should be specified when the bootloader driver is
> disabled
> + and must be exactly two characters long.
>
> Neither of these descriptions lend any useful information to
> an end user who doesn't know intimate details of the platform.
I've added a bit more description, enough so that if you know your model you
should be able to figure out what to put here.
> diff --git a/arch/mips/powertv/Makefile b/arch/mips/powertv/Makefile
> new file mode 100644
> index 0000000..87886e0
> --- /dev/null
> +++ b/arch/mips/powertv/Makefile
> @@ -0,0 +1,37 @@
> +#
> +# Carsten Langgaard, carstenl@mips.com
> +# Copyright (C) 1999,2000 MIPS Technologies, Inc. All rights reserved.
>
> Similar comment here; stick your name in and give reference to
> the original source -- vs. inadvertently making it look like the
> original authors wrote your file.
Yup.
> +# Makefile for the MIPS Malta specific kernel interface routines
> +# under Linux.
>
> Not malta anymore.
True.
> diff --git a/arch/mips/powertv/asic/Kconfig
> b/arch/mips/powertv/asic/Kconfig
> new file mode 100644
> index 0000000..48b85ea
> --- /dev/null
> +++ b/arch/mips/powertv/asic/Kconfig
> @@ -0,0 +1,24 @@
...
> +config MIN_RUNTIME_PMEM
> + bool "Support for minimum PMEM resource"
> + depends on MIN_RUNTIME_RESOURCES
> + Enables support for the preallocated Memory resource.
>
> PMEM might be a bad name choice; I know it has been used for
> "persistent mem" (i.e. preserved across reboots) in the past, etc.
You note later that this is exactly what PMEM is intended to stand for.
> +config MIN_RUNTIME_TFTP
> + bool "Support for minimum TFTP resource"
> + depends on MIN_RUNTIME_RESOURCES
> + Enables support for the preallocated TFTP resource.
>
> Should any of the above items have defaults specified? I wouldn't know
> what to choose, aside from peeking at the defconfig.
Yes, they certainly should. And now they do :-)
> diff --git a/arch/mips/powertv/asic/asic_devices.c
> b/arch/mips/powertv/asic/asic_devices.c
> new file mode 100644
> index 0000000..2e67979
> --- /dev/null
> +++ b/arch/mips/powertv/asic/asic_devices.c
> @@ -0,0 +1,2902 @@
...
> + *
> + * File Name: asic_devices.c
> + *
> + * See Also:
> + *
> + * Project: SA explorer settops
> + *
> + * Compiler:
> + *
> + * Author: Ken Eppinett
> + * David Schleef <ds@schleef.org>
> + *
> + * Description: Defines the platform resources for the SA settop.
> + *
> + * NOTE: The bootloader allocates persistent memory at an address which
> is
> + * 16 MiB below the end of the highest address in KSEG0. All fixed
> + * address memory reservations must avoid this region.
> + *
> +
> *****************************************************************************
> + * History:
> + * Rev Level Date Name ECN# Description
> +
> *----------------------------------------------------------------------------
> + * 1.0 Eppinett initial version
> +
> ****************************************************************************/
>
> You might want to consider stripping some of the cruft from
> these, if you have the flexibility to do so. Things like the
> content free changelog above, or the redundant listing of the
> file name, and the empty compiler tag.
Yup.
>
> +/******************************************************************************
> + * Forward Prototypes
> +
> *****************************************************************************/
> +static void pmem_setup_resource(void);
> +
>
> +/******************************************************************************
> + * Global Variables
> +
> *****************************************************************************/
> +enum tAsicType gAsic;
>
> The big banner messages for proto and global don't really add
> any value -- if you don't need them for consistency with some
> internal release, you could consider flushing them.
They are from the other coding style I referred to earlier. I don't see that
they actually add value; they've been made less obtrusive.
> +unsigned int gPlatformFeatures;
> +unsigned int gPlatformFamily;
> +const struct tRegisterMap *gRegisterMap;
> +EXPORT_SYMBOL(gRegisterMap); /* Exported for testing */
> +unsigned long gAsicPhyBase;
> +unsigned long pAsicBase;
> +EXPORT_SYMBOL(pAsicBase); /* Exported for testing */
> +struct resource *gpResources;
> +static bool usb_configured;
>
> Similar comments, whitespace, and one letter mystery prefixes.
Yup.
> +/*
> + * Don't recommend to use it directly, it is usually used by kernel
> internally.
> + * Portable code should be using interfaces such as ioremp,
> dma_map_single,
> etc.
> + */
> +unsigned long gPhysToBusOffset;
> +EXPORT_SYMBOL(gPhysToBusOffset);
>
> If you wipe this out completely and compile, are there any offenders left
> that can be easily swung over, thus allowing removal?
I'm not completely sure what you're saying, but I think you are thinking this
symbol could be eliminated because it's not used much. The symbol is also used
by loadable drivers, so it's not possible to do away with it.
> +static const struct tRegisterMap zeus_register_map = {
> + .EIC_SLOW0_STRT_ADD = 0x000000,
> + .EIC_CFG_BITS = 0x000038,
> + .EIC_READY_STATUS = 0x00004c,
> +
> + .CHIPVER3 = 0x280800,
> + .CHIPVER2 = 0x280804,
> + .CHIPVER1 = 0x280808,
> + .CHIPVER0 = 0x28080c,
> +
> + /* The registers of IRBlaster */
> + .UART1_INTSTAT = 0x281800,
> + .UART1_INTEN = 0x281804,
> + .UART1_CONFIG1 = 0x281808,
> + .UART1_CONFIG2 = 0x28180C,
> + .UART1_DIVISORHI = 0x281810,
> + .UART1_DIVISORLO = 0x281814,
> + .UART1_DATA = 0x281818,
> + .UART1_STATUS = 0x28181C,
>
> These all caps struct fields have to go. Actually I think that you
> might want to look at breaking the registermap struct into smaller
> chunks (i.e. ver chunk, uart chunk, int chunk, usb chunk). That,
> and try and reuse what you can across boards by specifying a base
> address and then add the offsets -- since at a glance, it appears that
> is all that is really changing from one board to the next (the offset).
Would that it were that simple. Offsets from one block of device registers
vary from one revision of the ASIC to the next. However, the offsets within
register blocks for a particular device do remain the same and we are slowly
transitioning away from this style of accessing things to something based
on offsets within structures defined for particular devices. Oh, the all caps
names are gone.
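To make the base-plus-offset idea concrete: the UART1 offsets in the zeus map above already fit the pattern (UART1_DATA = 0x281800 + 0x18). A sketch of that layout, not the actual PowerTV structures:

```c
#include <assert.h>

/* Offsets within a device register block stay fixed; only the block
 * base moves between ASIC revisions. */
struct uart_block {
	unsigned long base;	/* varies per ASIC revision */
};

enum {
	UART_INTSTAT	= 0x00,
	UART_INTEN	= 0x04,
	UART_DATA	= 0x18,
	UART_STATUS	= 0x1c,
};

static unsigned long uart_reg(const struct uart_block *u, unsigned int off)
{
	return u->base + off;
}
```

A per-board table then only needs one base per block instead of one entry per register.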
...
> + .Watchdog = 0x282c30,
> + .Front_Panel = 0x283800,
> +};
> +
> +static const struct tRegisterMap calliope_register_map = {
> + .EIC_SLOW0_STRT_ADD = 0x800000,
> + .EIC_CFG_BITS = 0x800038,
...
> + .Watchdog = 0xA02c30,
> + .Front_Panel = 0x000000, /* -not used- */
> +};
> +
> +static const struct tRegisterMap cronus_register_map = {
> + .EIC_SLOW0_STRT_ADD = 0x000000,
> + .EIC_CFG_BITS = 0x000038,
...
> + .Watchdog = 0x2A2C30,
> + .Front_Panel = 0x2A3800,
> +};
>
> Assuming that you can't collapse all of the above by some sort of
> sharing and listing of a board specific offset, you might instead
> consider adding <boardname>.h which has all the settings instead
> of having them cluttering up the main source file.
I've broken these into independent files in a subdirectory.
>
> +/******************************************************************************
> + * DVR_CAPABLE RESOURCES
> + *****************************************************************************/
> +struct resource dvr_zeus_resources[] =
> +{
>
>
> [...]
>
>
> +
>
> +/******************************************************************************
> + * NON_DVR_CAPABLE ZEUS RESOURCES
> + *****************************************************************************/
> +struct resource non_dvr_zeus_resources[] =
> +{
> +
> /**********************************************************************
> + * VIDEO1 / LX1
> + *********************************************************************/
> + {
> + .name = "ST231aImage", /* Delta-Mu 1 image and ram */
> + .start = 0x20000000,
> + .end = 0x201FFFFF, /* 2MiB */
> + .flags = IORESOURCE_IO,
> + },
>
>
> [...]
>
> All these resource items might also be good candidates to shuffle off
> to a board specific header too. That would also lend itself better to
> being able to select/enable just one of the board variants should you
> decide to go that route someday.
These, too, have been broken into independent files in a subdirectory. I've
been wanting to do this for a long time and just needed an excuse.
> [...same with the ~1000 lines that were here...]
>
> Actually these resources might be good candidates in the context of
> my earlier comment, about keeping it simple for the initial commit,
> i.e. basic serial and ethernet; leave out all these for the initial board
> commits.
Actually, this patchset is a much simplified version of what we are using. My
goal is to transition everything I can from our "hidden" source tree to
the mainline kernel. To get to this point, I've been able to excise large
chunks, but doing much more munging around with this will make it much
harder to merge new code in because the changes will be a few lines here and
a few lines there.
> +/*
> + *
> + * USB Host Resource Definition
> + *
> + */
...
> +static struct platform_device *platform_devices[] = {
> + &ehci_device,
> + &ohci_device,
>
> The USB support is probably something you can keep within the
> core set of commits, but it could still be a separate commit
> within that group if it helps scale down the size of the patch
> chunks.
We use USB for the console and for the network device. It has to be a part
of the base patchset.
> +static void __init fs_update(int pe, int md, int sdiv, int disable_div_by_3)
> +{
> + int en_prg, byp, pwr, nsb, val;
> + int sout;
> +
> + sout = 1;
> + en_prg = 1;
> + byp = 0;
> + nsb = 1;
> + pwr = 1;
> +
> + val = ((sdiv << 29) | (md << 24) | (pe<<8) | (sout<<3) | (byp<<2) |
> + (nsb<<1) | (disable_div_by_3<<5));
>
> Nobody has any hope of knowing what the above does, or if
> there is a bug in it. A platform that is unmaintainable is less
> likely to be merged.
Honestly, it's magic, taken from the hardware documentation. The names
correspond to the names from that documentation and the computation comes
from there, too. I'm not quite sure what to add. A comment like RTFHM isn't
going to help.
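One low-cost way to make such magic reviewable is to name the shift positions, even when they come straight from the hardware manual. A sketch using the field positions from the snippet above (the macro names are assumptions mirroring the local variable names):

```c
#include <assert.h>

/* Hypothetical field layout of the frequency-synthesizer word; the
 * positions are the ones used in fs_update() above, the names follow
 * its local variables. */
#define FS_SDIV_SHIFT	29
#define FS_MD_SHIFT	24
#define FS_PE_SHIFT	8
#define FS_DIV3_SHIFT	5
#define FS_SOUT_SHIFT	3
#define FS_BYP_SHIFT	2
#define FS_NSB_SHIFT	1

static unsigned int fs_pack(int pe, int md, int sdiv, int disable_div_by_3)
{
	const int sout = 1, byp = 0, nsb = 1;

	/* en_prg is written separately in the original code, so it is
	 * not part of this word. */
	return (sdiv << FS_SDIV_SHIFT) | (md << FS_MD_SHIFT) |
	       (pe << FS_PE_SHIFT) | (sout << FS_SOUT_SHIFT) |
	       (byp << FS_BYP_SHIFT) | (nsb << FS_NSB_SHIFT) |
	       (disable_div_by_3 << FS_DIV3_SHIFT);
}
```

The packed word is unchanged; the layout just becomes greppable and checkable against the manual.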
> +/*
> + * \brief platform_get_family() determine major platform family type.
> + *
> + * \param none
> + *
> + * \return family type; -1 if none
>
> Backslash tags would have to go.
Gone.
> +enum tFamilyType platform_get_family(void)
> +{
> +#define BOOTLDRFAMILY(byte1, byte0) (((byte1) << 8) | (byte0))
>
> Group all the defines at the top just after the header includes?
Done.
>
>
> +
> + unsigned short bootldrFamily;
> + static enum tFamilyType family = -1;
> + static int firstTime = 1;
> +
> + if (firstTime) {
> + firstTime = 0;
> +
> +#ifdef CONFIG_BOOTLOADER_DRIVER
> + bootldrFamily = (unsigned short) kbldr_GetSWFamily();
> +#else
> +#if defined(CONFIG_BOOTLOADER_FAMILY)
> + bootldrFamily = (unsigned short) BOOTLDRFAMILY(
>
> I'm not sure if checkpatch.pl warns about spaces between casts and
> the names they operate on.
Checkpatch didn't complain about this and I don't know of a particular
convention. Usage seems to vary in the kernel code.
> +#undef BOOTLDRFAMILY
>
> No undef needed.
Gone.
> +/*
> + * \brief platform_get_asic() determine the ASIC type.
> + *
> + * \param none
> + *
> + * \return ASIC type; ASIC_UNKNOWN if none
>
Gone.
> + * \brief platform_configure_usb() usb configuration based on platform type.
> + *
> + * \param int divide_by_3 divide clock setting by 3
> + *
> + * \return none
>
> More backslash tags. I'll stop flagging them from here on, you get
> the idea by now, I'm sure.
Sure do.
> +static void platform_configure_usb(void)
> +{
> + int divide_by_3;
>
> A little comment on what this magic constant controls would
> be a good idea.
It's more hardware magic.
> +static int __init platform_devices_init(void)
> +{
> + pr_crit("%s: ----- Initializing USB resources -----\n", __func__);
>
> Presumably pr_crit maps onto KERN_CRIT -- but message is more
> informational than critical.
Yes, pr_crit() is in linux/kernel.h. But your point is well taken. This is
more like a KERN_NOTICE or KERN_INFO level of message.
> + * PERSISTENT MEMORY (PMEM) CONFIGURATION
>
> OK, so PMEM does mean persistent. I forget what it said that
> the "P" was for above...
Yup.
> diff --git a/arch/mips/powertv/asic/asic_int.c
> b/arch/mips/powertv/asic/asic_int.c
> new file mode 100644
> index 0000000..94b6ca9
> --- /dev/null
> +++ b/arch/mips/powertv/asic/asic_int.c
> @@ -0,0 +1,146 @@
> +/*
> + * Carsten Langgaard, carstenl@mips.com
> + * Copyright (C) 2000, 2001, 2004 MIPS Technologies, Inc.
>
> I'm guessing the attribution here also needs a cleanup.
Yes.
> + * Routines for generic manipulation of the interrupts found on the MIPS
> + * Malta board.
> + * The interrupt controller is located in the South Bridge a PIIX4 device
> + * with two internal 82C95 interrupt controllers.
>
> Chances are that this Malta description has no bearing whatsoever
> anymore either.
That would be correct, sir!
> +static DEFINE_SPINLOCK(mips_irq_lock);
>
> asic_irq_lock?
Sure.
> +/*
> + * Version of ffs that only looks at bits 12..15.
> + */
> +static inline unsigned int irq_ffs(unsigned int pending)
> +{
> +#if defined(CONFIG_CPU_MIPS32) || defined(CONFIG_CPU_MIPS64)
>
> Won't the CPU_MIPS32 always be true, hence making the else
> clause just useless dead code?
True.
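For anyone wondering what an ffs restricted to bits 12..15 looks like without the CPU-specific path, here is a portable sketch; the in-tree MIPS32 version presumably uses the clz instruction instead, and the return convention assumed here is the bit index, with 0 meaning no bit in range is set:

```c
#include <assert.h>

/* Find the lowest set bit among bits 12..15 of pending; 0 if none. */
static inline unsigned int irq_ffs_sketch(unsigned int pending)
{
	unsigned int bit;

	for (bit = 12; bit <= 15; bit++)
		if (pending & (1u << bit))
			return bit;
	return 0;
}
```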
> diff --git a/arch/mips/powertv/asic/irq_asic.c
> b/arch/mips/powertv/asic/irq_asic.c
> new file mode 100644
> index 0000000..693abab
> --- /dev/null
> +++ b/arch/mips/powertv/asic/irq_asic.c
> @@ -0,0 +1,115 @@
> +/*
> + *
> + * Modified from arch/mips/kernel/irq-rm7000.c:
>
> I'd just add "which was" to the end of the above line for clarity.
See previous comments.
> diff --git a/arch/mips/powertv/cevt-powertv.c
> b/arch/mips/powertv/cevt-powertv.c
> new file mode 100644
> index 0000000..cecbf40
> --- /dev/null
> +++ b/arch/mips/powertv/cevt-powertv.c
...
> + * The file comes from kernel/cevt-r4k.c
>
> It would be good to know why this file had to fork off its own
> copy (i.e. what was lacking in the original). Some small comment
> here to that effect would be useful.
Good question. But the contractors who did the initial port are gone and I
don't know the answer. I did attempt to drop in cevt-r4k.c but it didn't work.
Unless anyone thinks this is a blocker for acceptance, I'd rather defer this
question until later. If I can use the standard arch/mips/kernel files, I
would rather do so.
> +#ifdef CONFIG_MIPS_MT_SMTC
>
> If you've forked the file to be board specific, and if you are never
> going to select MT_SMTC then you should purge the stuff that you
> are never going to use/enable (here, and in the additional cases
> below).
Yes, this just makes it harder to read and I don't seeing us needing this.
> diff --git a/arch/mips/powertv/cmdline.c b/arch/mips/powertv/cmdline.c
> new file mode 100644
> index 0000000..ee570a1
> --- /dev/null
> +++ b/arch/mips/powertv/cmdline.c
> @@ -0,0 +1,51 @@
> +/*
> + * Carsten Langgaard, carstenl@mips.com
> + * Copyright (C) 1999,2000 MIPS Technologies, Inc. All rights reserved.
>
> Looks like an attribution update is needed here too.
Yup.
> diff --git a/arch/mips/powertv/csrc-powertv.c
> b/arch/mips/powertv/csrc-powertv.c
> new file mode 100644
> index 0000000..c032660
> --- /dev/null
> +++ b/arch/mips/powertv/csrc-powertv.c
...
> + * The file comes from kernel/csrc-r4k.c
>
> Same here, a comment covering the deviation would be good.
Same answer as above: who knows?
> diff --git a/arch/mips/powertv/init.c b/arch/mips/powertv/init.c
> new file mode 100644
> index 0000000..6d7b229
> --- /dev/null
> +++ b/arch/mips/powertv/init.c
> @@ -0,0 +1,127 @@
> +/*
> + * Copyright (C) 1999, 2000, 2004, 2005 MIPS Technologies, Inc.
> + * Authors: Carsten Langgaard <carstenl@mips.com>
> + * Maciej W. Rozycki <macro@mips.com>
>
> Attribution update.
> diff --git a/arch/mips/powertv/init.h b/arch/mips/powertv/init.h
> new file mode 100644
> index 0000000..763472e
> --- /dev/null
> +++ b/arch/mips/powertv/init.h
> @@ -0,0 +1,10 @@
> +/*
> + * Definitions from powertv init.c file
> + */
>
> OK, it is a small file, but you probably want to stick some header
> on it regardless.
It doesn't hurt...
> diff --git a/arch/mips/powertv/memory.c b/arch/mips/powertv/memory.c
> new file mode 100644
> index 0000000..a57972f
> --- /dev/null
> +++ b/arch/mips/powertv/memory.c
> @@ -0,0 +1,183 @@
> +/*
> + * Carsten Langgaard, carstenl@mips.com
> + * Copyright (C) 1999,2000 MIPS Technologies, Inc. All rights reserved.
>
> This too should be updated and reference the original file
> path/location that it was based on, and cover how it was
> changed.
Done.
> +#ifdef CONFIG_HIGHMEM_256_128
>
> Do these options even exist in a Kconfig? I don't recall seeing them.
> Maybe that is part of the TODO as well.
No. That was part of the stripping down of the production kernel to get
something small enough to reasonably be reviewed.
> diff --git a/arch/mips/powertv/pci/Makefile
> b/arch/mips/powertv/pci/Makefile
> new file mode 100644
> index 0000000..7bf9f8c
> --- /dev/null
> +++ b/arch/mips/powertv/pci/Makefile
> @@ -0,0 +1,26 @@
> +#
> *****************************************************************************
> +# Make file for PowerTV PCI driver
>
> minor nit -- Makefile
Sort of. It is a file for the "make" command. Either seems reasonable.
> diff --git a/arch/mips/powertv/pci/pci.c b/arch/mips/powertv/pci/pci.c
> new file mode 100644
> index 0000000..3358b5f
> --- /dev/null
> +++ b/arch/mips/powertv/pci/pci.c
> @@ -0,0 +1,35 @@
> +/*
> + * Copyright (C) 1999, 2000, 2004, 2005 MIPS Technologies, Inc.
> + * Authors: Carsten Langgaard <carstenl@mips.com>
> + * Maciej W. Rozycki <macro@mips.com>
> + *
>
> Update attribution, reference original, mention deltas for this
> specific platform.
Yup.
> +void __init mips_pcibios_init(void)
> +{
> + asic_pcie_init();
> +}
>
> Now that I see the file in its entirety, I wonder if this can't just
> be rolled into another appropriate source file....
As previously mentioned, I'm just skipping PCI as much as possible since we
don't use it.
> diff --git a/arch/mips/powertv/pci/pciemod.c
> b/arch/mips/powertv/pci/pciemod.c
> new file mode 100644
> index 0000000..f152fc5
> --- /dev/null
> +++ b/arch/mips/powertv/pci/pciemod.c
> @@ -0,0 +1,2921 @@
...
> + *
> + * File Name: pciemod.c
> + *
> + * Project: NGP
> + *
> + * Compiler: gnu C (gcc)
> + *
> + * Author(s): Tom Haman
> + *
> + * Description: Routines implementing kernel PCIE Module.
> + *
> + * Documents: PCIE Software Design Document
> + *
> + * NOTES:
> + *
> + *
>
> -----------------------------------------------------------------------------
> + * History:
> + * Rev Level Date Name ECN# Description
> + *
>
> -----------------------------------------------------------------------------
> + * 1.00 03/27/06 Tom Haman --- Initial version for NGP
> (Zeus)
> + *
>
> -----------------------------------------------------------------------------
>
> Project, compiler, and changelog etc can go as per prev.
Yup.
> +
>
> *******************************************************************************
> +
>
> ******************************************************************************/
>
> Drop the giant banners again?
Yeah, they're pretty gross.
> +static struct proc_dir_entry *PCIE_pProc; /* proc directory entry */
> +static int LogLevel = SA_INFO;
> +static struct tPCIERegs *PCIE_RegsPtr;
> +static int PCIE_irqrequest_pcie;
> +static u32 PCIE_initialized;
> +static u32 *timerptr;
> +static spinlock_t PCIE_lock;
>
> whitespace cleanup on the above?
Yup.
> +static void pcie_delay(u32 ms);
> +static int pcie_reset_ethernet(void) ;
> +static void pcie_uSecDelay(u32 us);
>
> I'm not sure why a delay is tied to a bus. A delay is a delay, no?
Uh, when I look at this actual function, it becomes obvious that the author
was, umm, new to Linux. If we ever need PCI, this will need some clean up.
I've stuck an #error in there.
> +static int pcie_WriteProc(struct file *pfile, const char __user *pbuff,
> + unsigned long bytecnt, void *data);
>
> I haven't got to what these proc functions do yet, but just by
> looking at the name I'm thinking seq_file?
Yes.
> +/*---------- Temporary Fixes ------------ */
>
> Some sort of comment about what these workarounds are for would
> not go amiss.
No kiddin'. It looks like some address swizzling, but I don't know the
details. If we ever support PCI...
> +#ifdef PCIE_PLL_FIX
> +
> + static struct tTBRegs *TB_RegsPtr;
> + static unsigned int scr_data_in[kSCR_DEPTH];
>
> Don't indent here.
I won't!
>
> *******************************************************************************
> + * PCIE General Interface
> +
>
> *******************************************************************************
> +
>
> *******************************************************************************
> +
>
> *******************************************************************************
>
> Mega banner. Might as well just adopt proper kerndoc format
> and let it make use of the info below.
Done.
> I've snipped the largish pcie code block from here; having that
> support as a separate commit would really help the digestibility
> of the board support -- even I didn't have it in me to go over it now.
I completely understand. I've incorporated all of your comments so that if
we do decide to support PCI again at a later date, we'll start from a better
place.
> diff --git a/arch/mips/powertv/pci/powertv-pci.h
> b/arch/mips/powertv/pci/powertv-pci.h
> new file mode 100644
> index 0000000..98c087e
> --- /dev/null
> +++ b/arch/mips/powertv/pci/powertv-pci.h
> @@ -0,0 +1,12 @@
> +/*
> + * Local definitions for the powertv PCI code
> + */
> +
> +#ifndef _POWERTV_PCI_H_
> +#define _POWERTV_PCI_H_
>
> Header on the file and no extraneous tabs?
Yup.
> +extern int LogLevel;
>
> Is this a separate loglevel from the normal one used by dmesg?
> If so, then why?
Yes, this is different. I think the one you are thinking of is
console_loglevel. This variable is module-specific and is used to control
the volume of debugging messages.
> +#endif
> diff --git a/arch/mips/powertv/powertv-clock.h
> b/arch/mips/powertv/powertv-clock.h
> new file mode 100644
> index 0000000..6f8c17b
> --- /dev/null
> +++ b/arch/mips/powertv/powertv-clock.h
> +/*
> + * Carsten Langgaard, carstenl@mips.com
> + * Copyright (C) 2000 MIPS Technologies, Inc. All rights reserved.
>
> Attribution updates.
Yup.
> +#ifdef CONFIG_64BIT
> +#warning TODO: 64-bit code needs to be verified
> +#define PTR_LA "dla "
> +#define LONG_S "sd "
> +#define LONG_L "ld "
> +#define PTR_ADDIU "daddiu "
> +#define REG_SIZE "8" /* In bytes */
> +#endif
> +
> +#ifdef CONFIG_32BIT
> +#define PTR_LA "la "
> +#define LONG_S "sw "
> +#define LONG_L "lw "
> +#define PTR_ADDIU "addiu "
> +#define REG_SIZE "4" /* In bytes */
> +#endif
>
> You've got tabs after the defines in the above two blocks.
Fixed.
> +static int panic_handler (struct notifier_block *notifier_block,
>
> Inconsistent coding style --- foo () vs. foo() -- probably will get
> nagged by checkpatch.
This is odd. All this code has been through checkpatch but it didn't complain.
It's fixed, anyway.
> +#ifdef CONFIG_DIAGNOSTICS
> + failure_report((char *) cause_string,
> + have_die_regs ? &die_regs : &my_regs);
> + have_die_regs = false;
> +#else
> + pr_crit("I'm feeling a bit sleepy. hmmmmm... perhaps a nap would... "
> + "zzzz... \n");
>
> You probably want a hint of what the real problem is printed
> when this message goes to the console. Unless a prev. message
> will have printed that already.
This is a panic notifier, which will only be called after panic() has printed
the message. We normally run with CONFIG_DIAGNOSTICS but that was one of the
things that I'll pick up in a following patchset.
> +void platform_random_ether_addr(u8 addr[ETH_ALEN])
> +{
> +#define NUM_RANDOM_BYTES 2
> +#define NON_SCIATL_OUI_BITS 0xc0u
> +#define MAC_ADDR_LOCALLY_MANAGED (1 << 1)
>
> whitespace
Yup.
> +#undef NON_RANDOM_BYTES
> +#undef NON_SCIATL_OUI_BITS
> +#undef MAC_ADDR_LOCALLY_MANAGED
>
> This is the end of a C file, no need for undef
Yup.
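The generic part of what platform_random_ether_addr() must guarantee, beyond the platform-specific OUI masking, is a locally administered unicast address: set bit 1 and clear bit 0 of the first octet. A userspace sketch of just that part:

```c
#include <assert.h>
#include <string.h>

#define ETH_ALEN 6	/* normally from linux/if_ether.h */

/* Turn an arbitrary 6-byte seed into a locally administered,
 * unicast MAC address. */
static void make_locally_administered(unsigned char addr[ETH_ALEN],
				      const unsigned char seed[ETH_ALEN])
{
	memcpy(addr, seed, ETH_ALEN);
	addr[0] |= 0x02;	/* locally administered */
	addr[0] &= ~0x01;	/* unicast, not multicast */
}
```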
> diff --git a/arch/mips/powertv/reset.c b/arch/mips/powertv/reset.c
> new file mode 100644
> index 0000000..9756090
> --- /dev/null
> +++ b/arch/mips/powertv/reset.c
> @@ -0,0 +1,69 @@
> +/*
> + * Carsten Langgaard, carstenl@mips.com
> + * Copyright (C) 1999,2000 MIPS Technologies, Inc. All rights reserved.
>
> Attribution updates?
Yup.
> diff --git a/arch/mips/powertv/reset.h b/arch/mips/powertv/reset.h
> new file mode 100644
> index 0000000..79211ce
> --- /dev/null
> +++ b/arch/mips/powertv/reset.h
> @@ -0,0 +1,8 @@
> +/*
> + * Definitions from powertv reset.c file
>
> GPL header?
It's in there, now.
> +#include <linux/types.h>
> +#include <linux/init.h>
> +#include <linux/kernel_stat.h>
> +#include <linux/sched.h>
> +#include <linux/spinlock.h>
> +#include <linux/interrupt.h>
> +#include <linux/time.h>
> +#include <linux/timex.h>
> +
> +#include <asm/mipsregs.h>
> +#include <asm/mipsmtregs.h>
> +#include <linux/hardirq.h>
> +#include <asm/irq.h>
> +#include <asm/div64.h>
> +#include <linux/cpu.h>
> +#include <linux/time.h>
> +
> +#include <asm/mips-boards/generic.h>
> +#include <asm/mips-boards/prom.h>
> +
> +#include "powertv-clock.h"
...
> A file this simple can't possibly need all those header files
> included. And as such, the original attributions from whatever
> file it was originally cloned from are probably not relevant
> whatsoever (unless this file was actually written for this
> platform back in 1999!)
And it doesn't. I dropped 16 of the original 18.
> Paul.
I don't know if anyone will read down this far, but I wanted to repeat my
appreciation for Paul's comments. This was a lot of work on his part and
my code is the better for it.
David VL
http://www.linux-mips.org/archives/linux-mips/2009-05/msg00485.html
Talk:Key:ref
Contents
- 1 ref & nat_ref
- 2 Multiple tags
- 3 Spaces
- 4 Standard for the US
- 5 Roads signed as leading to a particular ref
- 6 Names and References
- 7 ref for roles
- 8 Spaces in e-roads
- 9 ref for branch or franchise reference
- 10 GNIS id values
- 11 ids from other web services?
- 12 nation wide refs
- 13 reference to an external collection
- 14 On the ground verifiability of ref tags on roads
ref & nat_ref
Does it make sense to have both tags (ref and nat_ref) defined? At the moment I see a lot of use of nat_ref, where it is used for the street reference number of that country, for example in Germany:
- key="name" value="A123"
- key="highway" value="motorway"
- key="oneway" value="true"
- key="nat_ref" value="A123"
So, does it make sense to have both tags, ref and nat_ref?
Or in the case above, should one use "ref" instead of "nat_ref"?
RalfZ 13:08, 19 September 2006 (BST)
Multiple tags
I can't see either way if it's allowed or not to use more than one ref for a way. There are places where two ways run on the same stretch of road for a while - the E6 and E20, for instance, are the same road between Helsingborg and Malmö in Sweden. --KristianThy 13:54, 10 December 2006 (UTC)
Spaces
in Germany, all street references have spaces between the letters and the numbers in the official version -- even the international street references; e.g. the motorway A 61 with the international reference E 31; what about official notation in other countries; the notation private enterprises use in their maps is not meant here--private companies often don't care about these things; there also seem to be dashes as a possibility ... and of course we have some different writing systems in the world - any thoughts about this? -- Schusch 22:40, 5 May 2008 (UTC)
Standard for the US
How should these be used in the US? Should it be a unified scheme with the state abbreviation for state routes (e.g. IN-XX, MI-XX) and I-XX for interstate, US-XX for US highways? Or should we vary by state (Michigan state routes would use M-XX, Interstates would use IH-XX in Texas, many states use something like SR or SH for state routes) Random832 16:33, 18 September 2008 (UTC)
- I follow the scheme outlined in United_States_roads_tagging. So Michigan state route would be ref="US:MI XX" Alexrudd 21:47, 21 December 2008 (UTC)
- That leads to some rather clumsy labeling in the current Mapnik and Osmarender renderers. If these renderers are reconfigured to display this information more elegantly, maybe more people would follow the "US:XX ##" convention. Vid the Kid 16:56, 24 December 2008 (UTC)
For that matter, how should county road numbers be marked? There are some places where the county road number is displayed just as prominently as a state or US route marker. The "conventional" approach of "name=CR ##" works in places where that really is the road's only name, but what about counties that have actual names for roads, in addition to well-signed county road numbers? Vid the Kid 16:56, 24 December 2008 (UTC)
- If there's no ref=, there's no ref=. There's a few state highways in Oregon that don't have a name or a ref! Paul Johnson 20:38, 31 May 2009 (UTC)
Roads signed as leading to a particular ref
Probably in other countries as well, some major but unnumbered roads have signposts guiding to a numbered road, i.e. the other road's number surrounded by a dashed line vs. a solid line. I've now tagged these as leads_to_ref=*, in part just to discourage others from tagging them with a wrong ref=*. Mostly this happens where an arterial road leading out of a city center isn't a numbered road for the first kilometers. Any better suggestions? Alv 10:18, 20 September 2008 (UTC)
- I wouldn't think this is appropriate information for OSM. It's not really needed for routing applications, and this kind of information is typically not explicitly shown on maps, as it can be pretty easily seen from the map that such roads lead to such other highways already. If it were actually to be tagged, however, I would suggest something like ref:to=* or ref=TO *. Vid the Kid 17:01, 24 December 2008 (UTC)
Names and References
Many highways can be tagged with a matching name and reference. For example, name="United States Highway xx", ref="US xx". This is redundant, and I think the name tag can be removed. Is it strictly necessary to include the name tag in this case? --Elyk 02:30, 30 September 2008 (UTC)
- I think this particular phenomenon of redundancy comes not from someone deciding it should be that way, but from the TIGER database. It would seem that the name was carried over unchanged, and the ref was (for US and Interstate highways) derived from the name. It would have been extra work in the import process to then delete the longer name information, had anyone thought of it. I say, feel free to remove the "name" of such highways if they only duplicate/expand the information in the "ref" tag. Check and see if the road has an actual local name that you can put in the "name" tag, though. Vid the Kid 17:07, 24 December 2008 (UTC)
ref for roles
how can we state that a ref is only valid in a certain role?
for example, for a way that is both a highway=track and piste:type=downhill, the ref in that case only refers to usage as a piste; in this example, we have a "blue piste 10", but the underlying track has no ref (or at least, it's not '10').
can this be resolved without switching everything there to Relations/Routes? --Osm6150856065 09:57, 29 August 2009 (UTC)
Spaces in e-roads
There is some confusion as to whether or not there are spaces in e-road refs. Some countries have signs with spaces, some don't. The ref tag has a rule: map what is on the ground. But this cannot apply to the int_ref tag as it has different standards. int_ref needs one standard, and in the case of e-roads this is the UNECE, which consistently writes "E ##" or "E ###". So no matter how the way is signed, the int_ref is written with a space and two or three digits on e-roads. Other road networks might differ. --Gnonthgol 11:09, 8 March 2010 (UTC)
- The discussion was started long ago at Talk:WikiProject_Europe/E-road_network. Alv 11:20, 8 March 2010 (UTC)
ref for branch or franchise reference
In the context of
86215972 (iD, JOSM, Potlatch2, history, XML), I've used ref=Bear Branch Office to indicate the identity of a bank branch, where name=Artisans' Bank. --Ceyockey 04:20, 8 July 2011 (BST)
GNIS id values
I have used ref=GNIS:* in a few cases to indicate the reference id in the United States Geological Survey's Geographic Names Information System resource. Is this a useful item to put under either the main table or "Special uses"? --Ceyockey 18:26, 14 August 2011 (BST)
- Maybe ref:GNIS=* would be better. Using ref=GNIS:* prevents using this tag for an other more general purpose. FrViPofm 11:08, 5 June 2012 (BST)
ids from other web services?
can this also be used to map places/features from other web services like facebook or foursquare? like this:
ref:facebook=325001280021 ref:foursquare=4adcda7af964a520e84621e3
--Shmias 14:09, 4 June 2012 (BST)
nation wide refs
In France, we aim to use ref:FR:*=* for references applying to our country. Is it possible to use the same principle for other countries with tags such as ref:DE:*=* ref:UK:*=* and to patiently change the existing tags? --FrViPofm (talk) 00:34, 20 February 2013 (UTC)
- I completely agree with such a guideline. It shouldn't only apply to countries but to any extent where refs don't cover the whole world.
- I'm willing to use ref:EU:*=* for references valid in Europe only.
- Nevertheless it may be hard to convert existing references with several thousand uses (as usual). Fanfouer (talk) 12:52, 7 June 2015 (UTC)
reference to an external collection
Speaking about `natural:tree`, I'm participating in the insertion of trees in the Parque Centenario in Cartagena de Indias, Colombia. These trees are being inserted in a botanical database as well, and each of them receives an "Accession number" (consider this a code identifying the plant data).
We were discussing how to add this reference. The accession number has now been added to the OSM objects under the key `ref`, while I had been advocating using `ref` as a namespace prefix, where the tag name would be composed of three parts:
- the `ref` prefix,
- the name of the collection,
- the `accno` suffix.
I generally prefer not to fix things that work, but I foresee conflicts, and I would like to hear other thoughts. --Mariotomo (talk) 14:48, 9 February 2017 (UTC)
On the ground verifiability of ref tags on roads
As routing software uses the ref-tag on roads to give instructions, shouldn't we be more strict that on roads the ref-tag is what is actually signed along the road? It is no use to have the GPS tell you to turn onto BD8975b when there is no sign that the road has that ref. And I'm not talking about a particular intersection that doesn't happen to be signed, I'm talking of roads that have similar refs entered in OSM where that ref does not occur anywhere along the entire length of the road!
• ref should be the signed ref as seen on signs along the road
• official_ref or unsigned_ref should be used for the official ref used by the road administration but that is not signed along the road
--Cohan (talk) 09:42, 24 August 2017 (UTC)
https://wiki.openstreetmap.org/wiki/Talk:Key:reference
SharePoint is built on top of ASP.NET. There are several key classes/objects in the SharePoint infrastructure which customize the ASP.NET pipeline; or in the case of SharePoint 2010 the integrated ASP.NET/IIS pipeline. The abstract and base implementations for these classes, as well as important interfaces, are defined in the System.Web namespace. The SharePoint versions are all defined in the Microsoft.SharePoint.ApplicationRuntime namespace, in the Microsoft.SharePoint.dll assembly. Most of the information provided below comes from using .NET Reflector to inspect these classes, as well as inspecting the .config files on SharePoint servers.
The first of these classes is SPHttpApplication. This class derives from System.Web.HttpApplication, and is associated with SharePoint Web Applications through a global.asax file in the root directory for each WebApp. Its responsibilities are minimal – it registers VaryByCustomString output cache profiles and registers a EventHandler for unhandled exceptions. In SharePoint 2010, it also provides a new SharePointEndRequest event which is fired in the EndRequest event of the HttpApplication. Presumably, developers could hook up to this event easily in the global.asax file.
The next class is SPHttpHandler. It derives from System.Web.DefaultHttpHandler, which implements the IHttpHandler interface. SPHttpHandler is associated with all requests (well, GET, HEAD, and POST requests) via an <add /> element under <httpHandlers> in the web.config file for each SharePoint Web Application, at least in SharePoint 2007.
Most of the work is done by the base class. The SharePoint derivation adds an override for the OverrideExecuteUrlPath, which determines if this request should be handled by owssvr.dll. It also sets some headers and adds HttpOnly for cookies, though this method doesn’t seem like the proper place for that – which brings us to SharePoint 2010, where this handler is gone.
In SharePoint 2010, the handlers for all ASP.NET extensions are the default ones configured in applicationHost.config (under <location path=””>). So *.aspx is handled by System.Web.UI.PageHandlerFactory, *.asmx by System.Web.Services.Protocols.WebServiceHandlerFactory, etc., as they would be for a typical ASP.NET application. *.dll is handled by the ISAPI module in IIS7, which expects to load and execute a DLL in the Web Application’s directory.
owssvr.dll isn’t in the Web Application’s directory of course; it’s in 14\ISAPI. So a special entry is added for the /_vti_bin/owssvr.dll path, pointing it to this universal path. If you think about it, this is also why using a special HttpHandler to hand off to owssvr.dll won’t work anymore – we’re no longer passing all (*) requests to the same handler. Unless we override this, DLL extensions won’t be passed to the ASP.NET page handlers, and once we’re overriding it, might as well use the built-in configuration options to just pass the request straight to where it’s supposed to go.
The last, and probably most important, class is the SPRequestModule class. In 2007, this module is added to the ASP.NET pipeline via an <add /> element in the <system.web/httpModules> section in the web.config for each SharePoint Web Application. In 2010, the module is added to the integrated pipeline via the <system.webServer/modules> section, as you would expect.
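For reference, the registrations described above look roughly like the following web.config fragment (a sketch only; the assembly version and public key token attributes are placeholders, not copied from a real SharePoint web.config):

```xml
<!-- Sketch only: Version/PublicKeyToken values are placeholders. -->
<configuration>
  <!-- SharePoint 2007: classic ASP.NET pipeline -->
  <system.web>
    <httpModules>
      <add name="SPRequest"
           type="Microsoft.SharePoint.ApplicationRuntime.SPRequestModule, Microsoft.SharePoint, Version=..., PublicKeyToken=..." />
    </httpModules>
  </system.web>
  <!-- SharePoint 2010: IIS7 integrated pipeline -->
  <system.webServer>
    <modules>
      <add name="SPRequestModule"
           type="Microsoft.SharePoint.ApplicationRuntime.SPRequestModule, Microsoft.SharePoint, Version=..., PublicKeyToken=..." />
    </modules>
  </system.webServer>
</configuration>
```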
SPRequestModule directly implements the IHttpModule interface and provides most of the additional configuration and processing needed for SharePoint pages. For example, the Init() method of SPRequestModule is responsible for registering SharePoint’s VirtualPathProvider, which allows pages to be retrieved from the database or file system, as appropriate. It provides other functions as well, which would make a good topic for another post some time.
On the topic of additional SharePoint modules, you’ll notice that a module named SharePoint14Module is also added in the <system.webServer/modules> section. No type is provided, because this is a reference to a native module – our old friend owssvr.dll. Native modules must be declared in applicationHost.config in the <system.webServer/globalModules> section, then added to individual locations. You’ll find the SharePoint 14 module first declared there. I believe this indicates that owssvr.dll has now been re-written as an IIS7 native module (instead of an ISAPI filter), and is providing some functionality for each request.
That wraps up an overview of the SharePoint-ASP.NET integration points – SPHttpApplication, SPHttpHandler, and SPRequestModule. Enjoy!
Great post, thank you so much.
I have a question related to this: How can I inject something into any page content while keeping the original page content untouched?
The scenario I am trying to implement is:
1. Inject "Before" and "After" to any page without changing master page or page layout.
2. Users won't see the words when they edit the pages, and once they save the pages, the words will show up.
3. Output cache will work fine
Any help will be appreciated and thank you so much
forums.iis.net/.../1184037.aspx
I'm having the issue described at the above link: wild card script mapping fails.
|
http://blogs.msdn.com/b/besidethepoint/archive/2010/05/01/how-sharepoint-integrates-with-the-asp-net-infrastructure.aspx
|
CC-MAIN-2014-23
|
refinedweb
| 831
| 59.6
|
11.6. Applying digital filters to speech sounds
In this recipe, we show how to play sounds in the Notebook. We will also illustrate the effect of simple digital filters on speech sounds.
Getting ready
You need the pydub package. You can install it with
pip install pydub or download it from.
This package requires the open source multimedia library FFmpeg for the decompression of MP3 files, available at.
How to do it
1. Let's import the packages:
from io import BytesIO
import tempfile
import requests
import numpy as np
import scipy.signal as sg
import pydub
import matplotlib.pyplot as plt
from IPython.display import Audio, display
%matplotlib inline
2. We create a Python function that loads an MP3 sound and returns a NumPy array with the raw sound data:
def speak(data):
    # We convert the mp3 bytes to wav.
    audio = pydub.AudioSegment.from_mp3(BytesIO(data))
    with tempfile.TemporaryFile() as fn:
        wavef = audio.export(fn, format='wav')
        wavef.seek(0)
        wave = wavef.read()
    # We get the raw data by skipping the WAV header
    # (the first 24 int16 samples of the buffer).
    x = np.frombuffer(wave, np.int16)[24:] / 2.**15
    return x, audio.frame_rate
3. We create a function that plays a sound (represented by a NumPy vector) in the Notebook, using IPython's
Audio class:
def play(x, fr, autoplay=False):
    display(Audio(x, rate=fr, autoplay=autoplay))
4. Let's play a sound that was obtained from:
url = (''
       'cookbook-2nd-data/blob/master/'
       'voice.mp3?raw=true')
voice = requests.get(url).content

x, fr = speak(voice)
play(x, fr)
fig, ax = plt.subplots(1, 1, figsize=(8, 4))
t = np.linspace(0., len(x) / fr, len(x))
ax.plot(t, x, lw=1)
5. Now, we will hear the effect of a Butterworth low-pass filter applied to this sound (500 Hz cutoff frequency):
b, a = sg.butter(4, 500. / (fr / 2.), 'low')
x_fil = sg.filtfilt(b, a, x)

play(x_fil, fr)
fig, ax = plt.subplots(1, 1, figsize=(8, 4))
ax.plot(t, x, lw=1)
ax.plot(t, x_fil, lw=1)
We hear a muffled voice.
6. Now, with a high-pass filter (1000 Hz cutoff frequency):
b, a = sg.butter(4, 1000. / (fr / 2.), 'high')
x_fil = sg.filtfilt(b, a, x)

play(x_fil, fr)
fig, ax = plt.subplots(1, 1, figsize=(6, 3))
ax.plot(t, x, lw=1)
ax.plot(t, x_fil, lw=1)
It sounds like a phone call.
7. Finally, we can create a simple widget to quickly test the effect of a high-pass filter with an arbitrary cutoff frequency:

from ipywidgets import widgets

@widgets.interact(t=(100., 5000., 100.))
def highpass(t):
    b, a = sg.butter(4, t / (fr / 2.), 'high')
    x_fil = sg.filtfilt(b, a, x)
    play(x_fil, fr, autoplay=True)

We get a slider that lets us change the cutoff frequency and hear the effect in real time.
How it works...
The human ear can hear frequencies up to 20 kHz. The human voice frequency band ranges from approximately 300 Hz to 3000 Hz.
Digital filters were described in Chapter 10, Signal Processing. The example given here allows us to hear the effect of low- and high-pass filters on sounds.
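The two filters shown above can also be combined into a single band-pass filter covering roughly the voice band. Here is a minimal, self-contained sketch with SciPy; it uses a synthetic two-tone signal instead of the recipe's voice recording, and the 44100 Hz sample rate is an assumption (the recipe's `fr` comes from the MP3 file):

```python
import numpy as np
import scipy.signal as sg

# Assumed sample rate (Hz); in the recipe, fr comes from the MP3 file.
fr = 44100

# 4th-order Butterworth band-pass filter over roughly the voice band
# (300-3000 Hz); cutoffs are normalized by the Nyquist frequency fr/2.
b, a = sg.butter(4, [300. / (fr / 2.), 3000. / (fr / 2.)], 'band')

# Synthetic test signal: a 100 Hz tone (outside the band) plus a
# 1000 Hz tone (inside the band), one second long.
t = np.arange(0, 1, 1. / fr)
x = np.sin(2 * np.pi * 100 * t) + np.sin(2 * np.pi * 1000 * t)

# Zero-phase filtering, as in the recipe.
x_fil = sg.filtfilt(b, a, x)
```

On a real voice recording this passes speech largely unchanged while attenuating low-frequency rumble and high-frequency hiss.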
There's more...
Here are a few references:
- Audio signal processing on Wikipedia, available at
- Audio filters on Wikipedia, available at
- Voice frequency on Wikipedia, available at
- PyAudio, an audio Python package that uses the PortAudio library, available at
See also
- Creating a sound synthesizer in the Notebook
|
https://ipython-books.github.io/116-applying-digital-filters-to-speech-sounds/
|
CC-MAIN-2019-09
|
refinedweb
| 575
| 69.58
|
Hi all, my projects are running fine with a Teensy 3.6, but for better performance I'd like to use the Teensy 4. I use an RA8875 display from Buydisplay, but when compiling I get an error. What can I do to get my programs to work with the new Teensy?
Best,
Johan
#include <SPI.h>
#include <Wire.h>
#include <RA8875.h>
void setup() {
// put your setup code here, to run once:
}
void loop() {
// put your main code here, to run repeatedly:
}
RA8875.cpp:476:11: error: cannot convert 'volatile uint32_t* {aka volatile long unsigned int*}' to 'volatile uint8_t* {aka volatile unsigned char*}' in assignment csport = portOutputRegister(digitalPinToPort(_cs));//pinMode(_cs, OUTPUT);
|
https://forum.pjrc.com/threads/57280-RA8875-from-Buydisplay?s=8af58f5fae0364dc5023e395b0913026
|
CC-MAIN-2019-47
|
refinedweb
| 112
| 67.15
|
An adaptable binary function used to compare the objects to which pointers are pointing. More...
#include <utilities/ptrutils.h>
An adaptable binary function used to compare the objects to which pointers are pointing.
This class is for use with the Standard Template Library.
The first template argument T will generally not be a pointer class. Instead, this function will accept two const pointers to T. It will then dereference these pointers and compare these dereferenced objects using the given comparison function (which defaults to std::less, but which can be changed by passing a different second template argument).
The first argument type for this binary function.
The result type for this binary comparison function.
The second argument type for this binary function.
Compares the objects to which the given pointers are pointing.
The two pointers are dereferenced, and then a function of type Comp (the second template argument) is used to compare the dereferenced objects.
true if the first dereferenced object is less than the second, or false otherwise.
|
http://regina.sourceforge.net/engine-docs/classregina_1_1LessDeref.html
|
CC-MAIN-2016-36
|
refinedweb
| 167
| 57.57
|
What's new in PyNGL and PyNIO
PyNIO 1.4.3 | PyNIO 1.4.2 | PyNIO 1.5.0-beta | PyNIO 1.4.1 | PyNGL 1.4.0 | PyNGL 1.3.1 | PyNIO 1.4.0 | PyNIO 1.3.0b5 | PyNGL 1.3.0b4 | 1.3.0b1 | 1.2.0 | 1.1.0 | 1.0.0 | 0.1.1b8 | 0.1.1b7 | 0.1.1b6 | 0.1.1b5 | 0.1.1b4 | 0.1.1b3 | 0.1.1b1 | 0.1.0b1
PyNIO Version 1.4.3 - December 30, 2015
Mostly bug fixes for 1.4.2.

PyNIO Version 1.4.2 - December 15, 2015
An internal bug fix release. PyNIO put under "conda":
conda install -c ncar pynio

PyNIO Version 1.5.0-beta - April 4, 2015
This version was a special release for the NCAR SEA conference in April 2015. It has support for new features like groups and compound data in NetCDF4 and HDF5 files. It was recently put under the "dbrown" channel on conda:
conda install -c dbrown pynio
PyNIO Version 1.4.1 - July 28, 2011
- Added a new NioFile Boolean method called 'unlimited' that takes a dimension name as its only argument and returns True if the dimension is an unlimited record dimension and False otherwise.
- Added support for unsigned integer types and large 64 bit integers both signed and unsigned. The byte type is now signed and the new ubyte type is unsigned. Users are cautioned to be on the lookout for variable type changes from signed to unsigned, or the reverse in the case of the byte type. This is a possible source of minor incompatibility between older versions of PyNIO and the latest version.
- PyNIO can now handle files that have variables or single dimensions that are larger than 2^31.
- Fixed a bug that caused an import of Nio to fail if the numpy version release number had non-numeric characters such as 'rc3'.
- Fixed a serious memory issue that was caused by PyNIO inadvertently 'stealing' memory when attributes with numerical values were set with the file opened in write mode. This was causing a Python crash in the case of a user script that created a series of attributes that were to be set with the same values in multiple files.
- Fixed a problem with the global and variable 'attributes' dictionary not being updated immediately. Previously the file needed to be closed and reopened for changes to the 'attributes' dictionaries to take effect.
- Similarly, changes to the size of the unlimited dimension previously did not show up until the file was closed and then reopened. Now changes to the unlimited dimension size are immediately visible.
- Extended coordinate selection has been modified to no longer return an error if a coordinate variable for every dimension is not included in the selection. Now when a coordinate is not included, all elements of the corresponding dimension are returned.
- A bug reading shapefiles on 64-bit Linux systems was fixed. The global shapefile attributes layer_name and geometry_type are no longer set incorrectly to null values.
- PyNIO 1.4.1 includes very limited support for reading HDF5 files. Only simple numeric variables can be read from an HDF5 file. Groups and non-atomic datatypes are not yet supported. Note, however, that PyNIO does have full (read-only) support for HDFEOS5 files.
- For PyNIO users building from source code, a bug was fixed that caused an immediate error exit if the code was compiled without support for GRIB2. Also all formats except NetCDF are now optional; If there is no need for HDF4 it no longer needs to be built into the module.
- A new module-level dictionary attribute called "__formats__" has been added. The optional format names are keys and each value is 1 or 0 depending on whether the format has been included in the particular instance of PyNIO during the build process. If all formats are included it will have the following key/value pairs:
{'hdfeos5': 1, 'netcdf4': 1, 'hdf4': 1, 'hdf5': 1, 'shapefile': 1, 'grib2': 1, 'hdfeos': 1, 'opendap': 1}. Note that in the case of 'netcdf4' the value indicates whether PyNIO is built with NetCDF4 classic model enabled.
- GRIB reading improvements:
- Fixed a bug involving the pre-defined GRIB1 grid 242. It had the wrong Dx and Dy values.
- Fixed a bug involving Canadian Meteorology Centre data both in GRIB1 and GRIB2 formats where global grids have the starting and ending longitude set to identical values. Now the longitude coordinates are correctly calculated to span a full 360 degrees.
- Improved the handling of GRIB1 tables from center 59, formerly known as the NOAA Forecast Systems Lab, and now called the Global Systems Division of the Earth System Research Laboratory. Standard NCEP tables are now used where possible for otherwise unknown table versions from center 59.
- Fixed a problem with user-defined GRIB1 parameter tables that contain an entry for parameter 0, which is never used as a parameter number but was confusing the table-reading code when it appeared.
- Fixed a problem with the GRIB2 parameter_discipline_and_category variable attribute that caused it to contain incorrect information in some cases.
- Added support for GRIB2 template 4.31, used for satellite products particularly in Europe.
- Fixed a problem with calculating latitude values for GRIB1 and GRIB2 Mercator grids.
PyNGL Version 1.4.0 - July 28, 2011
This version of PyNGL doesn't have a lot of new functionality. It was given a higher version number because it was significantly overhauled internally so it can be compiled with NCL version 6.0.0. The source code for this release cannot be linked against NCL versions 5.2.x or earlier.
- Ngl.dim_gbits - a new function that unpacks bit chunks from the rightmost dimension of the input array.
- Ngl.gc_inout - a new function that determines if a specified point is inside or outside of a spherical polygon. This is handy for masking data given a particular lat/lon polygon.
- Added a new example "hdf1.py" (image) to the PyNGL gallery showing how to use Ngl.dim_gbits to extract bit data for a cloud mask variable.
- Added a new example "shapefile3.py" (image) to the PyNGL gallery showing how to use Ngl.gc_inout to mask data given a lat/lon polygon.
- Added a new example "tickmark2.py" to the (image) to the PyNGL gallery showing how to remove the degree symbol from map tickmark labels.
- Ngl.labelbar_ndc - Fixed a bug that caused this function to seg fault.
- cnLabelBarEndStyle - fixed a bug that caused labelbar labels to be dropped randomly.
PyNGL Version 1.3.1 - August 22, 2010
New functions
- Ngl.blank_plot - Creates a blank plot.
- Ngl.get_bounding_box - Retrieves the NDC bounding box values for a given graphical object.
- Ngl.nice_cntr_levels - Given min/max values and the maximum number of steps desired, calculates an array of "nice" equally-spaced values through the data domain.
New resources
- Added new resources that allow you to fill between curves in an XY plot:
nglXYFillColors
nglAboveXYFillColors/nglBelowXYFillColors
nglXYLeftFillColors/nglRightXYFillColors
New examples
- New shapefile1/shapefile2 - examples showing how to read and plot data from shapefiles.
- New tickmark1 - example showing how to create a "blank" plot to use for customizing tickmark labels.
- New viewport1 - example showing the difference between a plot's viewport and its bounding box
- New fillxy1/fillxy2 - examples showing how to fill between curves in an XY plot.
Bugs fixed
- Ngl.add_polyline - fixed a bug where this function wasn't handling missing values properly.
PyNIO Version 1.4.0 - August 16, 2010
- PyNIO source build procedure updated to remove installation of NCL as a prerequisite. See How to build PyNIO from source code for details.
- Optional read-only support added for HDF-EOS5 files. GRID, SWATH, and ZA data groups are supported. Also, limited metadata is provided by the interface for POINT metadata groups.
- Optional read-only support also added for OGR - Open Geospatial Consortium's Simple Feature formats including ESRI shapefiles, and MapInfo, GMT, and TIGER data.
- Fixed a problem that resulted in the generation of incorrect forecast times for statistical-process variables contained in GRIB2 files from NDFD (National Digital Forecast Database).
- Fixed a problem with attempts to write empty strings to HDF attributes.
- Fixed a problem that caused the assign_value method to fail when used to write to a variable with an unlimited dimension.
- Fixed a problem that caused NumPy arrays returned by PyNIO not to be writeable by the user. The WRITEABLE flag was inadvertently being set to False.
PyNIO Version 1.3.0b5 - April 1, 2010
- Updated code to build under Python 2.6.
- Added OPeNDAP capabilities for reading NetCDF files. You can now read a NetCDF file that is stored on an OPeNDAP server:
import Nio
fname = ""
f = Nio.open_file(fname)
variables = f.variables.keys()
print f
print "variables", variables
- Added support for unsigned int and int64 types.
- Fixed several memory leaks.
- Fixed a bug with assigning to file variables with single element dimensions.
- Fixed a bug involving assigning to file variables with an unlimited dimension using non-explicit slice start and/or stop values ([:],[3:], etc).
- Fixed a problem that caused the '~' character and environment variables used in file paths not to be expanded properly when a file was opened for writing.
- Updates and changes to GRIB2 reader:
BACKWARDS INCOMPATIBILITY ALERT
The GRIB2 code tables have been updated, which changes the names of some of the PyNIO variables that represent the aggregated GRIB records. To be exact, 92 of the original 371 short parameter names have changed.
Users for whom this is a problem can revert to the previous version of the tables by following these steps:
- Go to the Earth System Grid web site and download the full source code for NCL version 5.2.1.
- In a suitable location in your filesystem execute these commands in a terminal window:
- gunzip -c ncl_ncarg_src-5.2.1.tar.gz | tar -xvf -
- cp -r ncl_ncarg-5.2.1/ni/src/db/grib2_codetables.previous </directory/of/choice/>
- export NIO_GRIB2_CODETABLES=</directory/of/choice/>grib2_codetables.previous (bash, ksh, sh)
or
setenv NIO_GRIB2_CODETABLES </directory/of/choice/>grib2_codetables.previous (csh, tcsh)
Now when any GRIB2 file is opened using PyNIO, it will use this previous version of the code tables.
- Added partial support for GRIB2 grid type 204 (curvilinear orthogonal grids). This grid type is supported without coordinate data. Notably, however, the example file contains its own coordinate data as variables in their own right.
- Added support for reading (NCEP) Climate Forecast System Reanalysis (CFSR): 1979-2010 files. These GRIB2 files have a non-standard method for specifying statistical-process variables.
- Added support for product template 4.15 which defines statistical processes over a spatial region. This included adding a new attribute "type_of_spatial_processing" for statistical spatial variables as well as statistical suffixes to the variable names (e.g. _avg, _max) in order to distinguish otherwise identical variable names.
- Added support for GRIB2 files from NWS National Digital Forecast Database, which use an unusual form of record packing. These files are available from. (Thanks to Jennifer Adams, GrADS developer, for bringing them to our attention.)
- Updates and changes to GRIB1 reader:
- Added the following new GRIB1 parameter tables:
- ECMWF Parameter table version 133
- ECMWF Parameter table version 171
- ECMWF Parameter table version 172
- ECMWF Parameter table version 173
- ECMWF Parameter table version 174
- ECMWF Parameter table version 175
- ECMWF Parameter table version 201
- ECMWF Parameter table version 210
- ECMWF Parameter table version 211
- ECMWF Parameter table version 228
- ECMWF Parameter table version 230
- ECMWF Parameter table version 234
- US Weather Service - NCEP (Parameter table version 133)
- US Weather Service - NCEP Aviation World Area Forecast/ICAO (Parameter table version 140)
- US Weather Service - NCEP (Parameter table version 141)
- Japanese Meteorological Agency (Parameter table version 3)
- Updated the following GRIB1 parameter tables:
- ECMWF Parameter table version 128
- ECMWF Parameter table version 129
- ECMWF Parameter table version 131
- ECMWF Parameter table version 132
- ECMWF Parameter table version 140
- ECMWF Parameter table version 151
- ECMWF Parameter table version 160
- ECMWF Parameter table version 162
- ECMWF Parameter table version 190
- ECMWF Parameter table version 200
- US Weather Service - NCEP Oceanographic Parameters (Parameter table version 128)
- US Weather Service - NCEP (Parameter table version 129)
- US Weather Service - NCEP Land Modeling and Land Data Assimilation (Parameter table version 130)
- US Weather Service - NCEP North American Regional Reanalysis (Parameter table version 131)
- Support was added for one form of complex packing in GRIB1: specifically row by row complex packing without secondary bit maps.
PyNGL Version 1.3.0b4 - April 1, 2010
- New graphical output devices available.
We've added some new graphical output formats (workstation types). These are based upon an experimental graphics driver, and are considered beta-level capabilities:
- "png" or "newpng" for PNG output
- "newps" for PostScript
- "newpdf" for PDF
These formats can be used with Ngl.open_wks:
wks = Ngl.open_wks("png","test")     # Will create "test.000001.png"
wks = Ngl.open_wks("newpdf","test")  # Will create "test.pdf"
wks = Ngl.open_wks("newps","test")   # Will create "test.ps"
The "newpdf" workstation generally produces smaller output files than the original "pdf" type. Unfortunately, the "newps" output does not produce smaller PostScript files.
The "newps" and "newpdf" formats will remain in test mode for awhile, and may eventually replace the original "ps" and "pdf" formats. The original formats may be kept for backwards compatibility.
The old "png" format, which was never officially advertised, has been replaced by the new graphics driver; "png" is now synonymous with "newpng".
- New color tables donated by MeteoSwiss. Click on image for larger version.
- Added new resources that allow you to mask a Lambert Conformal map, nglMaskLambertConformal and nglMaskLambertConformalOutlineOn.
See the "conmasklc.py" example on the gallery page.
- There are two new ways to set the paper width and height for PS/PDF output, if you don't want the default 8.5 x 11 inches. The first way is to set wkPaperSize to a string like "A4", "legal", "ledger", etc.
The second way is to set the width and height directly using resources wkPaperWidthF and wkPaperHeightF. These resources are replacing nglPaperWidth and nglPaperHeight.
- A memory problem with Ngl.vinth2p was fixed.
Version 1.3.0b1 - August 18, 2008
This is our first open source version of PyNGL and PyNIO, and these two packages are now being released separately.
The source code licenses for PyNIO and PyNGL are similar to The University of Illinois/NCSA Open Source license.
The binary licenses for PyNIO and PyNGL are similar to the old binary license for the previous versions. A separate binary license is necessary because of the additional 3rd party software included.
New features and augmentations in PyNGL
- New analysis functions:
- Ngl.betainc - Evaluates the incomplete beta function.
- Ngl.chiinv - Evaluates the inverse chi-squared distribution function.
- Ngl.linmsg - Linearly interpolates to fill in missing values.
- Ngl.regline - Calculates the linear regression coefficient between two series.
- New visualization and related functions:
- Ngl.datatondc - routine for converting between data space and NDC space.
- Ngl.ndctodata - routine for converting between NDC space and data space.
- Ngl.streamline_scalar - routine for generating streamlines colored by scalar fields.
- Ngl.streamline_scalar_map - routine for generating streamlines colored by scalar fields over a map.
- Ngl.wmstnm - draws station model data.
- Augmentations
- A new optional argument has been added to Ngl.asciiread to allow for the specification of a separator character, or character sequence.
- Ngl.add_cyclic, Ngl.add_new_coord_limits, Ngl.add_polymarker, Ngl.add_polygon, Ngl.add_polyline, Ngl.contour, Ngl.contour_map, Ngl.polygon, Ngl.polyline, Ngl.polymarker, Ngl.skewt_plt, Ngl.streamline, Ngl.streamline_map, Ngl.streamline_scalar, Ngl.streamline_scalar_map, Ngl.vector, Ngl.vector_map, Ngl.vector_scalar, Ngl.vector_scalar_map, Ngl.xy, Ngl.y - these plotting functions were updated to recognize masked arrays and to not plot values equal to the masked array's fill_value.
- Several examples in the gallery were updated to use masked arrays.
- New bar1/bar2 examples showing how to create bar charts.
- New clmdiv2 example showing how to color climate divisions by a third field.
- New ctrydiv1 example showing how to draw provinces of China and states of India.
- New datatondc example showing how to use Ngl.datatondc.
- New format example showing how to format tickmark labels.
- New ndc_grid example showing how to use Ngl.draw_ndc_grid to help place primitives using NDC coordinates.
- New streamline examples showing how to use Ngl.streamline_scalar_map and Ngl.streamline_scalar.
- New Ngl.wmstnm examples showing how to use Ngl.wmstnm.
- New nio04 example showing how to use Nio to open an existing NetCDF file and append data to a variable with an unlimited dimension.
- Some more PyNIO-specific examples will be added when available.
Extended subscripting added to PyNIO
PyNIO now supports an extended form of subscripting to specify subsets using a description string that can reference associated coordinate variables if they exist in the file. Dimensions can be transposed by varying the order of the coordinate axes in the subset specification. An option allows use of the CF 'axis' attribute, if the coordinate variable defines it, to specify dimensions using the standardized short-form aliases 'T', 'Z', 'Y', and 'X'. Depending on specification options the user can request that the data values on subset boundaries be linearly interpolated between grid points or limited to the nearest grid point. In addition, the user can use an auxiliary multidimensional variable to define a subset indirectly.
Support for maskedarray module added
By default PyNIO now returns a masked array for any variable with a '_FillValue' or 'missing_value' attribute. However, there is an option that allows the user to control this behavior in various ways. See new PyNIO options below.
When masked array data is written to a variable in a file opened for writing, masked values are replaced with the fill value specified either by the variable's "_FillValue" or "missing_value" attribute if it exists, or with the fill value currently associated with the masked array if no such attribute exists.
As mentioned in the augmentations list above, support for masked arrays has been added to many of the visualization functions and is included in some of the new analysis functions. These are the masked arrays introduced in numpy V1.0.
See the "overlay1.py" and "irregular.py" examples in the gallery for examples of using masked arrays.
PyNIO Attribute Additions
The attribute "attributes" has been added to both the NioFile and NioVariable types. This attribute contains the same information as is visible in the "__dict__" attribute. It has been added to provide a closer mapping to NetCDF terminology.Use of the "__dict__" attribute to access file and variable attributes is now deprecated.
A new module-level dictionary called "option_defaults" that contains the default values of all options defined by PyNIO has been implemented. By updating this dictionary, a user can change the default value for any settable option. This default option value will be used as the value of any option that is not set by means of "options" keyword argument to the "open_file" constructor, or, if available, by the "set_option" method of NioFile.
A new PyNIO option "MaskedArrayMode" has been implemented that allows the user to control PyNIO's behavior with respect to masked arrays. The default setting is "MaskedIfFillAtt", meaning that a masked array is returned if the variable in the file has either a "_FillValue" or a "missing_value" attribute. Other possible settings are "MaskedAlways": return a masked array for all variables; "MaskedNever": always return a regular NumPy array; and "MaskedIfFillAttAndValue": return a masked array only if a fill attribute has been defined and the subset requested actually contains at least one element equal to the fill value; "MaskedExplicit": use options "ExplicitFillValues", "MaskBelowValue", and "MaskAboveValue" to explicitly set one or more fill values and/or regions to mask, ignoring any fill value attributes.
"ExplicitFillValues", by default set to None, can be set to a scalar value or a sequence of values, which all become masked in the output array. The first value is set as the fill_value attribute of the MaskedArray. This option can be combined with additional masked values specified using "MaskBelowValue", and/or "MaskAboveValue".
"MaskBelowValue", by default equal to None, can be set with a single value below which all values become masked, except that if the value is greater than the value set for "MaskAboveValue", a band of masked values is defined.
"MaskAboveValue", by default equal to None, can be set with a single value above which all values become masked, except that if the value is less than the value set for "MaskBelowValue", a band of masked values is defined.
The new option "UseAxisAttribute" is a boolean attribute that specifies whether to use the short form names "t", "z", "y", and "x" as dimension names if the coordinate variable defines the "axis" attribute. As currently implemented, this is an "either/or" choice: if set True and if the "axis" attribute is defined, attempts to use the actual dimension name rather than the short-form name will result in an error.
Note that these new options generalize the concept of PyNIO options because they are not file-format specific. These options apply for any file type.
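PyNIO's explicit masking options behave much like NumPy's own masked-array helpers. As a rough illustration (using plain numpy.ma, not PyNIO itself; the data values are invented):

```python
import numpy as np
import numpy.ma as ma

data = np.array([1.0, 5.0, 9.0, 13.0, 9999.0])

# Analogue of ExplicitFillValues: mask every occurrence of a fill value.
x = ma.masked_values(data, 9999.0)

# Analogue of MaskBelowValue / MaskAboveValue combined: mask everything
# outside the band [4, 10].
y = ma.masked_outside(x, 4.0, 10.0)

print(y.compressed())  # prints the unmasked values: [5. 9.]
```

As with PyNIO's MaskedExplicit mode, the fill value and the band limits are applied regardless of any fill-value attributes on the data.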
Also, a new optional argument called format has been added to the NioFile open_file constructor. This argument provides an alternative to the current method of explicitly specifying file formats by adding a "fake" suffix to the filepath argument. The format argument can be set to any of the suffixes that can be tacked on to the filepath argument. Do not confuse the format optional argument with the NetCDF-specific option, "Format", used to specify different NetCDF file types.
- Instances of PyErr_Print have been removed from PyNIO. This changes the behavior of PyNIO in that scripts will exit unless the user traps the error in Python. This is an unavoidable incompatibility in order to bring PyNIO into compliance with standard Python practice.
- Several instances of PyMem_DEL in the PyNIO code have been replaced with PyObject_DEL.
- PyNIO's subscripting code has been modified to behave according to Python semantics in cases where the slice 'start' value is greater than the slice 'stop' value. Formerly it exhibited NCL-like behavior in this case. That is, it would return a reversed slice (assuming a default or positive 'step' value). Now it returns an empty array.
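Standard NumPy slicing behaves the same way; a quick illustration with plain NumPy (not PyNIO):

```python
import numpy as np

a = np.arange(10)

# Python slicing semantics: when start > stop (with a positive step),
# the result is empty rather than a reversed slice.
empty = a[7:3]
print(empty.size)      # 0

# An explicit negative step is required to get a reversed slice.
rev = a[7:3:-1]
print(rev.tolist())    # [7, 6, 5, 4]
```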
- Fixed a problem that enables you to retrieve area group information for climate divisions (basically, this fix will enable you to run the new clmdiv2.py script).
- Ngl.asciiread was fixed so that it does not exit when encountering a blank line. This was necessary to get PyNIO to work with Python 2.5.
- The vector plotting functions Ngl.vector, Ngl.vector_map, Ngl.vector_scalar, and Ngl.vector_scalar_map were fixed such that if vcMonoLineArrowColor, vcMonoFillArrowEdgeColor, vcMonoFillArrowFillColor, or vcMonoWindBarbColor is set to False, then a labelbar is automatically drawn (unless lbLabelBarOn is set to False and/or pmLabelBarDisplayMode is set to "Never").
Support for Numeric dropped
In this release of PyNGL and PyNIO, we are dropping support for Numeric. An informal survey indicated that most if not all of our users have made the transition to NumPy, or are at least planning to very soon. We are adding several new data analysis functions, and maintaining support for both Numeric and NumPy was becoming tedious.
- NetCDF variables that start with a number are not allowed
There was a change introduced in NetCDF versions 3.6.3 and 4.0 in which variable names beginning with a digit are no longer allowed.
This is actually in accordance with the documented rules, as well as the CF conventions, but was never enforced in previous versions. The NetCDF developers have decided that in order to keep past users of NetCDF happy, they will change the code back to allow such variables. This code has not yet been made official.
This has caused trouble for PyNIO in cases where GRIB files, which, in PyNIO's view, can contain parameters that begin with a digit, are copied to NetCDF.
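A user copying GRIB parameters to NetCDF can rename offending variables before writing. The sketch below shows one possible renaming scheme; the VAR_ prefix is an arbitrary, hypothetical choice for illustration, not anything PyNIO itself does:

```python
def netcdf_safe_name(name, prefix="VAR_"):
    """Return a NetCDF-classic-legal variable name.

    NetCDF classic names must begin with a letter or underscore, so a
    digit-leading name (common for GRIB parameters such as "10u") gets a
    prefix. The "VAR_" prefix is a hypothetical choice for this sketch.
    """
    if name and name[0].isdigit():
        return prefix + name
    return name

assert netcdf_safe_name("10u") == "VAR_10u"
assert netcdf_safe_name("temperature") == "temperature"
```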
- The coordinate space subscripting step size is calculated based on the first two elements of the coordinate array. When the coordinate axis values are irregularly spaced and interpolation is not used, non-default coordinate space step sizes can lead to strange results. Either use interpolation or use an index space step value.
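To see why the inferred step can surprise you on an irregular axis, here is a pure-Python sketch of the pitfall; this illustrates the behavior described above, and is not PyNIO's actual code:

```python
coords = [0.0, 1.0, 4.0, 9.0, 16.0, 25.0]  # irregularly spaced axis

# The coordinate-space step is inferred from the first two elements ...
inferred_step = coords[1] - coords[0]  # 1.0

# ... so stepping by 2 coordinate units over 0..25 looks for the values
# 0, 2, 4, ..., 24. Most of them do not exist on this irregular axis.
wanted = list(range(0, 26, 2))  # 13 requested coordinate values
hits = [v for v in wanted if v in coords]
assert hits == [0, 4, 16]  # only three of the 13 requested values match
```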
- Dimension reordering currently does not work when an extended selection string contains an indirect selection using an auxiliary multidimensional coordinate variable.
Version 1.2.0 - July 23, 2007
With this release, NumPy is now the default array module package for PyNGL and PyNIO. Numeric 24.2 will still be supported. How you import the two packages is different from previous versions: if you download the NumPy version, you can import PyNGL and PyNIO with:
import Ngl, Nio
If you download the Numeric version, you must import PyNGL and PyNIO with:
import PyNGL_numeric.Ngl as Ngl
import PyNGL_numeric.Nio as Nio
and then you will be able to use all the "Ngl." and "Nio." calls in the usual way.
New functionality
- GRIB 2 reader
This version of PyNIO includes a new GRIB2 reader. Detailed information will be available soon in the PyNIO documentation.
- Coordinate variables for HDFEOS Swath and Grid data
This version of PyNIO recognizes the Geolocation variables provided with HDFEOS Swath data and also supplies coordinate variables for HDFEOS Grid data. Detailed information will be available soon in the PyNIO documentation.
- Beta level support for Classic model NetCDF 4
Note: netCDF-4 has not been tested on 64-bit systems yet, so PyNIO only supports netCDF-3 files on those systems.
On systems where we could build the 4.0 beta version of netCDF-4, this version of PyNIO can read and write NetCDF 4 files that conform to the classic model. This release provides beta-level support because NetCDF 4 and the release of HDF 5 that it depends on are both still in the beta-testing phase of development. Detailed information is available in the PyNIO documentation.
New functions
- Ngl.add_new_coord_limits - adds new minimum/maximum values to X and/or Y coordinate arrays.
- Ngl.draw_ndc_grid - draws and labels grid lines at 0.1 NDC intervals.
- Ngl.define_colormap - defines a new color map for the given workstation.
- Ngl.generate_2d_array - generates nice smooth 2D arrays.
- Ngl.merge_colormaps - merges two color maps to create a new color map for the given workstation.
New color tables
- BlueDarkOrange18
- BlueDarkRed18
- BlueGreen14
- BrownBlue12
- Cat12
- GreenMagenta16
- posneg_1
- posneg_2
- prcp_1
- prcp_2
- prcp_3
- StepSeq25
New resources
- nglXRefLine/nglYRefLine - adds an X or Y reference line to an Ngl.xy or Ngl.y XY plot.
- nglXRefLineColor / nglYRefLineColor - changes the color of an X or Y reference line.
- nglXRefLineThicknessF / nglYRefLineThicknessF - changes the thickness of an X or Y reference line.
New pynglex examples
- multi_plot.py (new)
- spaghetti.py (new)
Version 1.1.0 - November 9, 2006
With this release of PyNGL and PyNIO we now support both NumPy 1.0 and Numeric 24. Support for these two array modules is done via two separate packages that can safely be installed together.
In order to use the Numeric version, which is currently considered the default mode for PyNGL and PyNIO (but won't be once NumPy is considered to be the more prevalent package used), you just need to import Ngl and Nio:
import Ngl
import Nio
To use the NumPy 1.0 version, you need to import PyNGL_numpy and PyNIO_numpy:
import PyNGL_numpy.Ngl as Ngl
import PyNGL_numpy.Nio as Nio
New functions
- Ngl.delete_wks
- Deletes a workstation object.
- Ngl.gc_tarea
- Finds the area of a spherical triangular patch.
- Ngl.gc_qarea
- Finds the area of a convex quadrilateral spherical patch.
Version 1.0.0 - July 12, 2006
This release of PyNGL and PyNIO was built with Python 2.4.3 and Numeric 24.2. We also have a test version that was built with NumPy version 0.9.8. Both versions are available for download on the Earth System Grid website.
- New website
- PyNGL/PyNIO has a newly-designed and significantly improved website.
- Support for NumPy version 0.9.8
- There's a test version of PyNGL and PyNIO available that works with NumPy version 0.9.8. You can download this version from the same place as the Numeric version, on the Earth System Grid website. Both versions can be installed on the same system as they will not conflict with each other (the package names are "PyNGL" and "PyNGL_numpy").
The test NumPy version must be imported with:
import PyNGL_numpy.Ngl as Ngl
import PyNGL_numpy.Nio as Nio
while the Numeric version of PyNGL and PyNIO can be imported with:
import Ngl
import Nio
Due to the changing nature of NumPy (as it goes through several test phases), we can't guarantee that this test version will work with other versions of NumPy. We are interested in feedback, so please let us know if you download this version and have any kind of success with it.
- Changes to default graphical environment
- Based on our experience with other graphics packages we develop, we decided to change the default graphics environment for PyNGL. We believe these changes are for the better, but if you don't agree, you can change the default graphical environment for your site using a file called the "sysresfile."
Here's a summary of the changes:
- The default background color is white and the default foreground color is black for all graphical output (PostScript, PDF, X11 window, and NCGM). Previously, it was the reverse.
- The default font is helvetica instead of times-roman.
- The default color table is now "rainbow". It was previously "default".
- The default function code is now a tilde ('~') instead of a colon (':').
- Updates to PyNIO
- New 'print' methods were added for the NioFile and NioVariable objects. These print a nicely formatted summary of the contents of a file or an individual variable.
A new 'options' argument was added to 'open_file' to provide support for a number of format-specific options for GRIB and netCDF files. These options allow you to create larger NetCDF files using the 64 bit offset format variant, improve performance when reading or writing large netCDF files, and control various aspects of GRIB files, such as the type of interpolation used to read data on thinned grids.
Doc strings were added for the Nio module and all its objects, methods, and attributes.
- New resources added (at the moment, only one)
- nglPointTickmarksOutward
- New functions added
- Several new functions were added:
- Ngl.free_color - Removes a color entry from a workstation.
- Ngl.maximize_plot - Maximizes the size of the given plot on the workstation.
- Ngl.new_color - Adds the given color to the end of the color map of the given workstation.
- Ngl.new_dash_pattern - Adds new dash patterns to the existing table of dash patterns.
- Ngl.new_marker - Adds new markers to the existing table of marker styles.
- Ngl.pynglpath - Returns full path names for selected abbreviated names.
- Ngl.set_color - Sets a color in the color map of the given workstation.
- Docstrings added for PyNGL functions
- Python docstrings were added for all of the PyNGL functions.
- Updates to functions
- Many PyNGL functions were updated to handle input types of scalar, tuples, lists, and Numeric arrays.
- More examples added
- Several examples were added, including:
- coast1.py - generates coastal outlines using three different levels of resolution
- contour1.py - shows how to turn a linear axis into a log axis
- contour2.py - uses shading patterns and fill colors to fill contours
- contour3.py - uses dashed line patterns to dash negative contour lines
- contour_xyz.py - contours random X, Y, Z data
- color2.py - uses named colors in a multi-curve XY plot
- color3.py - draws HSV color wedges and shows how to use color conversion functions
- color4.py - uses named colors to generate common colors
- color5.py - draws a Mandelbrot set
- map2.py - draws polygons, polymarkers, and polylines on a map
- multi_y.py - generates multiple Y curves with their own axes
- overlay.py - draws multiple overlays of contours and vectors on a map
- thickness.py - shows how to create various line thicknesses
- scatter2.py - shows how to create your own markers
- xy1.py - shows how to change the style of the axes in an XY plot
- xy2.py - attaches primitives (polygons, polylines, etc.) to a plot
- vector_pop.py - generates vectors on a POP grid
For a quick look at all of the available examples, see the new PyNGL gallery.
- Updates to Skew-T resources
- All of the Skew-T resources associated with the Ngl.skewt_bkg and Ngl.skewt_plt functions have been renamed to be more consistent with the rest of the PyNGL resources. The old names will still work. To see the new names, go to the Skew-T resources page.
Version 0.1.1b8 - September 22, 2005
- Nio module - a Python module that allows read and/or write access to a variety of data formats (netCDF, GRIB1, HDF4, HDF-EOS2, CCM History tape) using an interface modelled on netCDF. This module is part of the PyNGL version 0.1.1b8 distribution.
- All of the nglxxp.py and other pynglex examples were converted to use the new Nio module.
- New functions
- Ngl.add_annotation - adds one graphical object or plot as an annotation of another plot
- Ngl.nngetp - retrieves control parameter values for Ngl.natgrid
- Ngl.nnsetp - sets control parameter values for Ngl.natgrid
- Ngl.remove_annotation - removes an annotation from a plot
- Ngl.remove_overlay - removes a plot that was overlaid with the Ngl.overlay procedure
- Ngl.vinth2p - interpolates from CCSM hybrid coordinates to pressure coordinates
- New pynglex examples
Note: to run any of these examples, type "pynglex xxxxx", where xxxxx is one of the names in the list below:
- chkbay - generates a contour plot of triangular mesh data
- gaus - generates Gaussian latitudes and weights for a specified number of latitudes
- meteogram - generates a meteogram based on code from John Ertl at the Fleet Numerical Meteorology and Oceanography Center
- multi_y - generates a two-curve XY plot with different Y axes for each curve
- nio_demo - demo program for the new Nio file I/O module
- topo1 - generates a nice-looking topographical contour plot using data from the USGS Earth Resources Observation System Data Center
- traj1 - generates a trajectory plot
- vinth2p - demonstrates use of Ngl.vinth2p function for interpolating from CCSM hybrid coordinates to pressure coordinates
- Example ngl04p.py converted to read a GRIB file (instead of a netCDF file).
New location for PyNGL binaries - April 5, 2005
As of April 5, the PyNGL binaries are housed on the Earth System Grid website. For more information, see the download instructions.
Version 0.1.1b7 - March 17, 2005
- New functions
- Ngl.legend_ndc - draw legends positioned using NDC coordinates.
- Ngl.labelbar_ndc - draw labelbars positioned using NDC coordinates.
- Ngl.skewt_bkg - draw charts for skewt plot backgrounds.
- Ngl.skewt_plt - draws skewt plots.
- Ngl.gc_interp - interpolates points along a great circle.
- Ngl.gc_dist - calculates the distance along a great circle between two coordinates.
- Ngl.gc_convert - converts great circle distances to different units.
- Ngl.wmbarb - draws wind barbs.
- Ngl.wmbarbmap - draws wind barbs over maps.
- Ngl.wmsetp - sets parameter values for wmxxx functions.
- Ngl.wmgetp - retrieves parameter values for wmxxx functions.
- Ngl.get_named_color_index - self-explanatory.
- Ngl.fspan - returns an array of equally spaced floating point numbers in an inclusive specified range.
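As a rough sketch of what a great-circle distance routine computes, here is a generic haversine formula in plain Python. This is an illustration only, not Ngl.gc_dist's implementation; Ngl.gc_dist's units and defaults should be checked in its documentation.

```python
import math

def great_circle_deg(lat1, lon1, lat2, lon2):
    """Central angle in degrees between two points given in degrees.

    Generic haversine formula; a sketch, not Ngl.gc_dist itself.
    """
    p1, l1, p2, l2 = map(math.radians, (lat1, lon1, lat2, lon2))
    a = (math.sin((p2 - p1) / 2) ** 2
         + math.cos(p1) * math.cos(p2) * math.sin((l2 - l1) / 2) ** 2)
    return math.degrees(2 * math.asin(math.sqrt(a)))

# A quarter of the way around the equator spans 90 degrees.
assert abs(great_circle_deg(0, 0, 0, 90) - 90.0) < 1e-9
```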
- New pynglex examples
- labelbar - draws label bars using Ngl.labelbar_ndc.
- legend - draws legends using Ngl.legend_ndc.
- skewt1 - draws skew-T charts using Ngl.skewt_bkg.
- skewt2 - draws skew-T plots with and without wind barbs using Ngl.skewt_plt.
- skewt3 - draws skew-T plots using real data using Ngl.skewt_plt.
- wmbarb - draws wind barbs using Ngl.wmbarb.
- wmbarbmap - draws wind barbs over maps using Ngl.wmbarbmap.
- irregular - shows how to linearize or "logize" an irregular axis, using the special resources "nglXAxisType" and "nglYAxisType".
- Bug fixes
- If cnFillColors or vcLevelColors are set, then nglSpreadColors gets set to False (if it's not already set).
- Check for tmEqualizeXYSizes setting. If it is not set, then default nglScale to True.
- Changed the retrieval of the resources trXCoordPoints and trYCoordPoints to be a multi-dimensional array (instead of a 1D array). This handles the case of the arrays being 2D.
Version 0.1.1b6 - January 12, 2005
- Release for internal testing. Never publicly released.
Version 0.1.1b5 - December 12, 2004
- Added a couple of new pynglex examples, "color1.py" and "streamline.py".
- Made some minor changes so PyNGL will work with Python version 2.4.
Version 0.1.1b4 - December 10, 2004
- Fixed a minor bug that involved using Ngl.set_values in conjunction with having used one of the new nglX/YAxisType resources.
Version 0.1.1b3 - December 8, 2004
- Plot defaults
Some more default values for plot resources were added to help automate things for the user.
- New resource for renaming application resource file
A new resource called nglAppResFileName was added. This resource allows you to change the default name of the resource file. This resource can only be set during a call to Ngl.open_wks.
With this resource, you can also set two resources, appUsrDir and appFileSuffix, which allow you to change the default directory location of a resource file (the default is "./") and to change the suffix (default is ".res").
- New resources for forcing an irregular X and/or Y axis to be linear or log
Two new resources called nglXAxisType and nglYAxisType were added. These resources allow you to convert an irregular X or Y axis to linear or log. They should only be used if one of the scalar or vector field resources (sfXArray/sfYArray or vfXArray/vfYArray) have been set.
For an example, run:
pynglex irregular
- Bug fixes
- Even though the PNG driver is not fully operational, we went ahead and uncommented its code so users can test it out. You can get at the PNG driver by setting your workstation type to "png" in your call to Ngl.open_wks.
- Fixed the error checking for the special "ngl" resources, so that if any of them are misspelled, PyNGL will report it.
Version 0.1.1b1 - October 18, 2004
- Major changes
The big change in this version was the removal of the "ngl_" prefix from the names of all of the PyNGL plotting functions. This change was made due to feedback from PyNGL testers who suggested that:
import Ngl
was preferable to:
from Ngl import *
Moving to this method made us realize that "Ngl.ngl_xxxx" would be redundant, and hence the "ngl_" prefix was removed. All documentation and examples were updated to reflect this change.
- Several new "pynglex" examples added
Since the last release, several new examples were added, and documentation for pynglex was written. Three of the new examples show how to draw contours on unique grids and/or triangular meshes. The fourth example shows how to draw a nice scatter plot using colorful polymarkers.
To generate the new examples, type:
pynglex geodesic seam ctnccl scatter1
and view the resultant PostScript files. Below is a sample of the graphics created from these new examples. Click on any image for a larger view:
* The geodesic grid came to us from Dave Randall, Ross Heikes, and Todd Ringler of Colorado State University.
- Plot defaults
Some default values for plot resources were changed to help automate things for the user. For example, if the user turns on contour fill, then he/she will automatically get a labelbar on the plot. For a full list of such resources, see "Default resource settings for PyNGL graphics".
- Documentation updated
The PyNGL documentation was updated to reflect all of the above changes.
- Bug fixes
- The nglScale resource was set to True, as documented.
- If the wkOrientation resource is set, then the orientation will not get recalculated when the plot size is maximized to best fit in the output device. Note that if nglPaperOrientation is set, it will override the setting for wkOrientation.
The Ext GWT team has been hard at work on our Ext GWT 3.0 release. We are happy to announce the availability of our first developer preview release. 3.0 is a major upgrade from 2.0 with a massive amount of changes and updates. One of the primary goals of 3.0 is to bring Ext GWT more in line with current GWT best practices and designs. The release is not yet feature complete and we’re still under active development on the framework. This release provides a great opportunity for developers to see how the release is shaping up, preview what 3.0 will look like, and see how the API looks.
Our plan is to push out new preview releases quite often. Until we reach beta, the current code and public API will change. Once we have the API stable and all features are complete we will go to beta.
Download Ext GWT 3.0 Developer Preview 1
Events and Their Handlers
In Ext GWT 2.0, we had a custom event and listener API, which could be used in several ways:
Button btn = new Button(); btn.addListener(Events.Select, new Listener<ButtonEvent>() { public void handleEvent(ButtonEvent be) { //respond to click } });
or
btn.addSelectionListener(new SelectionListener<ButtonEvent>() { @Override public void componentSelected(ButtonEvent ce) { //respond to click } })
With 3.0 we have replaced our custom event system in favor of the built-in GWT Handler API. Now the Ext GWT Components accept handlers and fire events in the same way as other GWT widgets.
Button btn = new Button(); btn.addHandler(new SelectHandler() { @Override public void onSelect(SelectEvent event) { //respond to click } }, SelectEvent.getType());
or
btn.addSelectHandler(new SelectHandler() { @Override public void onSelect(SelectEvent event) { //respond to click } });
This will make adding event handler methods in UiBinder-enabled projects easier, and removes the need to send some events through an EventBus and others through Observable objects.
XTemplates
XTemplates are a powerful feature in Ext GWT. In previous releases, XTemplates were implemented using the same design and code as Ext JS. For 3.0, we have replaced the Ext JS JavaScript-based implementation with a compiled solution completely implemented in Java, in the same style as SafeHtmlTemplates.
interface TemplateTest extends XTemplates { @XTemplate("<div><span>{stock.name}</span></div>") SafeHtml renderStock(StockProxy stock); @XTemplate(source="template.html") SafeHtml renderStockExternal(StockProxy stock); }
For more detailed information on XTemplates in Ext GWT see our previous blog post.
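The XTemplates shown above are Java, but the underlying trade-off (compile-time versus runtime template processing) can be sketched in any language. Below is a loose Python analogy using the standard library's string.Template for the runtime flavor; all names here are invented for illustration:

```python
from string import Template

# Runtime approach: the template string is parsed each time it is used,
# and mistakes such as a misspelled placeholder surface only at render time.
runtime_tpl = Template("<div><span>$name</span></div>")
assert runtime_tpl.substitute(name="ACME") == "<div><span>ACME</span></div>"

# "Compiled" approach: rendering is ordinary code, so the structure is
# fixed when the function is defined rather than parsed at render time.
def render_stock(name):
    return "<div><span>%s</span></div>" % name

assert render_stock("ACME") == "<div><span>ACME</span></div>"
```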
Charting
For 3.0, we’ve introduced a completely new drawing and charting API. Like we did in Ext JS, we’re moving away from Flash to pure web standards, and the new APIs replace our Flash-based charting solution from 2.0. The new code uses SVG and VML rather than using any plugins.
Ext GWT 3.0 draws gorgeous charts using SVG, Canvas and VML, with no need for any plugins. The charting system is integrated with the new data package and fully animated in every browser.
Themes & Appearance
Currently in Ext GWT 2.0, widgets are responsible for creating their DOM structure directly. This is done either by manually creating the elements or by using an HTML fragment. The HTML for the widget is created from strings, from an XTemplate, or by assembling DOM elements. This approach has drawbacks: first, it is difficult to change the markup a widget produces, as the widget is tightly bound to its current DOM structure. Second, it is also difficult to change the style or look and feel of a widget as the CSS styles are part of the widget, and added directly to the widget’s elements.
With Ext GWT 3.0, we have introduced a new way of rendering the view and styling a widget. This approach is very flexible and has many advantages compared to the previous method. It supports swapping in different DOM structures. We have written an in-depth article on Ext GWT 3.0 Appearance Design for the latest Sencha newsletter.
Data
GWT does not support introspection at runtime: Given an object, you are not able to get and set values in a generic way. There are essentially two ways to deal with this: store all sub-properties in a generic way, or provide a mechanism that has access to getters and setters.
The first approach closely approximates how JavaScript and other dynamic languages work: the target object implements a known interface that allows values to be retrieved and set. This is the approach we took with Ext GWT with the ModelData interface.
public interface ModelData {
    <X> X get(String property);
    Map<String, Object> getProperties();
    Collection<String> getPropertyNames();
    <X> X remove(String property);
    // ... (remaining methods elided in this excerpt)
}
While this provides excellent runtime access to all properties, it can cause larger code sizes, and makes the compiler’s job more difficult in deciding what code is not necessary.
The second approach is to use GWT Deferred Binding to generate code at compile time that knows how to “talk” to a given object. In earlier releases, this was accomplished through the BeanModel API, allowing Model instances to be created from regular Java Beans. Essentially, all the generic get and set calls are delegated to the wrapped Java Bean. Bean models would be created for every type that might need them, allowing the same level of reflection as ModelData provided, but again with greater code sizes, as the compiler would not always remove code that was not required.
For Ext GWT 3.0 we decided that we wanted to “rethink” the use of our Models and find a cleaner solution. The goal of 3.0 is to support any bean-like object with get and set methods (POJO) or AutoBean (a new GWT feature) anywhere we require data in the framework. This includes our loaders, stores, data widgets, and templates. Similar to BeanModel, these methods to access properties are built at compile time, but are only created when requested by GWT.create, instead of aggressively loading data about all bean-like objects, letting the same code work with anything resembling a bean.
Properties are handled as ValueProvider objects, one per property.
Other data can be retrieved in this way as well, such as keys for models, used to track objects in a Store, via key providers extending the GWT ProvidesKey interface.
These providers can be generated together, as part of a PropertyAccess interface:
public class Person { public String getSSN() {...} public String getName() {...} public void setName(String name) {...} public List<Person> getChildren(){...} public void setChildren(List<Person>children) {...} } public interface PersonProperties extends PropertyAccess<Person> { ModelKeyProvider<Person> ssn(); ValueProvider<Person, String> name(); ValueProvider<Person, List<Person>> children(); }
As these will often be used to allow programmatic access to editing models, we have adopted some basic annotations and approaches from the GWT Editor framework. The name of each method is assumed to map to a property in the bean, and if two objects, say a ModelKeyProvider and a ValueProvider, both need access to the same property, the @Path annotation may be used to specify the property instead. As in the Editor framework, the value of the path may include '.' to indicate further sub-properties.
PropertyAccess objects can be useful to generate simple LabelProviders for ListView or ComboBox, or for interacting with a Store to queue changes that should not immediately take place. In other places, such as XTemplates or data widgets (which are still being developed), they will be generated automatically, working from cues the developer gives in code.
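The two approaches discussed in this section can be mimicked outside of Java. Below is a loose Python analogy: a dict-backed model stands in for the generic ModelData style, and small accessor objects stand in for generated ValueProviders. All names are invented for illustration:

```python
# Approach 1: generic, map-backed model (the ModelData style). Any
# property can be read or written by name, at the cost of no static checks.
model = {"name": "Alice", "ssn": "123"}
assert model["name"] == "Alice"

# Approach 2: explicit accessor objects (the ValueProvider style). One
# small object per property, created only where it is actually needed.
class ValueProvider:
    def __init__(self, prop):
        self.prop = prop

    def get_value(self, obj):
        return getattr(obj, self.prop)

    def set_value(self, obj, value):
        setattr(obj, self.prop, value)

class Person:
    def __init__(self, name):
        self.name = name

name_provider = ValueProvider("name")
p = Person("Bob")
name_provider.set_value(p, "Carol")
assert name_provider.get_value(p) == "Carol"
```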
Legacy
We know many of these changes may make updating existing applications difficult. To ease those efforts, we have a separate JAR with classes that approximate behaviors available in 2.x, but like the rest of the 3.0 release, these are under the new package structure com.sencha.gxt. This will hopefully ease porting efforts, as the two sets of classes can co-exist.
The legacy JAR currently contains the basic ModelData interface and a ValueProvider implementation that can easily allow Stores and Data Widgets to read and write those objects without the need for PropertyAccess implementations. The Javascript-based XTemplate can also be found there, building Strings instead of SafeHtml.
Preview Notes and Gotchas
Since it’s just a preview, not all examples are working at 100%. In addition, the current code has not been fully tested in all browsers and operating systems. This holds true especially for Internet Explorer. We will be posting the 3.0 Explorer Demo in our next preview release. Of course by our final release we’ll be fixing all of these issues.
Also, for the first preview, we are asking for users to hold off on reporting bugs. If you have design comments or questions feel free to ask them in our 3.0 forums.
Next Drop
With 3.0, we are upgrading the layout engine for better performance and to support UIBinder. The new code is not in this drop but will be included in Dev Preview 2.
We have also started the work on implementing the Appearance design as described earlier. We are in the process of completing this work, and will also soon implement the Gray theme. While in this transition phase, we will have both our old resources (gxt-all.css) and our new resources contained within our theme modules.
Future Drops
There are several widgets and features not included in the release. This includes: Grid, TreeGrid, Fields, and others. These will be included in future releases.
Summary
We are excited about the preview of Ext GWT 3.0 and hope you will be as well. As a reminder, this is the first developer preview and is not ready for prime time. We would recommend using these previews to get a feel of the changes and to get an idea of how we are looking forward. We would not recommend starting to use the library at this time for any actual development.
Download Ext GWT 3.0 Developer Preview 1 today.
Looks cool!
Finally :))))) Downloading and trying out now
Finally :)
No we are waiting for Dev 2 Preview with UiBinder support :)
There you go. now you guyz deserve some bravos :)
Looking really promising
Looking forward for this major release.
Can you provide more details regarding the planed timeline for the beta and the final release.
About the BaseModel and ModelData
Suppose I have a business desktop like application with many windows. By many I mean several hundreds of them.
with GXT v2.x I could make one model class and one base window and generate all of the windows i needed
If I am right from what I have read so far, you suggest using several hundreds of model classes with implemented getters and setters and also several hundreds of interfaces?
I download and try it.
My challenge here is to show a line chart and provide an option to save the chart as an image.
I copied the source of Linechart example and want to implement this feature.
Any suggestions?
Thanks in advance.
@Raivis There is no reason you cannot continue to use models as Map, with lots of properties keyed to strings. Included in the legacy jar is the ModelData interface, and we will be including the base classes that implement it as well. While some new features, such as GWT’s editor framework or the compile-time XTemplates, will not work correctly with runtime models, the v2 XTemplate is also included in the legacy jar, so run-time templates can still be used.
@Rogerio
I dont think that s supported
@Alan
Thanks for your reply.
I’ve been searching on the web for any solution to save a widget as an image without success.
Anyway, I’ll keeping looking.
@Rogerio @Alain is correct. Saving charts to images is not supported.
@Rogerio
Unfortunately GXT 3.xxx will also be a big problem for us because we will not be able to export the charts. In most of our applications users are generating reports and we were able to export the charts (Flash based) to PDF, PNG, JPG, etc., on the client. With the new release I don’t know how we will do that. I might have to write a custom chart extension for GXT based on Flash once I get my head around how GXT 3 works.
If you are working with GXT2.xxx i will soon release a solution that enables saving charts to different formats.
Check a demo here :
@Rogerio @Alain For the SVG engine you can save the DOM of the chart surface to a .svg file, which in turn can be converted to PDF, PNG, JPG, etc…
@Brendan
I m Aware of that. But there are some problems with that approach
1) Not cross browser
2) Cant be done on the client
So not suitable for us.
I hope the charts will support events & tooltips in the coming releases. Any plan of supporting stacked bars?
@alain Have you looked at phantomjs? Allows you to run a headless webkit on the server. create pdfs, images, etc.
@Henry
We dont want to hit there server for that.
It has to be done on the client for us. That s why phantomjs is also not an option.
@Alain: You can run PhantomJS on the client-side, there are executables for Windows, Mac OS X, and Linux. But if you want pure in-browser solution, then PhantomJS is not for you.
Hi @Brendan,
Do you have an implementation of that solution?
@Ariya, @Henry
We will go with Flash for now. Easier for us to integrate with GXT 2.xxx
For 3.xxx we will see how to do that if we ever switch.
@Alain
I’m trying to implement “save as image locally” in a gwt project but it’s not working fine.
Any tips?
thanks in advance.
@Rogerio
Are you using GXT 2.xxx or 3.xx ?
If you are using 2.xxx i might have a solution for you.
@Alain
I’m using GXT 2.xxx.
I downloaded the 3.xx only to test.
I’ll apreciate if you could share your solution.
Allright. I will bring our solution in a form where i can share that with you then i ll give you the code.
This might take one day or two though. I hope this is ok for you.
@Alain,
that will be a great help.
Thank you very much.
@Sam
Yes we plan to implement all the things you mentioned.
@Rogerio
Please have a look at:
and let me know if that s what you are looking for. If yes i ll be able to provide you the code tomorrow of the day after tomorrow
@Alain
Yes Alain. That will be a great feature on my project.
I’ll wait your message.
Thanks man.
the whole life have to learn endless,although now i cannt understand what you introduce,i believe i will get it in the future
@Lily
Was your comment for me ?
If yes what it is that you dont understand ? I just introduced a way to save GXT Chart on the client(no server and no executable)
@Rogerio
Do you have an E-Mail where i can reach you ?
What’s the timeframe for the DP2 release? I am anxious to try UIBinder with GXT 3. Also, why not rename Ext GWT to GXT for 3.0 once and for all?
And I forgot to say that this is very very cool! I guess the multiple column sort in Grid is taking a backseat for the time being or is that still in the release?
@Alain,
Hi Alain, please send message to rogerio[dot]valente[at]gmail[dot]com
Thanks man.
It’s interesting to see that this support Canvas, quote: “Ext GWT 3.0 draws gorgeous charts using SVG, Canvas and VML”
In preview release of Ext JS 4, Canvas was supported then just before release, Canvas code was removed.
@Sebastien
Actually that is a misprint. Currently only SVG and VML are supported. We may eventually add a Canvas engine.
@Colin Alworth
Yes, I understand there’s a legacy package. But for how long will it be supported by GXT? I am afraid that after some versions of GXT I will have to rewrite my one Data class and one generator class to hundreds of new data classes and generator classes.
@raivis
The legacy jar is made to work with the current Ext GWT API, which is not likely to change with respect to data models soon.
Take a look at the source of the ValueProvider instance that can read into a ModelData – it can be easily adapted to work with anything backed by Map
While this change is annoying to work with, closing this door has opened many others. Without this change, support for RequestFactory or AutoBeans are effectively impossible, and by making this change, we no longer need to support the BeanModel generation, but can use this more efficient method of reading properties without needing this pseudo-reflection to get access to arbitrary runtime methods.
Drop by #extgwt on freenode or file a ticket if you’d like to discuss this more – perhaps we’ve missed an important usecase or maybe we can explain better the direction we are headed.
Finally. Can’t wait to see what your new child can do :)
@Colin Alworth
I’ll wait until some data widgets will be available and see how to code tabular data view. I’ll fill ticket if I will still have questions after that.
Would PropertyAccess still be implemented Observable pattern (like ChangeEventSource with Model)?
@William Bonawentura
We don’t yet have such an API for this, as that was mainly consumed by the Bindings code, which is being replaced by GWT’s own editor framework. Such an interface can be defined again, but without the Map
@Alain
Im interested in your “save localy” solution.
I’ll apreciate if you could share your solution
Thanks :)
Adrien
Why was the Maven paragraph removed from the article? Is Maven no longer supported?
|
https://www.sencha.com/blog/ext-gwt-3-dev-preview-1/
|
CC-MAIN-2016-07
|
refinedweb
| 2,973
| 65.93
|
This article covers the Tkinter Toplevel widget
Tkinter works with a hierarchical system, where there is one root window from which all other widgets and windows expand. Calling the
Tk() function initializes the whole Tkinter application.
Often while creating a GUI, you wish to have more than just one window. Instead of calling the Tk() function again (which is the incorrect way) you should use the Tkinter Toplevel widget instead.
Differences
Calling the
Tk() function creates a whole Tkinter instance, while calling the
Toplevel() function only creates a window under the root Tkinter instance.
Destroying the
Tk() function instance will destroy the whole GUI, whereas destroying the
Toplevel() function only destroys that window and its child widgets, but not the whole program.
Toplevel syntax
window = Toplevel(options.....)
Toplevel Options
List of all relevant options available for the Toplevel widget.
Toplevel Example
This is a simple Toplevel function example simply to demonstrate how it works.
from tkinter import *

root = Tk()
window = Toplevel()

root.mainloop()
This isn’t a very practical approach though, so we’ll discuss a more real life scenario in the next example.
Toplevel Example 2
In this example we’ll show you another way calling a new window. In most software, you start off with one window and can spawn multiple windows such as a “Settings Window”. This is in contrast to the previous example where we started directly with 2 windows.
The code below creates a button, that when clicked calls a function that creates a new Toplevel window with a widget in it. You might find this approach more suitable for your GUI.
from tkinter import *

def NewWindow():
    window = Toplevel()
    window.geometry('150x150')
    newlabel = Label(window, text = "Settings Window")
    newlabel.pack()

root = Tk()
root.geometry('200x200')
myframe = Frame(root)
myframe.pack()
mybutton = Button(myframe, text = "Settings", command = NewWindow)
mybutton.pack(pady = 10)

root.mainloop()
Only use multiple windows when it makes sense to have more than one. It makes sense to have a separate window dedicated to settings, especially in large software with dozens of different settings.
Toplevel Methods
Another benefit of using Toplevel is the dozen different methods available to it that provide extra functionality.
Amongst the most useful of these methods are withdraw() and deiconify(), which can be used to hide and re-display the window respectively. They are useful if you want to make the window disappear without destroying it. There are also the resizable(), maxsize(), minsize() and title() methods, explained in the table below.
Most of these methods are self-explanatory enough that you shouldn’t require any explanation beyond what is written here. The rest will be covered in another article soon, or you can always google it if you ever need it.
This marks the end of the Python Tkinter Toplevel article. Any suggestions or contributions for CodersLegacy are more than welcome. Relevant questions regarding the article material can be asked in the comments section below.
To learn about other awesome widgets in Tkinter, follow this link!
|
https://coderslegacy.com/python/tkinter-toplevel/
|
CC-MAIN-2021-21
|
refinedweb
| 498
| 57.67
|
Episode #18: Python 3 has some amazing types and you can now constructively insult your shell!
- Nice use of animated gif to showcase what it does.
- It’s a replacement for
dir()to use interactively.
pip install pdir2, but
import pdir.
pdir(something)gives you all that
dir()does, but splits things into categories like exceptions, functions, attributes, …
- each item on one line, and includes the first line of the docstring for the item.
- Also, uses colors nicely. (Except I need to run it in a shell with non-black background on my mac or I can’t see the docstring output. )
- Hugely useful if you use
dir()interactively.
- 😞 Readme is in markdown, pypi still can’t handle that well. Maybe a listener can do a pull request on it to spiff up the pypi page:
- Consider pairing this with ptpython
#2 Michael: How to recover lost Python source code if it's still resident in-memory
- Ooops: I screwed up using git ("git checkout --" on the wrong file) and managed to delete the code I had just written, but it was still running in memory.
- Uses
- Tools for injecting code into running Python processes
- A Python cross-version decompiler
- Main take-away: Really cool to attach pyrasite and explore a running Python process for many reasons.
#3 Brian: New Interesting Data Types in Python 3
types.MappingProxyType- acts like a dictionary, but it’s read only. Convenient for exposing a mutable data structure through an API and making it less convenient for the client code to modify things they aren’t supposed to.
types.SimpleNamespace- kind of a general purpose class with attributes defined by the constructor parameters you pass in. May be useful in places where you would use
collections.namedtuple.
typing.NamedTuple- You define a class derived from
NamedTuple, and define attributes with types and optionally a default value. Constructor automatically assigns the attributes in order.
- These types help you to create quick, small classes/types, allowing concise and readable code.
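A quick sketch of the three types in action:

```python
from types import MappingProxyType, SimpleNamespace
from typing import NamedTuple

# Read-only view over a mutable dict
settings = {"theme": "dark", "font_size": 12}
readonly = MappingProxyType(settings)
# readonly["theme"] = "light" would raise TypeError,
# but changes to the underlying dict show through:
settings["font_size"] = 14
print(readonly["font_size"])  # 14

# Quick ad-hoc object with attributes set by the constructor
point = SimpleNamespace(x=1, y=2)
print(point.x + point.y)  # 3

# Typed named tuple with a default value
class Employee(NamedTuple):
    name: str
    department: str = "Engineering"

e = Employee("Ada")
print(e.department)  # Engineering
```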
- Instant coding and shell answers via the command line
- Examples
- howdoi print stack trace python
- howdoi connect sqlalchemy
- howdoi python save dict
- howdoi debug python
- howdoi install ohmyzsh
- howdoi change path macos
- Notable related
#5 Brian: A python project from a listener of the show converts to asyncio and speeds up by 150x
- Project is a Python interface to a commercial cloud based CRM called Emarsys. But the specifics are kinda beside the point.
- Comment from episode 17: From Diego Mora Cespedes
_Another awesome episode, thanks Michael and Brian!_
_About asyncio being awesome, I had my own experience. I had to send information about hundreds of thousands of users to a CRM through their public API daily. Without asyncio, it would have taken 50 hours daily, which we all know is just not possible! After developing a sync (using requests) and async (using aiohttp) client for their API, I managed to send the information about the users to the CRM asynchronously, and it takes... ... ... wait for it... ... ... 20 minutes!_
_So that's 150 times faster than without asyncio!_
_Anyway, if you wanna take a look at the client I open sourced, here is the link:
_Oh yeah, fun fact: the first time I implemented the async functionality, I made too many API calls at the same time, my mac crashed and I think I DDoSed the CRM. Now I use semaphores, which allow you to limit the number of tasks you launch at the same time. So much Python awesomeness!_
- However, I can’t find where in this code he uses semaphores, so here’s an example that does use semaphores to limit how many connections to make at a time.
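A minimal sketch of that semaphore pattern, using asyncio.sleep as a stand-in for the real aiohttp requests:

```python
import asyncio

async def fetch(i, sem):
    # the semaphore caps how many coroutines run this section at once
    async with sem:
        await asyncio.sleep(0.01)  # stand-in for an HTTP call
        return i * 2

async def main():
    sem = asyncio.Semaphore(3)  # at most 3 concurrent "requests"
    return await asyncio.gather(*(fetch(i, sem) for i in range(10)))

results = asyncio.run(main())
print(results)  # [0, 2, 4, ..., 18], gather preserves order
```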
#6 Michael: cookiecutter-pyramid-talk-python-starter
An opinionated Cookiecutter template for creating Pyramid web applications starting way further down the development chain. This cookiecutter template will create a new Pyramid web application with email, sqlalchemy, rollbar, and way more integrated.
- Factored and organized: This code was generalized out of a large, professional web application
- Master layout template: Comes pre-configured with a master layout template. Navigation, CSS, JS, etc is factored into a single template file and is reused across all views
- Chameleon language: This template uses the chameleon template language (the cleanest template language for Python - we did say opinionated right?)
- Pyramid Handlers: Code is factored into handler / controller classes. You have the full power of object-oriented programming immediately available
- Secure user management: The app comes with full user management. Users can register, log in and out, and reset passwords. We use the passlib package for secure user storage using best practices SQLAlchemy data access: Comes with SQLAlchemy ORM preconfigured using sqlite
- Bootstrap and modern design: As you can see from the screenshot below, the app comes with bootstrap and fontawesome. It uses a free, open-source theme and images used under CC-Attribution.
- Logging with LogBook: A logging system for Python that replaces the standard library’s logging module. It was designed with both complex and simple applications in mind and the idea to make logging fun
- Runtime error monitoring with Rollbar: Rollbar adds runtime notifications and detailed error tracking and it comes pre-configured in this template
- Mailing list integration: Comes with Mailchimp integration. Just enter your API key and list ID to start collecting and managing users for your mailing list
- Outbound email with templates: The app has a set of static HTML files with placeholders that are loaded by the outbound email system and populated with user data
- Bower static resource management: Most templates are based on out-of-date files (css templates, js, etc.). This template is uses bower for it's static files. This means a single CLI command will get you the latest everything.
- Fast pages that are never stale: Every static resource is referenced with our own cache busting system. This means you can use extremely aggressive caching for performance on static files yet they immediately invalidate upon changes
- Comes with an entire online course: This template is built from the final project in Python for Entrepreneurs, a 20 hour course on building professional web apps in Python and Pyramid from Talk Python Training
|
https://pythonbytes.fm/episodes/show/18/python-3-has-some-amazing-types-and-you-can-now-constructively-insult-your-shell
|
CC-MAIN-2018-30
|
refinedweb
| 1,035
| 58.92
|
I.
As I mentioned a while back, we’re starting a new series on the Coverity Development Testing blog to share knowledge of interesting bugs, language quirks and so on. The first episode of Ask The Bug Guys is posted now. In this episode I take on two questions: first, why can’t you declare a const field of type parameter type? And second, why does overload resolution sometimes choose an unexpected method when an extension method on strings is available? Thanks to contributors Marcin and Adam for two interesting questions.
If you have questions about strange behavior in your C, C++, C# or Java programs, please email your questions (along with a concise reproducer of the problem) to
TheBugGuys@coverity.com. We’ll be posting our next set of responses in about two weeks.
(And speaking of Coverity, I was pleased to see this morning that Coverity is #73 on Outside Magazine’s “100 Best Places to Work” list. Indeed, we computer types do occasionally go outside! I never thought I’d get a photo of my hat on Outside Magazine’s web site, but stranger things have happened. You can see my Tilley Endurables T3G hat
in the background there.)
And with that, I’m taking a vacation from blogging for a month. I’ll be relaxing on the beaver-shark infested shores of Lake Huron for two weeks in August and then busy with other projects for a couple of weeks after that. We’ll pick up in September. Have a lovely rest of your summer and we’ll see you for more fabulous adventures in coding in the autumn.[1. Substitute winter and spring for readers in the southern hemisphere of course.]
Regarding the first question, it looks to me like it should occasion a review of the whole spec for the terms “reference type” and “reference-type” both, and work out whether they should be the “reference-type” from the grammar or the “reference type” that can be either a “reference-type” or a type-parameter known to be a reference type. Then the term “reference-type” would clearly exclude type parameters, which is consistent with the grammar.
Is that what you meant by clarifying the spec?
Also, what it the point of allowing reference types to be const, if the only allowed value for most such types is null? Is it a wart that opens the door to const strings, without special-casing that bit of the grammar to be specific to strings?
That is what I meant by clarifying the spec, yes.
As to the usefulness of the feature: were type parameter consts useful, one imagines that the bug would have been noticed at some point closer to 2005 than to now. 🙂 As to whether reference type consts being restricted to null or strings is useful, it is occasionally useful to be able to say something like
const LinkedListNode EmptyList = null;, but it’s not exactly a feature one gets that enthusiastic about.
Default parameter values for reference types are required to be constants, and even if `null` is the only available constant that’s better than not having any available.
A month is too long for readers ..
by the way, there’re four hats in the photo, three are in the background .. I guess this is your =>
googled for it, found a similar on Amazon.com, and it sells for $74 .. an expansive(expensive) hat!
Indeed, the Tilley Endurables T3G hat is the finest sailing hat in the world; the green underbrim reduces glare, it floats, it ties on, it repels rain, it has a pocket in the crown, it will be replaced for half price if lost and free of charge if it wears out. (I have worn many out, but it took some doing; I’ve been wearing this model of hat for over twenty years and I am extremely hard on my hats.)
They make many other fine travel and adventure clothing options as well. I’ve got several pairs of their shorts that I quite like.
What a magical hat!
Hi Eric,
Where is the RSS feed for the Coverity blog? I can’t see a link anywhere.
Pingback: The Morning Brew - Chris Alcock » The Morning Brew #1412
Speaking of extension methods (warning: this is a tangent), I have ALWAYS wondered why you can’t create static extension methods for static classes.
Case in point: The File.Delete() method is a static method on a static class that is quite useful for deleting files. Except it has some annoying “features”, such as throwing an exception if the file doesn’t exist (really?) and of course failing on read-only files (makes sense). I want to create a ForceDelete() method that I can access via File.ForceDelete() — seems reasonable — which succeeds if the file doesn’t exist (after all, it’s not there either way, right?) and which also works even if the read-only flag is set. But instead I’m forced to create my own static class (“FileEx”) and put the method there. Bleh.
Is there some good, sound, fundamental reason for not allowing static extension methods on static classes that I’m missing? Just curious.
One of the big advantages of extension methods is that they allow the operand of even a static method to be specified before anything else. Such advantage would be unavailable with “static extension methods”. More useful, I would think, would be allowing a scope to start with a directive like `using class Math;`, and allow static members of that class to be used as though they were in the current scope (so one could say `Sqrt` rather than `Math.Sqrt`). If such things could be limited to the scopes where they were needed, namespace clutter would be avoided in places where such declarations weren’t helpful. but visual clutter could be avoided in places where they were.
|
https://ericlippert.com/2013/08/01/bug-guys-string-extensions-and-const-fields/
|
CC-MAIN-2017-30
|
refinedweb
| 986
| 69.92
|
Uncyclopedia:Pee Review/Down With This Sort Of Thing!
From Uncyclopedia, the content-free encyclopedia
edit Down With This Sort Of Thing! (Rw)
I picked this up from requested rewrites, and thought I'd have a go at it. The idea is to get the feel of a protest organised by a polite, middle class English type whose idea of rebellion is not wearing a tie to work. Ive got so far, and while I do have ideas to finish it off, I want to know if it's worth my while continuing or if the direction I'm taking is just not funny enough. Compare the current article to see where I've taken it from. --Sir Under User (Hi, How Are You?) VFH KUN 11:06, 23 August 2007 (UTC)
Great job Under user; talk about a foundation of distinction. I like the set-up and the phrasing very much. However, despite it's encompassed humor, the piece as a whole comes off as a bit of a one-trick pony. Don't be discouraged by this; it isn't missing much. The skeleton is just waiting for a little fleshing. :)
- Expansion - As cockhead as it is to reference one of my own pieces, I will say that a work like this could benefit from an idea called the "masked encyclopedic format" (shown in Mr. Kearsy, and also excellently displayed in Prettiestpretty's Gossip). This is the idea that, while you do NOT want to ruin the article with a the constrictions of a mock-encyclopedic structure, merely mirroring the structure through prose can accomplish the same goal; a beginning, middle and end that neatly wrap the article's concept. This sort of semi-guided format can dramatically improve a pieces flow. Now, you may not want to go as far as restructuring the piece: in that case, I'd suggest a namespace move. I could definitely see this as UnBooks: A Declaration Against This Sort of Thing! or something along those lines. In that case I'd probably have different formatting suggestions, but either change will significantly improve the piece. Such expansion will also probably give better platforms for varied humor, which would boost the overall funniness of the work and push it out of one-trick status.
- Pictures - They work. As decent as they are, the piece calls for something more specific. Something like an old gentleman with a scowl on his face as if to say "Most irregular!!" As is you could probably get away with what is there, but I think something more stand-out could add a layer of comedy that is currently lacking as such.
Keep me posted on your progress, and let me know if you'd like any further assistance. The grunt work is out of the way; put in the effort on the tweaks and I'm almost certain a VFH run is in your future (and shall be supported firmly by your's truly). :D --THINKER 07:51, 25 August 2007 (UTC)
|
http://uncyclopedia.wikia.com/wiki/Uncyclopedia:Pee_Review/Down_With_This_Sort_Of_Thing!
|
CC-MAIN-2015-32
|
refinedweb
| 504
| 71.04
|
Sentiment Analysis Using Scikit-learn
Firstly install the
pandas,
numpy,
scikit-learn library.
!pip install pandas !pip install numpy !pip install scikit-learn
Let’s Get Started
import pandas as pd import numpy as np
git clone is a
Git command line utility which is used to target an existing repository and create a clone, or copy of the target repository.
!git clone
Cloning into 'IMDB-Movie-Reviews-Large-Dataset-50k'...
Reading an Excel file into a pandas DataFrame
df = pd.read_excel('IMDB-Movie-Reviews-Large-Dataset-50k/train.xlsx')
TF-IDF
# displaying top 5 rows of our dataset df.head()
Next, define a get_clean() helper that lower-cases the text and removes emails, URLs, HTML tags, special characters and repeated characters, then apply it to the Reviews column:

import re

def get_clean(x):
    x = str(x).lower().replace('\\', '').replace('_', ' ')
    x = re.sub(r'\S*@\S*\s?', '', x)                  # remove emails
    x = re.sub(r'(http|https|ftp|ssh)://\S+', '', x)  # remove urls
    x = re.sub(r'<[^>]*>', '', x)                     # remove html tags
    x = re.sub(r'[^a-z0-9 ]', ' ', x)                 # remove special characters
    x = re.sub(r'(.)\1{2,}', r'\1', x)                # e.g. "loooove" -> "love"
    return x

df['Reviews'] = df['Reviews'].apply(lambda x: get_clean(x))
df.head()
from sklearn.feature_extraction.text import TfidfVectorizer

tfidf = TfidfVectorizer(max_features=5000)
X = df['Reviews']
y = df['Sentiment']
X = tfidf.fit_transform(X)
X
<25000x5000 sparse matrix of type '<class 'numpy.float64'>' with 2843804 stored elements in Compressed Sparse Row format>
Here we split the dataset into training and test sets: 20% for testing and 80% for training.
from sklearn.model_selection import train_test_split

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size = 0.2, random_state = 0)
Support Vector Machine
Definition
SVM is a supervised machine learning algorithm that can be used for classification or regression problems. It uses a technique called the kernel trick to transform your data and then based on these transformations it finds an optimal boundary between the possible outputs.
The objective of a
Linear SVC (Support Vector Classifier) is to fit the data you provide, returning a “best fit” hyperplane that divides, or categorizes your data. From there, after getting the hyperplane, you can then feed some features to your classifier to see what the “predicted” class is.
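As a self-contained illustration of the same TF-IDF + LinearSVC pipeline on a tiny made-up corpus (the sentences and labels below are invented for the example, not taken from the IMDB data):

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.svm import LinearSVC

# invented toy corpus standing in for the IMDB reviews
texts = ["great movie, loved it",
         "wonderful acting, great fun",
         "terrible plot, awful film",
         "boring and awful, hated it"]
labels = ["pos", "pos", "neg", "neg"]

vec = TfidfVectorizer()
X = vec.fit_transform(texts)  # sparse TF-IDF feature matrix

clf = LinearSVC()
clf.fit(X, labels)            # learn a separating hyperplane

# classify a new, unseen sentence
print(clf.predict(vec.transform(["awful boring movie"])))
```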
from sklearn.svm import LinearSVC
from sklearn.metrics import classification_report

clf = LinearSVC()
clf.fit(X_train, y_train)
y_pred = clf.predict(X_test)
print(classification_report(y_test, y_pred))

              precision    recall  f1-score   support

         neg       0.87      0.87      0.87      2480
         pos       0.87      0.88      0.88      2520

    accuracy                           0.87      5000
   macro avg       0.87      0.87      0.87      5000
weighted avg       0.87      0.87      0.87      5000
x = 'this movie is really good. thanks a lot for making it'
x = get_clean(x)
vec = tfidf.transform([x])
vec.shape
(1, 5000)
clf.predict(vec)
array(['pos'], dtype=object)
Python converts the byte stream (generated through pickling) back into Python objects by a process called
unpickling.
import pickle pickle.dump(clf, open('model', 'wb')) pickle.dump(tfidf, open('tfidf', 'wb'))
Conclusions:
- Firstly, we loaded the IMDB movie reviews dataset using a pandas DataFrame.
- Then we defined the get_clean() function and removed unwanted emails, URLs, HTML tags and special characters.
- We converted the text into vectors with the help of the TF-IDF vectorizer.
- After that, we used a linear support vector machine classifier algorithm.
- We fit the model with a LinearSVC classifier for binary classification and predicted the sentiment, i.e. positive or negative, on real data.
- Lastly, we dumped the clf and tfidf models with the help of the pickle library. Pickling is the process of converting a Python object into a byte stream to store it in a file/database, maintain program state across sessions, or transport data over the network.
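The dump/load round trip can be sketched with a stand-in object (the dict below is a hypothetical placeholder; the real clf and tfidf objects pickle the same way):

```python
import pickle

# hypothetical stand-in for a trained model object
model = {"classes": ["neg", "pos"], "weights": [0.1, -0.2]}

data = pickle.dumps(model)      # pickling: Python object -> byte stream
restored = pickle.loads(data)   # unpickling: byte stream -> Python object

assert restored == model
print(type(data))  # <class 'bytes'>
```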
|
https://kgptalkie.com/sentiment-analysis-using-scikit-learn/
|
CC-MAIN-2021-17
|
refinedweb
| 517
| 60.21
|
U++ SQL
Basic Use and Description
For this section, the example used will be oriented to PostgreSQL use. See the SQL example packages provided in the Upp examples for using MySQL and SQLite as well.
The Schema description file (.sch file)
In each schema description file, you describe the table and column layout of your database.
Postgresql Example ("person_db.sch"):
TABLE_ (PERSON)
SERIAL_ (PERSON_ID) PRIMARY_KEY
STRING_ (NAME, 25)
DATE_ (BIRTH_DATE)
INT_ (NUM_CHILDREN)
DATE_ (DATE_ADDED) SQLDEFAULT(CURRENT_DATE)
END_TABLE
TABLE_ (EMPLOYEE)
SERIAL_ (EMPLOYEE_ID) PRIMARY_KEY
STRING_ (DEPARTMENT, 50)
STRING_ (LOCATION, 50)
DATE_ (START_DATE)
BOOL_ (IS_SUPERVISOR)
TIME_ (WORKDAY_START)
TIME_ (WORKDAY_END)
INT64 (PERSON_ID) REFERENCES(PERSON.PERSON_ID)
END_TABLE
In this schema, we have described a 'person' table and an 'employee' table, with the foreign key 1 to 1 relationship "an employee is a person".
The different types mentioned in this example map to SQL types. More information about types should be referenced by looking at the source code header files for the database type. In this example, all of the types referenced are found defined in the file "PostgreSQLSchema.h" from the "PostgreSQL" U++ package.
Each type declaration has 2 variants; one with an underscore "_" and one without. When an underscore is used, an SqlId object is automatically created for use as a variable in your source files. When not used, you must manually define the SqlID object in your source. Reference the SqlId objects section below for further explanation.
Note: if you use a name more than once, you should use an underscore only the first time you declare the name, otherwise you will get "already defined" compilation errors. This is shown in the above example where the column name "PERSON_ID" is used twice; there is an underscore only the first time it is used.
Source Files (for PostgreSQL example)
Header file includes/defines ("person.hpp"):
#include <PostgreSQL/PostgreSQL.h>
#define SCHEMADIALECT <PostgreSQL/PostgreSQLSchema.h>
#define MODEL <MyPackage/person_db.sch>
#include "Sql/sch_header.h"
Source file includes ("person.cpp"):
#include "person.hpp"
#include <Sql/sch_schema.h>
#include <Sql/sch_source.h>
Session objects:
PostgreSQLSession m_session;
The session object is used to control the connection and session information. Each database dialect will have its own session object to use.
Database connection using session:
bool good_conn = m_session.Open("host=localhost dbname=persons user=user1 password=pass1")
The Open() function returns a true or false value depending on success of connecting to database.
SqlId objects:
SqlId objects aid the formation of sql statements by mapping database field/column names to local variables.
SqlId all("*");
SqlId person_name("NAME");
We will now be able to use "all" and "person_name" in our SQL CRUD statements in our code.
As mentioned previously, all of the declarations in our schema file that end in an underscore will automatically be declared as SqlId variables we can access in our source code.
Example use of SqlId variables:
sql * Insert(PERSON)(NAME, "John Smith") (BIRTH_DATE, Date(1980,8,20)) (NUM_CHILDREN, 1)
The variables PERSON, NAME, BIRTH_DATE, NUM_CHILDREN were available to us even though we didn't define them in our source. We could have also used the variable person_name instead of NAME as we defined it ourselves.
Sql objects
Sql objects are used for CRUD operations on the database; they operate on a session.
Sql sql(m_session); //define Sql object to act on Session object m_session.
Queries
Select example:
sql * Select(all).From(PERSON).Where(person_name == "John Smith");
Note: Here we can use "all" because we defined it as an SqlId variable above (same goes for "person_name").
Exceptions vs Checking for errors.
There are 2 ways to make SQL statements.
1. Manual error checking.
Manual error checking uses the asterisk ("*") operator when writing SQL statements.
sql * Select(all).From(PERSON).Where(NAME == "John Smith");
if(sql.IsError()){
Cout() << m_session.GetErrorCodeString() << "\n";
}
2. Exception handling.
Specify exception handling by using the ampersand ("&") operator when writing SQL statements.
try{
sql & Select(all).From(PERSON).Where(NAME == "John Smith");
}catch(SqlExc& err){
Cout() << err << "\n";
// Or we can get the error from the session too...
}
*Remember, SqlExc is a subclass of Exc, which is a subclass of String, so it can be used as a string to get its error.
Getting Values from Sql Queries
The Fetch() method will fetch the next row resulting from the query into the Sql object and return true. If there are no more rows to fetch, it will return false.
while(sql.Fetch()){
Cout() << Format("Row: %s %s %s \n", \
AsString(sql[NAME]), \
AsString(sql[BIRTH_DATE]), \
AsString(sql[NUM_CHILDREN]));
}
You can reference each row by SqlId as above, or by integer array index (Ie. "sql[0]").
Notice the use of AsString() here. sql[id] returns a U++ Value type object. You can then convert that Value type to its appropriate type afterward.
Last edit by cxl on 04/14/2014.
|
https://www.ultimatepp.org/srcdoc$Sql$BasicUse$en-us.html
|
CC-MAIN-2017-39
|
refinedweb
| 801
| 58.69
|
Cascade Classifier HAAR / LBP Advice
Hi,
I am using OpenCV and python to train HAAR and LBP classifiers to detect white blood cells in video frames. Since the problem is essentially 2D it should be easier than developing other object classifiers and there is great consistency between video frames.
So far I have been using this tutorial:...
This is an example frame from the video, where I am trying to detect the smaller bright objects:
Positive Images: -> number=60 -> filetype=JPG -> width = 50 -> height = 80
->
->
-> etc
Negative Images: -> number= 600 -> filetype=JPG -> width = 50 -> height = 80 ->
->
->
-> etc
N.B. negative image were extracted as random boxes throughout all frames in the video, I then simply deleted any that I considered contained a cell i.e. a positive image.
Having set-up the images for the problem I proceed to run the classifier following the instructions on coding robin:
find ./positive_images -iname "*.jpg" > positives.txt find ./negative_images -iname "*.jpg" > negatives.txt perl bin/createsamples.pl positives.txt negatives.txt samples 1500 "opencv_createsamples -bgcolor 0 -bgthresh 0 -maxxangle 0.1 -maxyangle 0.1 -maxzangle 0.1 -maxidev 40 -w 50 -h 80" find ./samples -name '*.vec' > samples.txt ./mergevec samples.txt samples.vec opencv_traincascade -data classifier -vec samples.vec -bg negatives.txt\ -numStages 20 -minHitRate 0.999 -maxFalseAlarmRate 0.5 -numPos 60\ -numNeg 600 -w 50 -h 80 -mode ALL -precalcValBufSize 16384\ -precalcIdxBufSize 16384
This throws an error:
Train dataset for temp stage can not be filled. Branch training terminated.
But if I try with different parameters the file 'cascade.xml' is generated, using both HAAR and LBP, changing the minHitRate and maxFalseAlarmRate.
To test the classifier on my image I have a python script
import cv2 imagePath = "./examples/150224_Luc_1_MMImages_1_0001.png" cascPath = "../classifier/cascade.xml" leukocyteCascade = cv2.CascadeClassifier(cascPath) image = cv2.imread(imagePath) gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY) leukocytes = leukocyteCascade.detectMultiScale( gray, scaleFactor=1.2, minNeighbors=5, minSize=(30, 70), maxSize=(60, 90), flags = cv2.cv.CV_HAAR_SCALE_IMAGE ) print "Found {0} leukocytes!".format(len(leukocytes)) # Draw a rectangle around the leukocytes for (x, y, w, h) in leukocytes: cv2.rectangle(image, (x, y), (x+w, y+h), (0, 255, 0), 2) cv2.imwrite('output_frame.png',image)
This is not finding the objects I want, when I have run it with different parameters sometimes it has found 67 objects other times 0, but not the ones that I am trying to detect. Can anyone help me adjust the code to find the objects correctly. Many thanks
I would suggest you to start by removing the noise in your image (where you detect); then try another approach, like threshold (binary image) -> find contours -> fit ellipse -> filter long ellipses (ellipses with high elongation). Doing a cascade classifier combined with another classifier for eliminating the false positives, seems to be too complex for your case (IMHO)
@thdrksdfthmn you suggested the best and easy solution (IMHO)
Thanks, the thresholding approach was the first method I tried, but the results were not accurate enough.
I am trying an experimental approach. What is the aimed accuracy?
The aimed accuracy is 90 to 95%, but the more accurate the better, obviously. Actually I have had good results with a classifier, but I am now trying to better optimise the training. I have reduced the size of the positive and negative images from 100 x 100 to 50 x 80, and now I cannot get as good a result with Haar features.
What was the accuracy with the threshold approach, and what is it now? Have you asked for ideas on the first approach? Maybe you can improve it very easily.
@WillyWonka1964 I want to share my approach as draft code, if you permit.
Yes, please any help is appreciated
https://answers.opencv.org/question/65284/cascade-classifier-haar-lbp-advice/?answer=65335
MediaWiki::DumpFile::Pages - Process an XML dump file of pages from a MediaWiki instance
use MediaWiki::DumpFile::Pages;

#dump files up to version 0.5 are tested
$input = 'file-name.xml';

#many supported compression formats
$input = 'file-name.xml.bz2';
$input = 'file-name.xml.gz';
$input = \*FH;

$pages = MediaWiki::DumpFile::Pages->new($input);

#default values
%opts = (
    input => $input,
    fast_mode => 0,
    version_ignore => 1,
);

#override configuration options passed to constructor
$ENV{MEDIAWIKI_DUMPFILE_VERSION_IGNORE} = 0;
$ENV{MEDIAWIKI_DUMPFILE_FAST_MODE} = 1;

$pages = MediaWiki::DumpFile::Pages->new(%opts);

$version = $pages->version;

#version 0.3 and later dump files only
$sitename = $pages->sitename;
$base = $pages->base;
$generator = $pages->generator;
$case = $pages->case;
%namespaces = $pages->namespaces;

#all versions
while(defined($page = $pages->next)) {
    print 'Title: ', $page->title, "\n";
}

$title = $page->title;
$id = $page->id;
$revision = $page->revision;
@revisions = $page->revision;

$text = $revision->text;
$id = $revision->id;
$timestamp = $revision->timestamp;
$comment = $revision->comment;
$contributor = $revision->contributor;

#version 0.4 and later dump files only
$bool = $revision->redirect;

$username = $contributor->username;
$id = $contributor->id;
$ip = $contributor->ip;
$username_or_ip = $contributor->astext;
$username_or_ip = "$contributor";
This is the constructor for this package. If it is called with a single parameter it must be the input to use for parsing. The input is specified as either the location of a MediaWiki pages dump file or a reference to an already open file handle.
If more than one argument is passed to new it must be a hash of options. The keys are named as follows:

input - This is the input to parse as documented earlier.

fast_mode - Have the iterator run in fast mode by default; defaults to false. See the section on fast mode below.

version_ignore - Do not enforce parsing of only tested schemas in the XML document; defaults to true.
version - Returns the version of the dump file.

sitename - Returns the sitename from the MediaWiki instance. Requires a dump file of at least version 0.3.

base - Returns the URL used to access the MediaWiki instance. Requires a dump file of at least version 0.3.

generator - Returns the version of MediaWiki that generated the dump file. Requires a dump file of at least version 0.3.

case - Returns the case sensitivity configuration of the MediaWiki instance. Requires a dump file of at least version 0.3.

namespaces - Returns a hash where the key is the numerical namespace id and the value is the plain text namespace name. The main namespace has an id of 0 and an empty string value. Requires a dump file of at least version 0.3.
next - Accepts an optional boolean argument to control fast mode. If the argument is specified it forces fast mode on or off; otherwise the mode is controlled by the fast_mode configuration option. See the section below on fast mode for more information.
It is safe to intermix calls between fast and normal mode in one parsing session.
In all modes undef is returned if there is no more data to parse.
In normal mode an instance of MediaWiki::DumpFile::Pages::Page is returned and the full API is available.
In fast mode an instance of MediaWiki::DumpFile::Pages::FastPage is returned; the only methods supported are title, text, and revision. This class can act as a stand-in for MediaWiki::DumpFile::Pages::Page except it will throw an error if any attempt is made to access any other part of the API.
Returns the size of the input file in bytes, or undef if the input specified is a reference to a file handle.
Returns the number of bytes of XML that have been successfully parsed.
Fast mode is a way to get increased parsing performance while dropping some of the features available in the parser. If you only require the titles and text from a page then fast mode will decrease the amount of time required just to parse the XML file, sometimes drastically.
When fast mode is used on a dump file that has more than one revision of a single article in it only the text of the first article in the dump file will be returned; the other revisions of the article will be silently skipped over.
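A short sketch of a fast-mode parsing session (the dump file name is an assumption):

```perl
use MediaWiki::DumpFile::Pages;

my $pages = MediaWiki::DumpFile::Pages->new(
    input     => 'dump.xml',
    fast_mode => 1,
);

while (defined(my $page = $pages->next)) {
    # Only title, text, and revision are available in fast mode
    printf "%s: %d bytes of wikitext\n", $page->title, length $page->text;
}
```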
This object represents a distinct MediaWiki page and is used to access the page data and metadata. The following methods are available:
title - Returns a string of the page title.

id - Returns the numerical page identification.

revision - In scalar context returns the last revision in the dump for this page; in array context returns a list of all revisions made available for the page, in the same order as in the dump file. All returned data is an instance of MediaWiki::DumpFile::Pages::Revision.
This object represents a distinct revision of a page from the MediaWiki dump file. The standard dump files contain only the most recent revision of each page and the comprehensive dump files contain all revisions for each page. The following methods are available:
text - Returns the page text for this specific revision of the page.

id - Returns the numerical revision id for this specific revision - this is independent of the page id.

timestamp - Returns a string value representing the time the revision was created. The string is in the format of "2008-07-09T18:41:10Z".

comment - Returns the comment made about the revision when it was created.

contributor - Returns an instance of MediaWiki::DumpFile::Pages::Page::Revision::Contributor.

minor - Returns true if the edit was marked as being minor or false otherwise.

redirect - Returns true if the page is a redirect to another page or false otherwise. Requires a dump file of at least version 0.4.
This object provides access to the contributor of a specific revision of a page. When used in a scalar context it will return the username of the editor if the editor was logged in or the IP address of the editor if the edit was anonymous.
username - Returns the username of the editor if the editor was logged in when the edit was made, or undef otherwise.

id - Returns the numerical id of the editor if the editor was logged in, or undef otherwise.

ip - Returns the IP address of the editor if the edit was anonymous, or undef otherwise.

astext - Returns the username of the editor if they were logged in, or the IP address if the edit was anonymous.
While trying to build the XML::TreePuller object a fatal error occurred; the error message from the parser was included in the generated error output you saw. At the time of writing this document the error messages are not very helpful, but for some reason the XML parser rejected the document; here's a list of things to check:
Something went wrong with the XML parser - the error from the parser was included in the generated error message. This happens when there is a severe error parsing the document such as a syntax error.
The dump files created by MediaWiki include a versioned XML schema. This software is tested with the most recent known schema versions and can be configured to enforce a specific tested schema. MediaWiki::DumpFile::Pages no longer enforces the versions by default, but the author of the software using this library can indicate that it should. When enforcement is enabled and an untested version is found, it dies with an error like the following:
E_UNTESTED_DUMP_VERSION Version 0.4 dump file "t/simpleenglish-wikipedia.xml" has not been tested with MediaWiki::DumpFile::Pages version 0.1.9; see the ERRORS section of the MediaWiki::DumpFile::Pages Perl module documentation for what to do at lib/MediaWiki/DumpFile/Pages.pm line 148.
If you encounter this condition you can do the following:
The error message should have the version number of this module in it. Check CPAN and see if there is a newer version with official support. The web page
will show the highest supported version dump files near the top of the SYNOPSIS.
It is possible the issue has been resolved already but the update has not made it onto CPAN yet. See this web page
and check for an open bug report relating to the version number changing.
If you just want to have the software run anyway and see what happens you can set the environment variable MEDIAWIKI_DUMPFILE_VERSION_IGNORE to a true value which will cause the module to silently ignore the case and continue parsing the document. You can set the environment and run your program at the same time with a command like this:
MEDIAWIKI_DUMPFILE_VERSION_IGNORE=1 ./wikiscript.pl
This may work fine or it may fail in subtle ways silently - there is no way to know for sure without studying the schema to see if the changes are backwards compatible.
You can use the same rt.cpan.org URL above to create a new ticket in MediaWiki-DumpFile or just send an email to "bug-mediawiki-dumpfile at rt.cpan.org". Be sure to use a title for the bug that others will be able to use to find this case as well, and include the full text from the error message. Please also specify whether you were adventurous, and if so whether it was successful for you.
http://search.cpan.org/dist/MediaWiki-DumpFile/lib/MediaWiki/DumpFile/Pages.pm
Applications, packages and modules¶
Simba has three software components; the application, the package and the module.
Application¶
An application is an executable consisting of zero or more packages.
An application file tree can either be created manually or by using the tool simba.
myapp
├── main.c
└── Makefile
Package¶
A package is a container of modules.
A package file tree can either be created manually or by using the tool simba.
A package file tree must be organized as seen below. This is required by the build framework and Simba tools.
See the inline comments for details about the files and folders contents.
mypkg
├── mypkg
│   ├── doc                 # package documentation
│   ├── __init__.py
│   ├── src                 # package source code
│   │   ├── mypkg
│   │   │   ├── module1.c
│   │   │   └── module1.h
│   │   ├── mypkg.h         # package header file
│   │   └── mypkg.mk        # package makefile
│   └── tst                 # package test code
│       └── module1
│           ├── main.c
│           └── Makefile
└── setup.py
Development workflow¶
The package development workflow is fairly straightforward. Suppose we want to add a new module to the file tree above. Create src/mypkg/module2.h and src/mypkg/module2.c, then include mypkg/module2.h in src/mypkg.h and add mypkg/module2.c to the list of source files in src/mypkg.mk. Create a test suite for the module. It consists of the two files tst/module2/main.c and tst/module2/Makefile.
It’s often convenient to use an existing module’s files as a skeleton for the new module.
After adding the module module2 the file tree looks like this.
mypkg
├── mypkg
│   ├── doc
│   ├── __init__.py
│   ├── src
│   │   ├── mypkg
│   │   │   ├── module1.c
│   │   │   ├── module1.h
│   │   │   ├── module2.c
│   │   │   └── module2.h
│   │   ├── mypkg.h
│   │   └── mypkg.mk
│   └── tst
│       ├── module1
│       │   ├── main.c
│       │   └── Makefile
│       └── module2
│           ├── main.c
│           └── Makefile
└── setup.py
Now, build and run the test suite to make sure the empty module implementation compiles and can be executed.
$ cd tst/module2
$ make -s run
Often the module development is started by implementing the module header file and at the same time writing test cases. Test cases are not only useful to make sure the implementation works, but also to see how the module is intended to be used. The module interface becomes cleaner and easier to use if you actually start to use it yourself by writing test cases! All users of your module will benefit from this!
So, now we have an interface and a test suite. It’s time to start the implementation of the module. Usually you write some code, then run the test suite, then fix the code, then run the tests again, then you realize the interface is bad, change it, change the implementation, change the test, change, change... and so it goes on until you are satisfied with the module.
Try to update the comments and documentation during the development process so you don’t have to do it all in the end. It’s actually quite useful for yourself to have comments. You know, you forget how to use your module too!
The documentation generation framework uses doxygen, breathe and sphinx. That means, all comments in the source code should be written for doxygen. Breathe takes the doxygen output as input and creates input for sphinx. Sphinx then generates the html documentation.
Just run make in the doc folder to generate the html documentation.
$ cd doc
$ make
$ firefox _build/html/index.html   # open the docs in firefox
Namespaces¶
All exported symbols in a package must have the prefix <package>_<module>_. This is needed to avoid namespace clashes between modules with the same name in different packages.
For the same namespace reason, there cannot be two packages with the same name; all packages must have unique names! There is one exception though: the three Simba packages kernel, drivers and slib do not have the package name as a prefix on exported symbols.
int mypackage_module1_foo(void);
int mypackage_module2_bar(void);
https://simba-os.readthedocs.io/en/11.0.0/user-guide/applications-packages-and-modules.html
|
Originally posted by: girishs wrote Fri Dec 06, 2013 1:29 pm
Can we use aerospike as a content store to store multiple documents ? If so then can you please point me to a sample Java implementation ?
This depends on how large the documents are. Aerospike stores records in blocks. By default the size of this is 128KB. You can make this as large as 1MB. However, if you are looking to store objects larger than that, this will be an issue.
If you wish to change the block size, you can set it in the namespace with the “write-block-size”. Set this to 128KB or any multiple thereof up to 1 MB.
originally posted by girishs » Mon Dec 09, 2013 8:15 am
Thanks for the reply. Instead of storing in a single block (128 KB - 1 MB), is there a way to chunk a large document (100+ GB) while storing, and then reassemble it when doing a retrieve operation?
Currently there is no built in mechanism for doing this. We do have customers that have written their own client-side functions to do this. However, for objects as large as 100GB+, we would probably not recommend using Aerospike at this time.
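The client-side approach could be sketched like this: split the document into write-block-sized pieces and store each piece as its own record under a derived key (document id plus chunk index). The key scheme and chunk size are assumptions, and the actual Aerospike record writes are left out so the sketch runs stand-alone.

```java
import java.util.ArrayList;
import java.util.List;

public class Chunker {
    // Match the namespace's write-block-size (assumption: default 128 KB)
    static final int CHUNK_SIZE = 128 * 1024;

    // Derive one record key per chunk; a real client would then write
    // each slice of the document bytes to its own record under these keys.
    static List<String> chunkKeys(String docId, long docLength) {
        List<String> keys = new ArrayList<>();
        long chunks = (docLength + CHUNK_SIZE - 1) / CHUNK_SIZE;
        for (long i = 0; i < chunks; i++) {
            keys.add(docId + ":" + i);
        }
        return keys;
    }

    public static void main(String[] args) {
        // A 1 MB document splits into 8 chunks of 128 KB
        System.out.println(chunkKeys("doc42", 1024 * 1024).size());
    }
}
```

Retrieval reverses the scheme: regenerate the keys from the document id and total length (stored in a small header record) and concatenate the chunk records in index order.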
In the future, we will have a feature called Large Data Types (LDTs), which will allow you to store very large objects in parts, giving you random access to any piece of the data. This can allow you to load time series data or any data that is based on some index (such as time, etc). However, this is set for a 2014 release.
Could you share the nature of the large objects? This might help us guide you to a better answer.
https://discuss.aerospike.com/t/insert-a-large-document-using-java-client/101
|
I have Anaconda with Python 3.7 and it’s up to date. I installed Geant4 as follows:
conda install -c conda-forge geant4
It worked as expected, but when I started writing a python program and wrote
from Geant4 import *
It returned with "module Geant4 not found". I looked in the Anaconda file structure and found a Geant4 10.6 directory with all the expected folders. In the Anaconda file structure under the share directory I found a full set of shared libraries (.so), C files and data files, and environment variables pointing to the data files. What I did not find was a Geant4 directory under python3.7/site-packages with .py files (Geant4.py) allowing Geant4 calls from Python.
Has anyone encountered this problem when installing Geant4 via conda install on Linux Ubuntu/Mint 19 x64? How do you get the Python site-packages modules with a conda install?
https://geant4-forum.web.cern.ch/t/conda-geant4-install/6061
|
I am writing a Java alert program that requires each client PC to listen for a notification from the server rather than having each PC poll the server every few seconds. I am trying to make the program as quiet as possible, so that the server only sends out a broadcast to all network PCs when an approved user is ready to send the alert; the server will not be constantly broadcasting over the network. The issue I am having is getting the code to compile. Since there are multiple receivers (client PCs in this case) I can't input a single IP address, because there are well over 100 PCs that need to run the instructions when the notification goes out.
Any suggestions with what needs to change would be appreciated.
package alert.bat;

import java.io.*;
import java.net.*;

public class Alert {
    public static void main(String args[]) {
        try {
            DatagramSocket datsock = new DatagramSocket(444);
            DataOutputStream output = new DataOutputStream(datsock.getOutputStream());
            output.writeUTF("Broadcast Alert!!!");
            output.flush();
            output.close();
            datsock.close();
        } catch(Exception e) {
            System.out.println(e);
        }
    }
}
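The compile error comes from mixing APIs: DatagramSocket has no getOutputStream() (streams belong to TCP's Socket); UDP payloads travel in DatagramPacket objects. A hedged, self-contained sketch is below - it uses loopback and port 4444 so it can run anywhere without root or network access, with an in-process receiver standing in for one listening client PC. A real deployment would send to the subnet broadcast address (e.g. 255.255.255.255) with setBroadcast(true) on the original port, and the package declaration is omitted so the file runs stand-alone.

```java
import java.net.DatagramPacket;
import java.net.DatagramSocket;
import java.net.InetAddress;

public class Alert {
    public static void main(String[] args) throws Exception {
        // Receiver stands in for one of the listening client PCs.
        try (DatagramSocket receiver = new DatagramSocket(4444)) {
            Thread listen = new Thread(() -> {
                try {
                    byte[] buf = new byte[512];
                    DatagramPacket in = new DatagramPacket(buf, buf.length);
                    receiver.receive(in);  // blocks until a packet arrives
                    System.out.println(
                        new String(in.getData(), 0, in.getLength(), "UTF-8"));
                } catch (Exception e) {
                    System.out.println(e);
                }
            });
            listen.start();

            // Sender: the payload goes in a DatagramPacket, not a stream.
            try (DatagramSocket sender = new DatagramSocket()) {
                byte[] data = "Broadcast Alert!!!".getBytes("UTF-8");
                DatagramPacket out = new DatagramPacket(
                        data, data.length,
                        InetAddress.getByName("127.0.0.1"), 4444);
                sender.send(out);
            }
            listen.join(5000);
        }
    }
}
```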
https://www.javaprogrammingforums.com/java-networking/42602-reverse-roles-client-server-communication.html
|