After waaaaaaay too long, I finally got around to porting the Arduino AS5040 code to Python. This is very preliminary, but it does work.
As self-hosted versions of WordPress do a lousy job of formatting code, and Python is whitespace-sensitive, I'll put the code on GitHub, where I can guarantee that it looks the way it should:
———————————————
#!/usr/bin/python
import Adafruit_BBIO.GPIO as GPIO
import time

def read_raw_val(data, chip_select, clock):
    GPIO.setup(data, GPIO.IN)
    GPIO.setup(chip_select, GPIO.OUT)
    GPIO.setup(clock, GPIO.OUT)
    a = 0
    output = 0
    readbit = 0
    GPIO.output(chip_select, GPIO.HIGH)
    time.sleep(0.01)
    GPIO.output(clock, GPIO.HIGH)
    time.sleep(0.01)
    GPIO.output(chip_select, GPIO.LOW)
    time.sleep(0.01)
    GPIO.output(clock, GPIO.LOW)
    time.sleep(0.01)
    while a < 16:
        GPIO.output(clock, GPIO.HIGH)
        time.sleep(0.01)
        readbit = GPIO.input(data)
        output = (output << 1) + readbit
        GPIO.output(clock, GPIO.LOW)
        time.sleep(0.01)
        a += 1
    return output

while 1:
    rawval = read_raw_val("P9_15", "P9_11", "P9_12")
    print "read: " + str(rawval)
    print "raw rotation: " + str(rawval >> 6)
    time.sleep(1)
——————————————-
I still need to strip and decode errors, extend it another two bits for the AS5045, and make the whole thing into an importable library.
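For the error-decoding step: as I read the AS5040 datasheet, the 16-bit SSI frame is a 10-bit angle followed by six status bits (OCF, COF, LIN, MagINC, MagDEC and an even-parity bit). Here is a sketch of a decoder; the bit layout is my assumption, so verify it against the datasheet before trusting it:

```python
def decode_as5040(frame):
    # Assumed frame layout (16 bits, MSB first):
    # D9..D0 angle, then OCF, COF, LIN, MagINC, MagDEC, even parity
    angle = frame >> 6
    status = {
        'OCF':    bool(frame & 0x20),  # offset compensation finished
        'COF':    bool(frame & 0x10),  # CORDIC overflow (error flag)
        'LIN':    bool(frame & 0x08),  # linearity alarm
        'MagINC': bool(frame & 0x04),  # magnet moving toward the chip
        'MagDEC': bool(frame & 0x02),  # magnet moving away from the chip
    }
    # even parity: the total number of set bits in the frame must be even
    parity_ok = bin(frame).count('1') % 2 == 0
    return angle, status, parity_ok
```

A frame fails the parity check if any bit was corrupted on the wire, which is the first error worth stripping.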
Note that on the Beaglebone, the Adafruit GPIO library doesn’t yet fully support setting pull-up or pull-down resistors on pins. As such, you may have trouble with some pins not working. I know these three pins work, because I’ve tested them. The code should work for Raspberry Pi without any problem, and at some point I’ll port this to Bonescript, where I do have control over setting the internal pull-up and pull-down resistors, so any set of I/O pins can be used.
Following on with step 1c from the article: Making a high resolution ADC from an Arduino Mini Pro
Since the PWM output is only capable of 61069 steps, it is not truly 16 bits (16-bit resolution would allow 65536 discrete steps), so I will call it a 15+ bit DAC.
Now that we have the DAC working, it is time to verify how well it works. Here is a look at some of the data from the first unit I tested, but first a quick description of how the data was taken: I set up the PWM to take a step approximately every 1.2 seconds, starting from zero; when it reached 61069 it would start back down again. The PWM output was fed through the 4th-order RC filter. To take the DC value I used an 18-bit measurement system (over +/-10V) that was programmed to average about 16,000 measurements from the filter output every 950 ms. We ended up with a total of 72064 data points over the 61069 output steps. Since I did not fully synchronize the measurements with each step taken, the actual INL and DNL are approximate. The measured values I came up with for this Arduino Mini Pro board:
LSB = ~ 0.000076V (76uV)
INL= ~ 0.00225V (2.25mV)
DNL= ~ 0.0015V (1.5mV)
Min Measurement = 0.00313686V
Maximum Measurement = 4.64705V
Gain error = 0.928782628 ( Hmm.. this seems to explain the step response we were seeing on the scope capture in the filter article, when I was expecting a 409mV step and was seeing about 374mV on the scope)
Offset error = 0.00313686V
The DNL is really bad news. Why? Because it means we are effectively losing a little more than 4 bits of resolution. It works like this: from step to step there can be an error of 1.5mV, so there is no point in looking for anything smaller than 1.5mV; the full-scale range of 4.647V divided by 1.5mV gives us about 3100 discrete values, which is a little more than 11 bits of resolution. I need to see whether this is an inherent problem with the PWM generator design or just an issue with this one device, so I will run this test again on another one or two devices.
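The arithmetic in that paragraph is easy to check:

```python
import math

full_scale = 4.647   # measured full-scale span, volts
dnl = 0.0015         # worst-case step-to-step error, volts

# anything smaller than the DNL is indistinguishable from step noise,
# so only full_scale / dnl levels are actually usable
usable_steps = full_scale / dnl
effective_bits = math.log2(usable_steps)
print(round(usable_steps), round(effective_bits, 1))  # ~3098 steps, ~11.6 bits
```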
Below are graphs of the actual data for the first device, and also a copy of the raw data if you are interested in looking it over.
Here is a copy of the raw data; you will need Office 2010 to open this since it has more than 65,536 rows of data
DAC_Raw_Data_B1
This is a graph of the measured data:
Arduino DAC Graphed output
So our 15+ bit DAC seems to work pretty well.
To calculate INL we took the endpoints of the data, subtracted a straight line drawn between them from the actual data, and ended up with a worst-case deviation of about 0.00225V, roughly 0.05% of the full-scale output.
Here is what the INL graph looks like:
Arduino DAC INL
To calculate DNL we took every data point, subtracted the previous measurement, and graphed the result. This showed about a 0.0015V error from measurement to measurement. Strictly speaking this is a pseudo-DNL; it would work much better with each step synchronized to a measurement.
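For anyone wanting to reproduce the analysis, both calculations reduce to a few lines. The numbers below are made-up sample points, not the real capture:

```python
# Pseudo-INL/DNL on a measured ramp (toy data; the real capture has ~72k points)
measured = [0.0031, 0.0792, 0.1549, 0.2313, 0.3072]

# INL: worst deviation from the straight line through the endpoints
n = len(measured) - 1
ideal = [measured[0] + (measured[-1] - measured[0]) * i / n for i in range(n + 1)]
inl = max(abs(m - i) for m, i in zip(measured, ideal))

# pseudo-DNL: worst difference between an actual step and the average step
avg_step = (measured[-1] - measured[0]) / n
dnl = max(abs((b - a) - avg_step) for a, b in zip(measured, measured[1:]))
```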
Here is a look at the DNL graph:
Arduino DAC DNL
So if you are wondering why the DNL is getting wider, let's examine the output a little closer.
Here is the output for several steps around 0.5V, looks pretty decent….
Here is the output for several steps around 2.5V; we are starting to see a real monotonicity issue here. My guess is that this part has a divider issue occurring on every fourth step (perhaps the 3rd bit in the clock divider does not do what is expected).
Here is the output for several steps around 4.5V; it is getting close to the maximum output, and this is where the worst-case DNL was happening.
Following on with step 1b from the article: Making a high resolution ADC from an Arduino Mini Pro
Starting with simulating the filter, I found that a fourth-order RC filter would suit my needs for getting a nice DC level out of the PWM signal. I am expecting less than 1mV peak-to-peak ripple with this filter. The filter is a little slow for large steps, but for what I am doing it should be fine. The filter cutoff is fc = 15.9154943092 Hz.
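That cutoff corresponds to RC = 10 ms per stage. The article doesn't list component values, so 10 kΩ and 1 µF are my guesses that happen to hit the quoted number:

```python
import math

R, C = 10e3, 1e-6  # assumed per-stage values: 10 kohm and 1 uF give RC = 10 ms
fc = 1 / (2 * math.pi * R * C)  # single-pole RC cutoff frequency
print(round(fc, 4))  # 15.9155 Hz, the figure quoted above
```

Note that four unbuffered RC sections load each other, so the real -3 dB point of the cascade sits somewhat below the single-pole figure; the LTSpice plot is the number to trust.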
There is a nice filter calculator page here:
Here is a look at the filter in LTSpice:
4th order RC filter for PWM signal
This is what each section of the filter looks like in the simulation:
4th order RC filter for PWM signal response
If you want to play with this in LTSpice, you can download this file here: PWM_131Hz_filter
Now it is time to go cut it on the LPKF
RC Filter in Eagle
After the LPKF and some solder
LPKF PWM Filter board
Actual DC output from the filter. I programmed the PWM to 10000, waited 2 seconds, set the PWM to 15000, waited 2 seconds again, and repeated. It settles in less than half a second with this filter, very close to the simulation values (see above). I was expecting around a 409mV change in voltage but was only getting 374mV. Something to look into…
PWM Filter DC output switching between a setting of 10,000 and 15,000 every 2 seconds
Following on with step 1a from the article: Making a high resolution ADC from an Arduino Mini Pro
In order to make a high resolution DC source we need a 16-bit PWM output from the Arduino. The standard libraries give you access to 8 bits on the PWM pins, which is only 256 discrete levels; 16 bits would yield 65536 steps. Timer1 on the Mini Pro and UNO is capable of going into 16-bit mode. With just a little Googling I found a library that gives access to the 16-bit timer modes, as well as letting you reprogram the PWM output frequency. The library was created by the user ‘runnerup’, AKA Sam Knight (as the copyright header in the lib indicates), in the Arduino forums: Here
The library is located here: PWM Frequency Library. It is also supposed to work on both the Arduino Mega and UNO.
Timer1 is the only 16-bit timer on the ATmega328 (UNO and Mini Pro), and it only comes out on two pins, 9 and 10. The Mega has many more pins capable of 16-bit PWM output. I got the timer/bit/pin table below from: Advanced Arduino: direct use of ATmega counter/timers
Being able to reprogram the frequency is very important since it affects the actual resolution that can be achieved. See the table at the bottom for output from the demo program in the PWM Frequency library showing how frequency trades off against resolution. Since 131 Hz gives almost 16 bits of resolution at 61069 steps, this is the frequency I am going to use as I experiment. It will give approximately 81uV filtered DC step sizes on average using a 5V Mini Pro. We will try to verify this later; the best meter I have here is good for 100uV resolution in the 11V range, so I will try to borrow one with higher resolution in the next couple of weeks.
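The 61069 figure is consistent with Timer1 running in phase-correct PWM from the 16 MHz clock with no prescaling, where one period is 2×TOP timer ticks. That mode is my inference from the numbers, not something I have confirmed in the library source:

```python
F_CPU = 16_000_000  # ATmega328 clock, Hz

def pwm_steps(freq_hz):
    # phase-correct PWM counts up to TOP and back down,
    # so one output period takes 2*TOP timer ticks
    return round(F_CPU / (2 * freq_hz))

steps = pwm_steps(131)
print(steps)              # 61069
print(5.0 / steps * 1e6)  # average step size in uV on a 5V board, ~81.9
```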
So let's try it out…
I wrote a quick program to try a few things: first, did the frequency set? Second, does high resolution PWM work on channels 9 and 10 independently (there was a note in the library that you would lose the second channel)? Third, is it really high resolution?
We set PWM on channels 9 and 10 separately at a frequency of 131Hz (the reciprocal of 131Hz is 0.0076335… seconds), so the period should be about 7.63mS. See the screen shot: yes to both of those!
Arduino Mini Pro running 131Hz 16Bit mode PWM
Now, does the 16-bit resolution work? The reciprocal of 131Hz divided by 61069 steps is ~125nS. This picture shows the output with the PWM set to 1. Yes, it works!
Arduino Mini Pro running 131Hz 16Bit mode PWM 125ns
Here it is with the PWM set to 2, and yes, it is now ~250nS.
Arduino Mini Pro running 131Hz 16Bit mode PWM 250ns
Demonstrate Frequency Effect On Resolution Table (I only ran it to 3931 Hz, but you get the idea…)
I want to make a high voltage Geiger/photomultiplier tube (PMT) supply regulated by an Arduino UNO or Mini Pro. The Arduino needs to control the voltage by reading whether it is above or below target and adjusting as necessary; as a side job it will set the user-programmed voltage and display the target output voltage on an LCD. Since the high voltage supply may go over 1.5KV and I want control resolution finer than 250mV, at a minimum I need to be able to read back at least 6000 discrete steps. Is it possible? Yes, I think so: the ATmega328 is quite a capable little microcontroller, and with just a handful of low cost external components (some resistors, capacitors, and a comparator) it should be able to source a high resolution voltage as well as measure one. Let's find out.
I will do this in several steps over many days or weeks, here is my plan of attack:
1) Create a high resolution Digital to Analog Converter (DAC) source.
a) We need a 16 bit PWM output from the Arduino. (Completed 4/4/15 ) Click here: Link to this article
b) We need to filter the PWM signal down to a usable stable DC value. (Completed 4/5/15 )Click here: Link to this article
i) Simulate
ii) Prototype
c) Let's verify the Integral Non-Linearity (INL), Differential Non-Linearity (DNL), gain and offset of the DAC source just to see how good it actually is. (In progress 4/11/15) Click here: Link to this article
2) Create a high resolution Analog to Digital Converter (ADC) measure system.
a) The compare circuit.
b) Calibration.
c) Let's verify the INL, DNL, gain and offset of the ADC just to see how good it actually is.
3) Finally the High Voltage supply.
a) The HV generation.
b) Control and measurement.
c) How well does it work?
So this came home with us.
giant phono cabinet
It's a 1970s phonograph, AM/FM radio, and eight-track tape player. The tape player can even record eight-track tapes.
Well, in theory it can. In practice, the radio worked, the phono sounded terrible, and the tape player didn't work at all.
A new needle fixed the phono. Awesome: three out of four functions. But the eight-track was in awful shape. It doesn't switch tracks, it doesn't play, and the capstan is entirely missing the rubber friction drive that moves the tape.
So I did what any good geek would do: I opened it up and spliced in a Beaglebone, so it plays streaming internet radio.
First hurdle: it's cold, and my basement, where the only ethernet cable lives, is really cold. I'm also lazy. So I ran a cat5 cable from the Beaglebone to my laptop and set up port forwarding on the laptop; then I could ssh into the Beaglebone via the USB cable and do software upgrades via apt-get through the cat5 and across the wireless network.
On bbb:
ifconfig eth0 192.168.7.2
route add default gw 192.168.7.1
On laptop, start with an ifconfig and look at the output. There should be two eth[x] entries, one of which will include the 192.168.7.2 entry. That’s actually the usb. So you want to configure the other one, which is the hardware associated with the port where the cat5 cable lives. For this example I’m presuming it’s eth1, and your wireless connection is wlan0.
sudo ifconfig eth1 192.168.7.1
sudo iptables --table nat --append POSTROUTING --out-interface eth1 -j MASQUERADE
sudo iptables --append FORWARD --in-interface wlan0 -j ACCEPT
sudo echo 1 > /proc/sys/net/ipv4/ip_forward
If you’re using xubuntu (or apparently a few other distros) that last line won’t work because for whatever reason ‘>’ doesn’t inherit sudo permissions. So you have to punt:
echo 1 | sudo tee /proc/sys/net/ipv4/ip_forward
Now you can start work on the bbb.
apt-get update
apt-get upgrade
apt-get install alsa-base alsa-utils
Mine already had all the alsa stuff installed. A bunch of programs didn’t successfully upgrade, like apache and dbus, which I need to look into later, but I don’t actually use any of those for this project so I don’t care. [note 1]
Second hurdle: setting up audio.
I’m using a USB soundcard. This requires disabling HDMI, which I’m not using at all, so the soundcard can manage the sound.
apt-get install mpg321
wget
(you might want to change your working directory to /home/debian before doing that, rather than sticking an mp3 in /etc/modprobe.d)
and then you can mpg321 1456.mp3 and hear some noise.
mpg321 will also accept URLs, so:
mpg321
[note 1]
For some reason, on current Beaglebones led_aging.sh is a screwed-up file, and it prevents apt-get upgrade from completing correctly on half a dozen packages.
From here:
do this:
Replace the existing /etc/init.d/led_aging.sh script with:
#!
Create a service file at /lib/systemd/system/blinker.service. It should look like this:
[Unit]
Description=blinker
ConditionPathExists=|/home/debian/programming/python
[Service]
ExecStart=/home/debian/programming/python/blinker.py
SyslogIdentifier=blinker
Restart=always
[Install]
WantedBy=multi-user.target
Then go to /etc/systemd/system/multi-user.target.wants and make a symlink to that file.
ln -s /lib/systemd/system/blinker.service /etc/systemd/system/multi-user.target.wants/blinker.service
The file itself, if it’s python, must begin with #!/usr/bin/python (or wherever you have python installed) and be executable.
Then, systemctl --system daemon-reload
and systemctl start blinker.service
and your program should start immediately and also start every time you boot the beaglebone.
If your program is dependent on other services before it starts, you can add those under the [Unit] group, like:
ConditionPathExists=|/home/debian/programming/python
After=network.target
That way it will wait until the network is up before running — which, in the case of what I’m building, is nice, because it’s streaming internet radio.
debugging audio
detail of eight track audio
kicad_board
phono circuit
phono control board replacement
phono control panel and button
phono all wired up
phono closed up
Copyright © 2015 Mad Scientist Hut Blog - All Rights Reserved
http://madscientisthut.com/wordpress/
All I want to do is make a simple page that allows users to update a database by filling in a form!
You have a form, you have an ASP.NET Core MVC application, but how on earth are you supposed to get the data from the form to your ASP.NET Controller actions?
Turns out you have two options (well, actually there are more than that, but let’s focus on these two for starters).
#1 The manual way (using plain old HTML forms).
Let’s start with the bit our users will actually see; the form itself.
<form method="POST">
  <label for="firstName">Your first name</label>
  <input type="text" id="firstName" placeholder="Your name goes here" />
  <input type="submit" />
</form>
On first glance, our HTML form looks reasonable. We’ve got a single text box (firstName) and the form is going to send its data via an HTTP Post.
Unless we indicate otherwise, this will post back to the same location used to serve the form.
So, using default routing conventions, we could use a controller like this…
public class FormController : Controller
{
    [HttpGet]
    public IActionResult Index()
    {
        return View();
    }

    [HttpPost]
    public IActionResult Index(string firstName)
    {
        return Content($"Hello {firstName}");
    }
}
We'll see our form if we browse to http://<YourSiteHere>/form.
When we submit the form, it will be posted back to the same address (http://<YourSiteHere>/form) as an HTTP POST request.
But, we have a fatal flaw in our form! Run this, enter a name, submit the form and all you’ll get is a half-complete sentence.
For the value of the input to be submitted as form data, it needs to have a name attribute.
<input type="text" id="firstName" placeholder="Your name goes here" name="firstName" />
With the name attribute in place, ASP.NET MVC spots the incoming “firstName” value (in the submitted form data) and binds it to the firstName parameter we specified in the Index (POST) method on our controller.
Diagnose form data problems
When you come across issues like this, it isn’t always obvious what’s causing the problem.
This is where the Chrome network tab comes into its own.
Submit your form with the developer tools open (F12) and you’ll see the form post in the network tab.
Click the relevant request and check out the Form Data section.
In this example you can see that firstName has been submitted as form data and so should be available to ASP.NET Core.
If you don’t see the values you expect then the chances are your form isn’t working properly and that’s where to focus your efforts on fixing the problem.
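You can also check from outside the browser what a well-formed body looks like. This little sketch (the field name mirrors the example form, and it only builds the payload rather than actually posting it) produces the same application/x-www-form-urlencoded string the Form Data section displays:

```python
from urllib.parse import urlencode

# what the browser submits when the input carries name="firstName"
body = urlencode({"firstName": "Jon"})
print(body)  # firstName=Jon

# with no name attribute there is nothing to encode: an empty body,
# which is why the greeting came back half-complete
print(urlencode({}))
```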
#2 Using Tag Helpers
Now we know how to manually set up our forms to post the right data, correctly tagged so our MVC controllers know how to handle it.
But this seems like a bit of a faff (and is asking for trouble every time we rename anything either in the HTML or the C#).
With ASP.NET Core you have an alternative.
You can use Tag Helpers to save you manually adding name attributes to all of your input fields.
First up, create a class to act as the model for our page.
public class FormModel
{
    public string FirstName { get; set; }
}
Now we can tweak our view (cshtml page) to use this model by adding a model declaration at the top.
@model FormBasics.Controllers.FormModel
When you try this for yourself, make sure you specify the correct namespace, where your model exists.
With that in place, we can start using tag helpers to do some of the plumbing (e.g. adding name attributes) we’d otherwise have to do ourselves.
<form asp-action="Index" asp-controller="Form">
  <label asp-for="FirstName"></label>
  <input asp-for="FirstName" placeholder="Your name goes here" />
  <input type="submit" />
</form>
We’ve made a few changes here.
First we've explicitly defined the action and controller that this form will post back to, using asp-action and asp-controller. This isn't strictly necessary, but it doesn't hurt to be explicit; if we start moving things around, the form will continue to post to the correct controller action.
Secondly, because this page now knows about its model, we can use the asp-for tag helper on our input and label elements, specifying the relevant property name from our model.
Here’s the resulting form.
And here’s how this will be rendered (it’s often worth taking a look at the source code in your browser to see what HTML we end up with in cases like this).
<form action="/Form" method="post">
  <label for="FirstName">First Name</label>
  <input placeholder="Your name goes here" type="text" id="FirstName" name="FirstName" value="">
  <input type="submit">
  <input name="__RequestVerificationToken" type="hidden" value="<token_generated_here>">
</form>
Note how ASP.NET has picked up that the rendered form should make a POST to /form and has automatically included the name attribute for the input field.
If you're wondering what the __RequestVerificationToken is, it's a neat way to reduce the risk of your application being duped by a cross-site request forgery attack.
I’ve got a model and I’m not afraid to use it
Now we've got a model, we can get rid of that string parameter in the Index POST action and replace it with a FormModel parameter.
[HttpPost, ValidateAntiForgeryToken]
public IActionResult Index(FormModel model)
{
    return Content($"Hello {model.FirstName}");
}
ASP.NET MVC Core will bind the form data to your model automatically.
This also means we won't find ourselves constantly updating the Index action every time we need to handle another value submitted via the form.
Incidentally, I've also added the other half of the Request Verification Token check here (the ValidateAntiForgeryToken attribute) to make sure this form has been posted from our site (and not a malicious site hosted by someone else).
A better label
Finally, by default the asp-for tag helper on our label has used the name of the property for its value.
We can improve this and be explicit about what the label should say, with an attribute on our model.
public class FormModel
{
    [DisplayName("First Name")]
    public string FirstName { get; set; }
}
Admittedly, this form isn’t going to win any prizes for its design, but at least the label reads a little better!
What about HTML Helpers?
If you’ve used previous versions of MVC, you’re probably familiar with the HTML Helpers from previous versions. They look like this…
@Html.TextBox("firstname")
One huge benefit of Tag Helpers over HTML Helpers, is that they leave you free to define your markup using standard HTML tags. Rather than have some magic render the entire element for you, the tag helper extends your standard HTML element. This is evident in our example where we are free to add a placeholder attribute to the FirstName input without jumping through extra hoops to bend an HTML Helper to our will.
In Summary
Lean on ASP.NET Core’s Tag Helpers to get your forms up and running.
Wire up your inputs (text boxes etc.) to something on the model (using Tag Helpers) and the value entered by the user will be submitted as form data when the user submits the form.
ASP.NET Core’s model binding will then kick in and assign the posted values to an instance of the Model.
From here, you can do whatever you need to with the model (including saving it to a database etc.)
https://jonhilton.net/2017/08/17/how-to-get-data-from-an-html-form-to-your-asp.net-mvc-core-controller/
pay alternatives and similar packages
Based on the "Third Party APIs" category (for each package, the two numbers are its popularity and activity scores):
- stripity_stripe (9.5/7.1): An Elixir library for Stripe.
- slack (9.3/6.2): Slack real time messaging client in Elixir.
- tentacat (9.2/6.4): Simple Elixir wrapper for the GitHub API.
- google-cloud (9.2/9.9): This repository contains all the client libraries to interact with Google APIs.
- pigeon (9.1/4.1): HTTP2-compliant wrapper for sending iOS and Android push notifications.
- extwitter (9.0/3.9): Twitter client library for Elixir.
- gringotts (8.9/2.6): A complete payment library for Elixir and the Phoenix Framework, similar to ActiveMerchant from the Ruby world.
- ex_twilio (8.8/5.7): Twilio API client for Elixir.
- nadia (8.7/6.5): Telegram Bot API wrapper written in Elixir.
- mailgun (8.4/0.0): Elixir Mailgun client.
- statix (8.3/4.6): Expose app metrics in the StatsD protocol.
- ethereumex (8.3/4.9): Elixir JSON-RPC client for the Ethereum blockchain.
- facebook (7.8/0.3): Facebook Graph API wrapper written in Elixir.
- commerce_billing (7.7/0.0): A payment-processing library for Elixir that supports multiple gateways (e.g. Bogus & Stripe).
- MongoosePush (7.6/7.5): A simple Elixir REST service for sending push notifications via FCM and/or APNS.
- ex_statsd (7.4/0.0): A statsd client implementation for Elixir.
- Execjs (7.2/0.0): Run JavaScript code from Elixir.
- shopify (7.1/1.3): Easily access the Shopify API.
- spotify_ex (7.1/3.8): An Elixir wrapper for the Spotify Web API.
- kane (6.9/2.4): A Google Cloud Pub/Sub client.
- sendgrid (6.9/0.0): Send composable, transactional emails with SendGrid.
- apns (6.8/0.0): Apple Push Notification Service client library for Elixir.
- sparkpost (6.7/0.0): An Elixir library for sending email using SparkPost.
- m2x (6.7/0.0): Elixir client for AT&T M2X, a cloud-based fully managed time-series data storage service for network connected machine-to-machine (M2M) devices and the Internet of Things (IoT). (Erlang version.)
- diplomat (6.7/3.1): A Google Cloud Datastore client.
- elixtagram (6.6/0.0): Instagram API client for Elixir.
- forcex (6.6/0.4): Elixir library for the Force.com REST API.
- mailchimp (6.5/5.2): A basic Elixir wrapper for version 3 of the MailChimp API.
- lob_elixir (6.5/2.4): Send postcards, letters and checks programmatically with Elixir.
- Stripe (6.5/0.0): Stripe API client for Elixir.
- qiniu (6.4/0.4): Qiniu SDK for Elixir.
- airbrakex (6.2/0.0): Elixir client for the Airbrake service.
- google_sheets (6.1/0.0): Elixir library for fetching and polling Google spreadsheet data in CSV format.
- airbax (5.9/0.0): Exception tracking from Elixir to Airbrake.
- dnsimple (5.8/6.6): Elixir client for the DNSimple API v2.
- instrumental (5.7/0.0): An Elixir client for Instrumental.
- amazon_product_advertising_client: Amazon Product Advertising API client for Elixir.
- mandrill (5.6/0.0): A Mandrill wrapper for Elixir.
- riemann (5.4/0.0): A Riemann client for Elixir.
- ex_gecko (5.4/7.8): Elixir SDK to communicate with Geckoboard's API.
- bitpay (5.3/0.0): Elixir core library for connecting to bitpay.com.
- keenex (5.3/0.0): A Keen.io API client.
- cashier (5.2/0.0): Payment gateway offering a common interface into multiple payment providers.
- ExTrello (5.1/0.0): An Elixir library for interfacing with the Trello API.
- Stripy (5.1/2.6): Micro wrapper for Stripe's REST API.
- pay_pal (5.0/0.0): Elixir library for working with the PayPal REST API.
- dogstatsd (5.0/0.0): An Elixir client for DogStatsd.
- airbrake (4.8/0.0): An Elixir notifier for Airbrake.
- pagexduty (4.7/0.0): A PagerDuty client for Elixir.
- ex_twiml (4.6/0.0): Generate TwiML for your Twilio integration, right inside Elixir.
README
Pay
Pay is an Elixir library for dealing with PayPal and other payment solutions. The library's main goal is to be easy to extend to other payment providers.
It also uses Maru to receive the callback from the payment provider, so you don't need to worry about that; just add the function you want to run every time a payment is confirmed (or denied). {TODO}
Usage
Creating a Payment (you must use the PaypalPayment struct):
Payment.create_payment(%Paypal.Payment{
  intent: "authorize",
  payer: %{
    "funding_instruments" => [
      %{"credit_card" => %{
          "billing_address" => %{
            "city" => "Saratoga",
            "country_code" => "US",
            "line1" => "111 First Street",
            "postal_code" => "95070",
            "state" => "CA"
          },
          "cvv2" => "874",
          "expire_month" => 11,
          "expire_year" => 2018,
          "first_name" => "Betsy",
          "last_name" => "Buyer",
          "number" => "4417119669820331",
          "type" => "visa"
        }}
    ],
    "payment_method" => "credit_card"
  },
  transactions: [
    %{"amount" => %{
        "currency" => "USD",
        "details" => %{"shipping" => "0.03", "subtotal" => "7.41", "tax" => "0.03"},
        "total" => "7.47"
      },
      "description" => "This is the payment transaction description."}
  ]
})
or if using Paypal as the payment method:
# create payment
payment = Payment.create_payment(%Paypal.Payment{
  intent: "sale",
  payer: %{"payment_method" => "paypal"},
  transactions: [
    %{"amount" => %{
        "currency" => "USD",
        "details" => %{"shipping" => "0.03", "subtotal" => "7.41", "tax" => "0.03"},
        "total" => "7.47"
      },
      "description" => "This is the payment transaction description."}
  ],
  redirect_urls: %{"return_url" => "", "cancel_url" => ""}
})

approval_url = Enum.find(payment["links"], fn (x) ->
  x["rel"] == "approval_url" and x["method"] == "REDIRECT"
end)

# redirect the user to approval_url["href"]
# after the user has approved the payment, we can execute it in the return url call.
Payment.execute_payment(%Paypal.Payment{
  id: "PAYMENT_ID_FROM_RETURN_CALL",
  payer: %{id: "PAYER_ID_FROM_RETURN_CALL"}
})
then add pay to your config/config.exs:
config :pay, type: :paypal
And also your key from paypal:
config :pay, :paypal,
  client_id: "EOJ2S-Z6OoN_le_KS1d75wsZ6y0SFdVsY9183IvxFyZp",
  secret: "EClusMEUk8e9ihI7ZdVLF5cZ6y0SFdVsY9183IvxFyZp",
  env: :prod
In your mix file:
def deps do
  [{:pay, github: "era/pay"}]
end

def application do
  [applications: [:pay]]
end
Phoenix + Pay
If you want an example of how to use it, take a look at era/extip. It's a very simple example of how to use pay with Phoenix Apps.
Contributing
- Fork it
- Create your feature branch (git checkout -b my-new-feature)
- Create a Pull Request
TODO
- Support all Paypal API.
- Add pagar.me support.
- Add pagseguro support.
License
MIT
*Note that all licence references and agreements mentioned in the pay README section above are relevant to that project's source code only.
https://elixir.libhunt.com/pay-alternatives
0
Hello, I am working on a program, and I noticed that all my threads start and stop in exactly the order I created them; they never trade places or interleave their output like concurrent threads should. Here is my code:
public class RaceHorseAppII {
    public static void main(String[] args) {
        new RaceHorse("Stan").run();
        new RaceHorse("Tom").run();
        new RaceHorse("Harry").run();
        new RaceHorse("Finn").run();
        new RaceHorse("Sawyer").run();
    }

    public static class RaceHorse implements Runnable {
        private String name = "";

        public RaceHorse(String name) {
            this.name = name;
        }

        public void run() {
            for (int i = 0; i < 50; i++) {
                System.out.println(name);
            }
        }
    } //end inner class
} //end class
Does anyone know why it is doing this?
https://www.daniweb.com/programming/software-development/threads/455656/threads-acting-like-they-are-executing-serially
last modified date in status bar?
is it possible to have last modified date in status bar?
it’s nice to have as a feature and useful sometimes,
can it be displayed somehow in npp (settings or plugin)?
with python script like discussed here
and code to get the last modified timestamp like
import os
from datetime import datetime

modified_time = os.path.getmtime(notepad.getCurrentFilename())
last_modified_date = datetime.fromtimestamp(modified_time)
Cheers
Claudia
ok but how does this refresh when I save the file (new date should be shown)?
yes, I assume the uiupdate callback is sufficient as it is triggered quite often, but in case it is needed, the notepad object also provides a filesaved callback.
So something like

def StatusbarSelOverride(args):
    modified_time = os.path.getmtime(notepad.getCurrentFilename())
    last_modified_date = datetime.fromtimestamp(modified_time)
    notepad.setStatusBar(STATUSBARSECTION.DOCSIZE, '{}'.format(last_modified_date))

editor.callback(StatusbarSelOverride, [SCINTILLANOTIFICATION.UPDATEUI])
notepad.callback(StatusbarSelOverride, [NOTIFICATION.FILESAVED])
should do the trick; of course you need to adapt it to your needs, it is just a quick tip on how to achieve your goal.
Cheers
Claudia
yes, it’s pretty good!!
some help with date formatting?
I would prefer dd/mm/yyyy hh:mm please :D
e.g. 13/02/2018 15:14
- Claudia Frank last edited by Claudia Frank
using
last_modified_date.strftime('%d/%m/%Y %H:%M')
should do the trick.
More infos, if other format is wanted can be seen here.
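To sanity-check the pattern outside Notepad++, plain CPython gives the same result (using a fixed, hypothetical timestamp just for illustration):

```python
from datetime import datetime

# hypothetical fixed date/time, matching the example in the thread
dt = datetime(2018, 2, 13, 15, 14)

# %d/%m/%Y %H:%M -> day/month/year hours:minutes
print(dt.strftime('%d/%m/%Y %H:%M'))  # -> 13/02/2018 15:14
```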
Cheers
Claudia
why do I have to link anywhere inside the file in order for it to work?
I mean when I double click on a file from windows explorer,
I have set initialization atstartup,
does it need something else/more maybe?
I think I found that, I added:
notepad.callback(StatusbarSelOverride, [NOTIFICATION.READY])
@patrickdrd said:
notepad.callback(StatusbarSelOverride, [NOTIFICATION.READY])
Interesting. I found that I don’t need the “READY”–I don’t have to click inside the document tab to see the mod time updated after loading a file. I don’t use Explorer integration such that a double-click opens the file in Notepad++, but it works for me either via Explorer right-click or drag-n-drop from Explorer.
- SalviaSage last edited by
patrickdrd, have you been able to add the modified date to the section 1 of the statusbar, where the language name is, without overriding the language name? if so, can you please post the full code? thanks.
@SalviaSage said:
section 1 of the statusbar
Just change .DOCSIZE to .DOCTYPE. No need to post more code.
That kind of change should be really obvious from the third posting I made (the one with the images of the status bar) in this thread.
Scott, the difference is that notepad++ is closed in my case, before opening the file, in that case only initial script doesn’t work
@patrickdrd said:
notepad++ is closed in my case, before opening the file
Yeah, that’s a case I don’t think about, or code for. Because, as the best program on my PC, Notepad++ never gets closed. :-D
does READY callback work for you?
It seems that this one can’t really be used when running on linux.
BUFFERACTIVATED might be another callback which is useful in such cases.
Cheers
Claudia
BBCode Python Module II.
BBCode Python Module
In the last few weeks I have been tinkering with a dynamic website created with Turbogears, but that's not what this blog entry is about. The website I have in mind is similar to a forum in that most of the content comes from the users (can't tell you exactly what it is just yet). I wanted a way for users to post comments with simple formatting, but I didn't want to let them enter straight html - for all the problems that would cause. No doubt, some wise-guy would figure out that he could enter the tag! So I decided to implement something like BBCode, which I dubbed 'Post Markup'.
But once I came close to finishing Post Markup, I realised it was so much like BBCode, it was BBCode (which doesn't seem to have any strict definition anyway). My BBCode parser is deliberately quite relaxed, it will try to make sense of the BBCode rather than throwing errors. It will close open tags as well as handle overlapping tags so that it always produces valid XHTML snippets.
You can download postmarkup.py here. This code is 'politeware', you may use it for any purpose you want as long as you say 'thank you' and you promise not to sue me if it breaks! Let me know if you have any suggestions or bug-fixes.
Here's a quick example of basic use.
import postmarkup

markup = postmarkup.PostMarkup().default_tags()
bbcode = "[b]Hello, World![/b]"
print markup.render_to_html(bbcode)
There are comments in the module. If you have any questions, please email or (better) post a comment here so I can build up documentation.
The following is a cut and paste of the test output. It shows the basic tags (bold, italic etc) and more advanced tags.
[b]Hello[/b]
[s]Strike through[/s]
[b]bold [i]bold and italic[/b] italic[/i]
[google]Will McGugan[/google]
[wiki Will McGugan]Look up my name in Wikipedia[/wiki]
[link]My homepage[/link]
[link][/link]
[quote Will said...]BBCode is very cool[/quote]
Will said...
BBCode is very cool
[b]Long test[/b] New line characters are converted to breaks. Tags may be [b]ove[i]rl[/b]apped[/i]. [i]Open tags will be closed.
New line characters are converted to breaks. Tags may be overlapped. Open tags will be closed.
Can't work out BaseDocument.StartPickSession
I want to do something very simple: Enter into a pick session & return the 1st object picked, ending the session & storing the object reference. I don't need a multi-pick session.
Here's my code:
def OnPick(flags, active, multi):
    if flags & c4d.PICKSESSION_FLAG_CANCELED:
        print "User cancel"
        doc.StopPickSession(cancel = True)
    else:
        doc.StopPickSession(cancel = False)
        print "active: ", active

def main():
    doc = documents.GetActiveDocument()
    doc.StartPickSession(OnPick, multi=False)
This seems to work & 'active' contains the picked object.
But I can't return any reference to the object in 'active'
as soon as I try to read it into a variable, or return it I get AttributeError: 'function' object has no attribute 'function'.
I'm afraid I just don't understand how pick sessions are supposed to work.
Any help would be very appreciated.
Any ideas anyone ?
I can't find an example anywhere on the internet for how a pick session works, except for one plugin cafe post, which is how I got as far as I have.
Hi @Graeme, first of all, we process and answer questions each day during the working week, so it's not necessary to bump a topic; this will not accelerate our answers.
Regarding your issue, PickSession is an async task, meaning it will not block the current execution flow. So there is no real way to catch the result in the main method just after StartPickSession.
The solution is that in OnPick you either do what you need to do directly, or you reroute the data to where you want it.
Here an example
import c4d

def OnPick(flags, active, multi):
    if flags & c4d.PICKSESSION_FLAG_CANCELED:
        doc.StopPickSession(cancel=True)
    main(active)

def main(pickedObjects=None):
    doc = c4d.documents.GetActiveDocument()
    if pickedObjects is None:
        doc.StartPickSession(OnPick, multi=False)
    else:
        print(pickedObjects)
        doc.StopPickSession(cancel=False)

if __name__=='__main__':
    main()
If you have any questions, feel free to ask.
Cheers,
Maxime.
Thanks Maxime, I'll process this & see if I can work it out.
Thanks for the example.
So trying this out, there are a couple of problems:
- Once I have run the script & picked an object, it 'remembers' it & if I run the script again, pickedObjects contains the object I picked in that previous session. I want the selection to be cleared each time
- If I revert my scene to saved, run the script & press escape to cancel the pick session, it reliably hangs Cinema 4D - is there some kind of clean-up I need to be performing or msg to send ?
Thanks for your help.
I'm not able to reproduce or I don't understand your problem.
Just to be sure you execute the script within the script manager?
Same here, I'm not able to reproduce. Which version are you on?
Cheers,
Maxime.
If there is no further reply from you by tomorrow, I will consider this topic solved and mark it as such, but feel free to open it again if you have more information.
Cheers,
Maxime.
The performance of shell sort depends on the type of sequence used for a given input array.
Some of the optimal sequences used are:
- Shell’s original sequence: N/2 , N/4 , …, 1
- Knuth’s increments: 1, 4, 13, …, (3^k – 1) / 2
- Sedgewick’s increments: 1, 8, 23, 77, 281, 1073, 4193, 16577, …, 4^(j+1) + 3·2^j + 1
- Hibbard’s increments: 1, 3, 7, 15, 31, 63, 127, 255, 511…
- Papernov & Stasevich increment: 1, 3, 5, 9, 17, 33, 65,...
- Pratt: 1, 2, 3, 4, 6, 9, 8, 12, 18, 27, 16, 24, 36, 54, 81....
How Shell Sort Works?
- Suppose, we need to sort the following array.
- We are using the shell’s original sequence (N/2, N/4, ..., 1) as intervals in our algorithm.
In the first loop, if the array size is N = 8, then the elements lying at the interval of N/2 = 4 are compared and swapped if they are not in order.
- The 0th element is compared with the 4th element.
- If the 0th element is greater than the 4th one, then the 4th element is first stored in the temp variable, the 0th element (i.e. the greater element) is stored in the 4th position, and the element stored in temp is stored in the 0th position.
This process goes on for all the remaining elements.
- In the second loop, an interval of N/4 = 8/4 = 2 is taken, and again the elements lying at these intervals are sorted.
You might get confused at this point.
The elements at 4th and 2nd position are compared. The elements at 2nd and 0th position are also compared. All the elements in the array lying at the current interval are compared.
- The same process goes on for remaining elements.
- Finally, when the interval is N/8 = 8/8 = 1, the array elements lying at the interval of 1 are sorted. The array is now completely sorted.
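As a quick sanity check of the intervals described above, here is a tiny sketch (the helper name is ours, not part of the article) that lists Shell's original sequence for a given N:

```python
def shell_gaps(n):
    # Shell's original sequence: N/2, N/4, ..., 1
    gaps = []
    gap = n // 2
    while gap > 0:
        gaps.append(gap)
        gap //= 2
    return gaps

print(shell_gaps(8))  # -> [4, 2, 1]
```

For N = 8 this yields exactly the three passes walked through above: gap 4, then 2, then 1.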
Shell Sort Algorithm
shellSort(array, size)
  for interval i <- size/2n down to 1
    for each interval "i" in array
      sort all the elements at interval "i"
end shellSort
Python, Java and C/C++ Examples
# Python3 program for implementation of Shell Sort
def shellSort(array, n):
    gap = n // 2
    while gap > 0:
        for i in range(gap, n):
            temp = array[i]
            j = i
            while j >= gap and array[j - gap] > temp:
                array[j] = array[j - gap]
                j -= gap
            array[j] = temp
        gap //= 2

data = [9, 8, 3, 7, 5, 6, 4, 1]
size = len(data)
shellSort(data, size)
print('Sorted Array in Ascending Order:')
print(data)
// Shell sort in Java programming
import java.util.Arrays;

class ShellSort {
    void shellSort(int array[], int n) {
        for (int gap = n / 2; gap > 0; gap /= 2) {
            for (int i = gap; i < n; i += 1) {
                int temp = array[i];
                int j;
                for (j = i; j >= gap && array[j - gap] > temp; j -= gap) {
                    array[j] = array[j - gap];
                }
                array[j] = temp;
            }
        }
    }

    public static void main(String args[]) {
        int[] data = {9, 8, 3, 7, 5, 6, 4, 1};
        int size = data.length;
        ShellSort ss = new ShellSort();
        ss.shellSort(data, size);
        System.out.println("Sorted Array in Ascending Order: ");
        System.out.println(Arrays.toString(data));
    }
}
// Shell Sort in C programming
#include <stdio.h>
#include <string.h>

void shellSort(int array[], int size) {
    for (int gap = size / 2; gap > 0; gap /= 2) {
        for (int i = gap; i < size; i += 1) {
            int temp = array[i];
            int j;
            for (j = i; j >= gap && array[j - gap] > temp; j -= gap) {
                array[j] = array[j - gap];
            }
            array[j] = temp;
        }
    }
}

void printArray(int array[], int size) {
    for (int i = 0; i < size; ++i) {
        printf("%d ", array[i]);
    }
    printf("\n");
}

int main() {
    int data[] = {9, 8, 3, 7, 5, 6, 4, 1};
    int size = sizeof(data) / sizeof(data[0]);
    shellSort(data, size);
    printf("Sorted array: \n");
    printArray(data, size);
}
// Shell Sort in C++ programming
#include <iostream>
using namespace std;

void shellSort(int array[], int size) {
    for (int gap = size / 2; gap > 0; gap /= 2) {
        for (int i = gap; i < size; i += 1) {
            int temp = array[i];
            int j;
            for (j = i; j >= gap && array[j - gap] > temp; j -= gap) {
                array[j] = array[j - gap];
            }
            array[j] = temp;
        }
    }
}

void printArray(int array[], int size) {
    int i;
    for (i = 0; i < size; i++)
        cout << array[i] << " ";
    cout << endl;
}

int main() {
    int data[] = {9, 8, 3, 7, 5, 6, 4, 1};
    int size = sizeof(data) / sizeof(data[0]);
    shellSort(data, size);
    cout << "Sorted array: \n";
    printArray(data, size);
}
Complexity
Shell sort is an unstable sorting algorithm because this algorithm does not examine the elements lying in between the intervals.
Time Complexity
- Worst Case Complexity: less than or equal to O(n²)
The worst case complexity for shell sort is always less than or equal to O(n²).
According to Poonen's theorem, the worst case complexity for shell sort is Θ(N(log N)²/(log log N)²) or Θ(N(log N)²/log log N) or Θ(N(log N)²), or something in between.
- Best Case Complexity: O(n*log n)
When the array is already sorted, the total number of comparisons for each interval (or increment) is equal to the size of the array.
- Average Case Complexity: O(n*log n)
It is around O(n^1.25).
The complexity depends on the interval chosen. The above complexities differ for different increment sequences. The best increment sequence is unknown.
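To see the effect of the gap sequence concretely, here is an instrumented sketch (a helper of ours, not part of the article's code) that counts element shifts for a given gap list; passing [1] degenerates to plain insertion sort:

```python
def shell_sort_count(arr, gaps):
    # shell sort on a copy of arr, counting inner-loop element shifts
    a = list(arr)
    moves = 0
    for gap in gaps:
        for i in range(gap, len(a)):
            temp = a[i]
            j = i
            while j >= gap and a[j - gap] > temp:
                a[j] = a[j - gap]
                j -= gap
                moves += 1
            a[j] = temp
    return a, moves

data = [9, 8, 3, 7, 5, 6, 4, 1]
print(shell_sort_count(data, [4, 2, 1]))  # Shell's gaps for N = 8
print(shell_sort_count(data, [1]))        # plain insertion sort
```

On this sample the early large-gap passes move far-apart elements close to their final positions cheaply, so the gapped run needs noticeably fewer shifts than the gap-1 (insertion sort) run.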
Space Complexity:
The space complexity for shell sort is O(1).
Shell Sort Applications
Shell sort is used when:
- the call stack is a concern; the uClibc library uses this sort.
- recursion exceeds a limit; the bzip2 compressor uses it.
- insertion sort does not perform well when close elements are far apart. Shell sort helps in reducing the distance between the close elements, so fewer swaps need to be performed.
- Using Resources
- Working with Menus
- Introduction to GDI (Graphics Device Interface)
- Handling Important Events
- Sending Messages Yourself
- Summary
Handling Important Events
As you've been painfully learning, Windows is an event-based operating system. Responding to events is one of the most important aspects of a standard Windows program. This next section covers some of the more important events that have to do with window manipulation, input devices, and timing. If you can handle these basic events, you'll have more than you need in your Windows arsenal to handle anything that might come up as part of a DirectX game, which itself relies very little on events and the Windows operating system.
Window Manipulation
There are a number of messages that Windows sends to notify you that the user has manipulated your window. Table 3.4 contains a small list of some of the more interesting manipulation messages that Windows generates.
Table 3.4 Window Manipulation Messages
Let's take a look at WM_ACTIVATE, WM_CLOSE, WM_SIZE, and WM_MOVE and what they do. For each one of these messages, I'm going to list the message, wparam, lparam, and some comments, along with a short example WinProc() handler for the event.
Message: WM_ACTIVATE
Parameterization:
fActive      = LOWORD(wParam);       // activation flag
fMinimized   = (BOOL)HIWORD(wParam); // minimized flag
hwndPrevious = (HWND)lParam;         // window handle
The fActive parameter basically defines what is happening to the window; that is, is the window being activated or deactivated? This information is stored in the low-order word of wparam and can take on the values shown in Table 3.5.
Table 3.5 The Activation Flags for WM_ACTIVATE
The fMinimized variable simply indicates if the window was minimized. This is true if the variable is nonzero. Lastly, the hwndPrevious value identifies the window being activated or deactivated, depending on the value of the fActive parameter. If the value of fActive is WA_INACTIVE, hwndPrevious is the handle of the window being activated. If the value of fActive is WA_ACTIVE or WA_CLICKACTIVE, hwndPrevious is the handle of the window being deactivated. This handle can be NULL. That makes sense, huh?
In essence, you use the WM_ACTIVATE message if you want to know when your application is being activated or deactivated. This might be useful if your application keeps track of every time the user Alt+Tabs away or selects another application with the mouse. On the other hand, when your application is reactivated, maybe you want to play a sound or do something. Whatever, it's up to you.
Here's how you code when your application is being activated in the main WinProc():
case WM_ACTIVATE:
    {
    // test if window is being activated
    if (LOWORD(wparam) != WA_INACTIVE)
       {
       // application is being activated
       } // end if
    else
       {
       // application is being deactivated
       } // end else
    } break;
Message: WM_CLOSE
Parameterization: None
The WM_CLOSE message is very cool. It is sent right before a WM_DESTROY and the following WM_QUIT are sent. The WM_CLOSE indicates that the user is trying to close your window. If you simply return(0) in your WinProc(), nothing will happen and the user won't be able to close your window! Take a look at DEMO3_7.CPP and the executable DEMO3_7.EXE to see this in action. Try killing the application; you won't be able to!
CAUTION
Don't panic when you can't kill DEMO3_7.EXE. Simply press Ctrl+Alt+Del, and the Task Manager will come up. Then select and terminate the DEMO3_7.EXE application. It will cease to exist, just like service at electronics stores starting with "F" in Silicon Valley.
Here's the coding of the empty WM_CLOSE handler in the WinProc() as coded in DEMO3_7.CPP:
case WM_CLOSE:
    {
    // kill message, so no further WM_DESTROY is sent
    return(0);
    } break;
If making the user mad is your goal, the preceding code will do it. However, a better use of trapping the WM_CLOSE message might be to include a message box that confirms that the application is going to close or maybe do some housework. DEMO3_8.CPP and the executable take this route. When you try to close the window, a message box asks if you're certain. The logic flow for this is shown in Figure 3.20.
Figure
3.20 The logic flow for WM_CLOSE.
Here's the code from DEMO3_8.CPP that processes the WM_CLOSE message:
case WM_CLOSE:
    {
    // display message box
    int result = MessageBox(hwnd,
        "Are you sure you want to close this application?",
        "WM_CLOSE Message Processor",
        MB_YESNO | MB_ICONQUESTION);

    // does the user want to close?
    if (result == IDYES)
       {
       // call default handler
       return (DefWindowProc(hwnd, msg, wparam, lparam));
       } // end if
    else
       // throw message away
       return(0);
    } break;
Cool, huh? Notice the call to the default message handler, DefWindowProc(). This occurs when the user answers Yes and you want the standard shutdown process to continue. If you knew how to, you could have sent a WM_DESTROY message instead, but since you haven't learned how to send messages yet, you just called the default handler. Either way is fine, though.
Next, let's take a look at the WM_SIZE message, which is an important message to process if you've written a windowed game and the user keeps resizing the view window!
Message: WM_SIZE
Parameterization:
fwSizeType = wParam;         // resizing flag
nWidth     = LOWORD(lParam); // width of client area
nHeight    = HIWORD(lParam); // height of client area
The fwSizeType flag indicates what kind of resizing just occurred, as shown in Table 3.6, and the low and high word of lParam indicate the new window client dimensions.
Table 3.6 Resizing Flags for WM_SIZE
As I said, processing the WM_SIZE message can be very important for windowed games because when the window is resized, the graphics display must be scaled to fit. This will never happen if your game is running in full-screen, but in a windowed game, you can count on the user trying to make the window larger and smaller. When this happens, you must recenter the display and scale the universe or whatever to keep the image looking correct. As an example of tracking the WM_SIZE message, DEMO3_9.CPP prints out the new size of the window as it's resized. The code that tracks the WM_SIZE message in DEMO3_9.CPP is shown here:
case WM_SIZE:
    {
    // extract size info
    int width  = LOWORD(lparam);
    int height = HIWORD(lparam);

    // get a graphics context
    HDC hdc = GetDC(hwnd);

    // print the new size
    char buffer[80];
    sprintf(buffer,"WM_SIZE Called - New Size = (%d,%d)", width, height);
    TextOut(hdc, 0,0, buffer, strlen(buffer));

    // release the dc back
    ReleaseDC(hwnd, hdc);
    } break;
CAUTION
You should know that the code for the WM_SIZE message handler has a potential problem: When a window is resized, not only is a WM_SIZE message sent, but a WM_PAINT message is sent as well! Therefore, if the WM_PAINT message was sent after the WM_SIZE, the code in WM_PAINT could erase the background and thus the information just printed in WM_SIZE. Luckily, this isn't the case, but it's a good example of problems that can occur when messages are out of order or when they aren't sent in the order you think they are.
Last, but not least, let's take a look at the WM_MOVE message. It's almost identical to WM_SIZE, but it is sent when a window is moved rather than resized. Here are the details:
Message: WM_MOVE
Parameterization:
xPos = (int)LOWORD(lParam); // new horizontal position in screen coords
yPos = (int)HIWORD(lParam); // new vertical position in screen coords
WM_MOVE is sent whenever a window is moved to a new position, as shown in Figure 3.21. However, the message is sent after the window has been moved, not during the movement in real time. If you want to track the exact pixel-by-pixel movement of a window, you need to process the WM_MOVING message. However, in most cases, processing stops until the user is done moving your window.
Figure
3.21 Generation of the WM_MOVE message.
As an example of tracking the motion of a window, DEMO3_10.CPP and the associated executable DEMO3_10.EXE print out the new position of a window whenever it's moved. Here's the code that handles the WM_MOVE processing:
case WM_MOVE:
    {
    // extract the position
    int xpos = LOWORD(lparam);
    int ypos = HIWORD(lparam);

    // get a graphics context
    HDC hdc = GetDC(hwnd);

    // print the new position
    char buffer[80];
    sprintf(buffer,"WM_MOVE Called - New Position = (%d,%d)", xpos, ypos);
    TextOut(hdc, 0,0, buffer, strlen(buffer));

    // release the dc back
    ReleaseDC(hwnd, hdc);
    } break;
Well, that's it for window manipulation messages. There are a lot more, obviously, but you should have the hang of it now. The thing to remember is that there is a message for everything. If you want to track something, just look in the Win32 Help and sure enough, you'll find a message that works for you!
The next sections cover input devices so you can interact with the user (or yourself) and make much more interesting demos and experiments that will help you master Windows programming.
Banging on the Keyboard
Back in the old days, accessing the keyboard required sorcery. You had to write an interrupt handler, create a state table, and perform a number of other interesting feats to make it work. I'm a low-level programmer, but I can say without regret that I don't miss writing keyboard handlers anymore!
Ultimately you're going to use DirectInput to access the keyboard, mouse, joystick, and any other input devices. Nevertheless, you still need to learn how to use the Win32 library to access the keyboard and mouse. If for nothing else, you'll need them to respond to GUI interactions and/or to create more engaging demos throughout the book until we cover DirectInput. So without further ado, let's see how the keyboard works.
The keyboard consists of a number of keys, a microcontroller, and support electronics. When you press a key or keys on the keyboard, a serial stream of packets is sent to Windows describing the key(s) that you pressed. Windows then processes this stream and sends your window keyboard event messages. The beauty is that under Windows, you can access the keyboard messages in a number of ways:
With the WM_CHAR message
With the WM_KEYDOWN and WM_KEYUP messages
With a call to GetAsyncKeyState()
Each one of these methods works in a slightly different manner. The WM_CHAR and WM_KEYDOWN messages are generated by Windows whenever a keyboard keypress or event occurs. However, there is a difference between the types of information encapsulated in the two messages. When you press a key on the keyboard, such as A, two pieces of data are generated:
The scan code
The ASCII code
The scan code is a unique code that is assigned to each key of the keyboard and has nothing to do with ASCII. In many cases, you just want to know if the A key was pressed; you're not interested in whether or not the Shift key was held down and so on. Basically, you just want to use the keyboard like a set of momentary switches. This is accomplished by using scan codes. The WM_KEYDOWN message is responsible for generating scan codes when keys are pressed.
The ASCII code, on the other hand, is cooked data. This means that if you press the A key on the keyboard but the Shift key is not pressed or the Caps Lock key is not engaged, you see an a character. Similarly, if you press Shift+A, you see an A. The WM_CHAR message sends these kinds of messages.
You can use either techniqueit's up to you. For example, if you were writing a word processor, you would probably want to use the WM_CHAR message because the character case matters and you want ASCII codes, not virtual scan codes. On the other hand, if you're making a game and F is fire, S is thrust, and the Shift key is the shields, who cares what the ASCII code is? You just want to know if a particular button on the keyboard is up or down.
The final method of reading the keyboard is to use the Win32 function GetAsyncKeyState(), which tracks the last known keyboard state of the keys in a state table, like an array of Boolean switches. This is the method I prefer because you don't have to write a keyboard handler.
Now that you know a little about each method, let's cover the details of each one in order, starting with the WM_CHAR message.
The WM_CHAR message has the following parameterization:
Table 3.7 Bit Encoding for the Key State Vector
To process the WM_CHAR message, all you have to do is write a message handle for it, like this:
case WM_CHAR:
    {
    // extract ascii code and state vector
    int ascii_code = wparam;
    int key_state  = lparam;

    // take whatever action
    } break;
And of course, you can test for various state information that might be of interest. For example, here's how you would test for the Alt key being pressed down:
// test the 29th bit of key_state to see if it's true
#define ALT_STATE_BIT 0x20000000

if (key_state & ALT_STATE_BIT)
   {
   // do something
   } // end if
And you can test for the other states with similar bitwise tests and manipulations.
As an example of processing the WM_CHAR message, I have created a demo that prints out the character and the state vector in hexadecimal form as you press keys. The program is called DEMO3_11.CPP, and the executable is of course DEMO3_11.EXE. Try pressing weird key combinations and see what happens. The code that processes and displays the WM_CHAR information is shown here, excerpted from the WinProc():
case WM_CHAR:
    {
    // get the character
    char ascii_code = wparam;
    unsigned int key_state = lparam;

    // get a graphics context
    HDC hdc = GetDC(hwnd);

    // print the ascii code and key state
    char buffer[80];
    sprintf(buffer,"WM_CHAR: Character = %c ",ascii_code);
    TextOut(hdc, 0,0, buffer, strlen(buffer));

    sprintf(buffer,"Key State = 0X%X ",key_state);
    TextOut(hdc, 0,16, buffer, strlen(buffer));

    // release the dc back
    ReleaseDC(hwnd, hdc);
    } break;
The next keyboard event message, WM_KEYDOWN, is similar to WM_CHAR, except that the information is not "cooked." The key data sent during a WM_KEYDOWN message is the virtual scan code of the key rather than the ASCII code. The virtual scan codes are similar to the standard scan codes generated by any keyboard, except that virtual scan codes are guaranteed to be the same for any keyboard. For example, it's possible that the scan code for a particular key on your 101 AT-style keyboard is 67, but on another manufacturer's keyboard, it might be 69. See the problem?
The solution used in Windows was to virtualize the real scan codes to virtual scan code with a lookup table. As programmers, we use the virtual scan codes and let Windows do the translation. Thanks, Windows! With that in mind, here are the details of the WM_KEYDOWN message:
Message: WM_KEYDOWN
wparam: Contains the virtual key code of the key pressed. Table 3.8 contains a list of the most common keys that you might be interested in.
lparam: Contains a bit-encoded state vector that describes other special control keys that may be pressed. The bit encoding is shown in Table 3.8.
Table 3.8 Virtual Key Codes
Note: The keys A-Z and 0-9 have no VK_ codes. You must use the numeric constants or define your own.
In addition to the WM_KEYDOWN message, there is WM_KEYUP. It has the same parameterization; that is, wparam contains the virtual key code, and lparam contains the key state vector. The only difference is that WM_KEYUP is sent when a key is released.
For example, if you're using the WM_KEYDOWN message to control something, take a look at the code here:
case WM_KEYDOWN:
    {
    // get virtual key code and data bits
    int virtual_code = (int)wparam;
    int key_state    = (int)lparam;

    // switch on the virtual_key code to be clean
    switch(virtual_code)
        {
        case VK_RIGHT: { } break;
        case VK_LEFT:  { } break;
        case VK_UP:    { } break;
        case VK_DOWN:  { } break;

        // more cases...

        default: break;
        } // end switch

    // tell windows that you processed the message
    return(0);
    } break;
As an experiment, try modifying the code in DEMO3_11.CPP to support the WM_KEYDOWN message instead of WM_CHAR. When you're done, come back and we'll talk about the last method of reading the keyboard.
The final method of reading the keyboard is to make a call to one of the keyboard state functions: GetKeyboardState(), GetKeyState(), or GetAsyncKeyState(). We'll focus on GetAsyncKeyState() because it works for a single key, which is what you're usually interested in rather than the entire keyboard. If you're interested in the other functions, you can always look them up in the Win32 SDK. Anyway, GetAsyncKeyState() has the following prototype:
SHORT GetAsyncKeyState(int virtual_key);
You simply send the function the virtual key code that you want to test, and if the high bit of the return value is 1, the key is pressed. Otherwise, it's not. I have written some macros to make this easier:
#define KEYDOWN(vk_code) ((GetAsyncKeyState(vk_code) & 0x8000) ? 1 : 0)
#define KEYUP(vk_code)   ((GetAsyncKeyState(vk_code) & 0x8000) ? 0 : 1)
The beauty of using GetAsyncKeyState() is that it's not coupled to the event loop. You can test for keypresses anywhere you want. For example, say that you're writing a game and you want to track the arrow keys, spacebar, and maybe the Ctrl key. You don't want to have to deal with the WM_CHAR or WM_KEYDOWN messages; you just want to code something like this:
if (KEYDOWN(VK_DOWN))
   {
   // move ship down, whatever
   } // end if

if (KEYDOWN(VK_SPACE))
   {
   // fire weapons maybe?
   } // end if

// and so on
Similarly, you might want to detect when a key is released to turn something off. Here's an example:
if (KEYUP(VK_RETURN))
   {
   // disengage engines
   } // end if
As an example, I have created a demo that continually prints out the status of the arrow keys in the WinMain(). It's called DEMO3_12.CPP, and the executable is DEMO3_12.EXE. Here's the WinMain() from the program:
int WINAPI WinMain(HINSTANCE hinstance,
                   HINSTANCE hprevinstance,
                   LPSTR lpcmdline,
                   int ncmdshow)
{
WNDCLASSEX winclass; // this will hold the class we create
HWND       hwnd;     // generic window handle
MSG        msg;      // generic message
HDC        hdc;      // graphics device context

// first fill in the window class structure
winclass.cbSize        = sizeof(WNDCLASSEX);
winclass.style         = CS_DBLCLKS | CS_OWNDC | CS_HREDRAW | CS_VREDRAW;
winclass.lpfnWndProc   = WindowProc;
winclass.cbClsExtra    = 0;
winclass.cbWndExtra    = 0;
winclass.hInstance     = hinstance;
winclass.hIcon         = LoadIcon(NULL, IDI_APPLICATION);
winclass.hCursor       = LoadCursor(NULL, IDC_ARROW);
winclass.hbrBackground = (HBRUSH)GetStockObject(BLACK_BRUSH);
winclass.lpszMenuName  = NULL;
winclass.lpszClassName = WINDOW_CLASS_NAME;
winclass.hIconSm       = LoadIcon(NULL, IDI_APPLICATION);

// save hinstance in global
hinstance_app = hinstance;

// register the window class
if (!RegisterClassEx(&winclass))
   return(0);

// create the window
if (!(hwnd = CreateWindowEx(NULL,          // extended style
                  WINDOW_CLASS_NAME,       // class
                  "GetAsyncKeyState() Demo", // title
                  WS_OVERLAPPEDWINDOW | WS_VISIBLE,
                  0,0,       // initial x,y
                  400,300,   // initial width, height
                  NULL,      // handle to parent
                  NULL,      // handle to menu
                  hinstance, // instance of this application
                  NULL)))    // extra creation parms
return(0);

// save main window handle
main_window_handle = hwnd;

// enter main event loop
while(TRUE)
     {
     if (PeekMessage(&msg, NULL, 0, 0, PM_REMOVE))
        {
        // test if this is a quit
        if (msg.message == WM_QUIT)
           break;

        // translate any accelerator keys
        TranslateMessage(&msg);

        // send the message to the window proc
        DispatchMessage(&msg);
        } // end if

     // get a graphics context
     hdc = GetDC(hwnd);
     char buffer[80];

     // print out the state of each arrow key
     sprintf(buffer,"Up Arrow: = %d  ", KEYDOWN(VK_UP));
     TextOut(hdc, 0,0, buffer, strlen(buffer));

     sprintf(buffer,"Down Arrow: = %d  ", KEYDOWN(VK_DOWN));
     TextOut(hdc, 0,16, buffer, strlen(buffer));

     sprintf(buffer,"Right Arrow: = %d  ", KEYDOWN(VK_RIGHT));
     TextOut(hdc, 0,32, buffer, strlen(buffer));

     sprintf(buffer,"Left Arrow: = %d  ", KEYDOWN(VK_LEFT));
     TextOut(hdc, 0,48, buffer, strlen(buffer));

     // release the dc back
     ReleaseDC(hwnd, hdc);
     } // end while

// return to Windows like this
return(msg.wParam);

} // end WinMain
Also, if you review the entire source on the CD-ROM, you'll notice that there aren't handlers for WM_CHAR or WM_KEYDOWN in the message handler for the window. The fewer messages that you have to handle in the WinProc(), the better! In addition, this is the first time you have seen action taking place in the WinMain(), which is the section that does all game processing. Notice that there isn't any timing delay or synchronization, so the redrawing of the information is free-running (in other words, working as fast as possible). In Chapter 4, "Windows GDI, Controls, and Last-Minute Gift Ideas," you'll learn about timing issues, how to keep processes locked to a certain frame rate, and so forth. But for now, let's move on to the mouse.
Squeezing the Mouse
The mouse is probably the most innovative computer input device ever created. You point and click, and the mouse pad is physically mapped to the screen surface. That's innovation! Anyway, as you guessed, Windows has a truckload of messages for the mouse, but we're going to look at only two classes of messages: WM_MOUSEMOVE and WM_*BUTTON*.
Let's start with the WM_MOUSEMOVE message. The first thing to remember about the mouse is that its position is relative to the client area of the window that it's in. Referring to Figure 3.22, the mouse sends coordinates relative to the upper-left corner of your window, which is 0,0.
Other than that, the WM_MOUSEMOVE message is fairly straightforward.
Message: WM_MOUSEMOVE
Parameterization:
int mouse_x = (int)LOWORD(lParam);
int mouse_y = (int)HIWORD(lParam);
int buttons = (int)wParam;
Basically, the position is encoded as 16-bit entries in the lparam, and the buttons are encoded in the wparam, as shown in Table 3.9.
Figure 3.22  The details of mouse movement.
Table 3.9  Button Bit Encoding for WM_MOUSEMOVE

Value         Meaning
MK_LBUTTON    Left mouse button is down
MK_MBUTTON    Middle mouse button is down
MK_RBUTTON    Right mouse button is down
MK_SHIFT      Shift key is down
MK_CONTROL    Ctrl key is down
So all you have to do is logically AND one of the bit codes with the button state and you can detect which mouse buttons are pressed. Here's an example of tracking the x,y position of the mouse along with the left and right buttons:
case WM_MOUSEMOVE:
{
    // get the position of the mouse
    int mouse_x = (int)LOWORD(lParam);
    int mouse_y = (int)HIWORD(lParam);

    // get the button state
    int buttons = (int)wParam;

    // test if left button is down
    if (buttons & MK_LBUTTON)
    {
        // do something
    } // end if

    // test if right button is down
    if (buttons & MK_RBUTTON)
    {
        // do something
    } // end if

} break;
Trivial, ooh, trivial! For an example of mouse tracking, take a look at DEMO3_13.CPP on the CD-ROM and the associated executable. The program prints out the position of the mouse and the state of the buttons using the preceding code as a starting point. Take note of how the button changes only when the mouse is moving. This is as you would expect because the message is sent when the mouse moves rather than when the buttons are pressed.
Now for some details. The WM_MOUSEMOVE message is not guaranteed to be sent all the time. You may move the mouse too quickly for it to track. Therefore, don't assume that you'll be able to track individual mouse movements that well. For the most part it's not a problem, but keep it in mind. Also, you should be scratching your head right now, wondering how to track if a mouse button was pressed without a mouse move. Of course, there is a whole set of messages just for that. Take a look at Table 3.10.
Table 3.10  Mouse Button Messages

Message             Description
WM_LBUTTONDBLCLK    Left button double-clicked
WM_LBUTTONDOWN      Left button pressed
WM_LBUTTONUP        Left button released
WM_MBUTTONDBLCLK    Middle button double-clicked
WM_MBUTTONDOWN      Middle button pressed
WM_MBUTTONUP        Middle button released
WM_RBUTTONDBLCLK    Right button double-clicked
WM_RBUTTONDOWN      Right button pressed
WM_RBUTTONUP        Right button released
The button messages also have the position of the mouse encoded just as it was for the WM_MOUSEMOVE message: in the wparam and lparam. For example, to test for a left button double-click, you would do this:
case WM_LBUTTONDBLCLK:
{
    // extract x,y and buttons
    int mouse_x = (int)LOWORD(lParam);
    int mouse_y = (int)HIWORD(lParam);

    // do something intelligent

    // tell windows you handled it
    return(0);
} break;
Killer! I feel powerful, don't you? Windows is almost at our feet!
https://www.informit.com/articles/article.aspx?p=30009&seqNum=4
I want it to ask the user for a float input to assign to money, such as a dollar amount. The smallest unit would be a cent, or 0.01, so I want it to reprompt the user for input every time he enters a negative value or zero. The while condition seems fine. What is wrong with it?
#include <stdio.h>
#include <cs50.h>
#include <math.h>
int main(void)
{
float amount = 0;
do
{
printf("How much change to be returned?\n");
amount = GetFloat();
}
while(amount < 0.01);
}
jharvard@appliance (~/Dropbox/pset1): ./greedy
How much change to be returned?
0.01
How much change to be returned?
0.01 cannot be represented exactly in a float variable. The float closest to 0.01 is in fact slightly smaller than the double constant 0.01, so after GetFloat() stores your input, the comparison amount < 0.01 is still true and the loop does not terminate for an input of 0.01.
To fix this you can switch to a full-blown currency or fixed-point datatype, or more simply change the program to work with cents instead of dollars, thereby avoiding fractional values and especially those that cannot be represented exactly. Another often-used technique, from Kirhog's answer, is to use an epsilon in your comparisons, e.g.:
double EPSILON = 0.005;
...
while (amount < 0.01 - EPSILON);  /* accept values within EPSILON of one cent */
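You can see both the rounding problem and the cents-based fix in a quick sketch. Plain Python is used here rather than C; the `struct` module rounds 0.01 to the nearest 32-bit float, mimicking what happens to the C float variable:

```python
import struct

# Round 0.01 to the nearest 32-bit float, then widen back to a double,
# which is what happens when a C float is compared against the double 0.01.
f32 = struct.unpack('f', struct.pack('f', 0.01))[0]
print(f32)          # slightly below 0.01
print(f32 < 0.01)   # True -> the do/while condition stays true

# The fix from the answer: work in integer cents instead.
cents = round(f32 * 100)
print(cents >= 1)   # True -> the loop would terminate
```

Because `cents` is an integer, the comparison is exact and the reprompt loop behaves as intended.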
https://codedump.io/share/wrUIkcX2qpkl/1/why-is-this-do-while-loop-not-working-properly
CGTalk - Maya Programming - Rotation based on one object
newguy4life
03-11-2011, 07:33 PM
Hello everybody,
I am very new to scripting, and I'm sure this has been covered before but I had a hard time knowing what to search for to find my answer, so I apologize if this is redundant.
I am trying to basically have a bunch of objects in a string rotate by a fraction of what the object before it does. So if I have objects A B and C, and I rotate A 100, B would be 90, C would be 81 etc.
I have it working where I hit the button and it positions them correctly, but I can't figure out how to link them via an expression or something, so that I can move object A and the other objects would move in relation to that without hitting the button.
Here is my current script:
// rotates the object 90% of object below
string $feathers[] = `ls -sl`;
int $i = 0;
for ($i=1; $i<size($feathers); $i++)
{
float $ro = `getAttr($feathers[$i-1] + ".rotateY")`;
setAttr ($feathers[$i]+".rotateY") ($ro*.9);
};
If anybody can point me in the right direction, you would be helping me out BIG time!
Thanks a lot!
fatsumo
11-13-2011, 06:57 AM
cubeB.rotateY = cubeA.rotateY*0.9;
cubeC.rotateY = cubeA.rotateY*0.8;
//in the expression editor
mrcain
11-14-2011, 03:27 AM
You can also do this in Python.
create a bunch of cubes, select all of them apart from pCube1 and run this....
import maya.cmds as mc

driven = mc.ls(sl=True)
ct = 1
for i in range(0, len(driven), 1):
    multi = mc.createNode('multiplyDivide')
    StoreMulti = mc.ls('multiply*')
    mc.connectAttr("pCube1" + '.ry', (StoreMulti[i]) + '.input1X')
    ct = ct - 0.1
    sum = ct
    print sum
    mc.setAttr((StoreMulti[i]) + '.input2X', sum)
    mc.connectAttr((StoreMulti[i]) + '.outputX', (driven[i]) + '.ry')
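Note that both answers above use a linear falloff (0.9, 0.8, 0.7, ...), while the question's example (A = 100, B = 90, C = 81) is geometric: each object gets 90% of the previous object's rotation, i.e. a multiplier of 0.9**i for the i-th object. The multipliers for the geometric version can be sketched in plain Python, independent of Maya:

```python
rotation_a = 100.0
decay = 0.9

# i-th object in the chain gets rotation_a * decay**i
rotations = [round(rotation_a * decay ** i, 6) for i in range(4)]
print(rotations)  # [100.0, 90.0, 81.0, 72.9]
```

To reproduce this in the node network, you would set each multiplyDivide node's input2X to `decay ** i` instead of the linearly decreasing `ct`.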
http://forums.cgsociety.org/archive/index.php/t-963913.html
First solution in Clear category for Counting Tiles by nickie
from math import ceil, hypot
def checkio(radius):
n = int(ceil(radius))
full = partial = 0
# let's stay in the upper-right quarter of the circle
for i in range(n):
for j in range(n):
if hypot(i+1, j+1) < radius:
full += 1 # full iff upper right corner is in the circle
elif hypot(i, j) < radius:
partial += 1 # partial iff lower right corner is in the circle
return [4*full, 4*partial]
#These "asserts" using only for self-checking and not necessary for auto-testing
if __name__ == '__main__':
assert checkio(2) == [4, 12], "N=2"
assert checkio(3) == [16, 20], "N=3"
assert checkio(2.1) == [4, 20], "N=2.1"
assert checkio(2.5) == [12, 20], "N=2.5"
Oct. 15, 2013
https://py.checkio.org/mission/counting-tiles/publications/nickie/python-3/first/share/4082bd5034815a2ca1a2da28581134db/
I am working with an algorithm that, for each iteration, needs to find which region of a Voronoi diagram a set of arbirary coordinats belong to. that is, which region each coordinate is located within. (We can assume that all coordinates will belong to a region, if that makes any difference.)
I don’t have any code that works in Python yet, but the the pseudo code looks something like this:
## we are in two dimensions and we have 0<x<1, 0<y<1.
for i in xrange(1000):
    XY = get_random_points_in_domain()
    XY_candidates = get_random_points_in_domain()
    vor = Voronoi(XY)  # for instance scipy.spatial.Voronoi
    regions = get_regions_of_candidates(vor, XY_candidates)  # this is the function I need
    ## use regions for something
I know that the scipy.Delaunay has a function called find_simplex which will do pretty much what I want for simplices in a Delaunay triangulation, but I need the Voronoi diagram, and constructing both is something I wish to avoid.
Questions:
1. Is there a library of some sort that will let me do this easily?
2. If not, is there a good algorithm I could look at that will let me do this efficiently?
Update
Jamie’s solution is exactly what I wanted. I’m a little embarrassed that I didn’t think of it myself though …
Best answer
You don't need to actually calculate the Voronoi regions for this. By definition the Voronoi region around a point in your set is made up of all points that are closer to that point than to any other point in the set. So you only need to calculate distances and find nearest neighbors. Using scipy's cKDTree you could do:
import numpy as np
from scipy.spatial import cKDTree

n_voronoi, n_test = 100, 1000

voronoi_points = np.random.rand(n_voronoi, 2)
test_points = np.random.rand(n_test, 2)

voronoi_kdtree = cKDTree(voronoi_points)
test_point_dist, test_point_regions = voronoi_kdtree.query(test_points, k=1)
test_point_regions now holds an array of length n_test with the indices of the points in voronoi_points closest to each of your test_points.
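Jamie's point - that the Voronoi cell containing a point is, by definition, the cell of its nearest site - can be checked with a brute-force version that needs nothing beyond the standard library (names here are illustrative; the cKDTree version above is far faster for large inputs):

```python
import math
import random

def nearest_site(p, sites):
    # index of the closest site == index of the Voronoi region containing p
    return min(range(len(sites)), key=lambda i: math.dist(p, sites[i]))

random.seed(0)
sites = [(random.random(), random.random()) for _ in range(100)]
tests = [(random.random(), random.random()) for _ in range(1000)]

regions = [nearest_site(p, sites) for p in tests]
print(len(regions))  # one region index per test point
```

The brute-force lookup is O(n_sites) per query, while the k-d tree answers each query in roughly O(log n_sites), which is why the tree matters once the site set grows.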
https://pythonquestion.com/post/finding-voronoi-regions-that-contain-a-list-of-arbitrary-coordinates/
|
Numpy Tutorial
Introduction

NumPy is an acronym for "Numeric Python" or "Numerical Python". It is an open source extension module for Python, which provides fast precompiled functions for mathematical and numerical routines. Furthermore, NumPy enriches the programming language Python with powerful data structures for efficient computation of multi-dimensional arrays and matrices. The implementation is even aiming at huge matrices and arrays. Besides that the module supplies a large library of high-level mathematical functions to operate on these matrices and arrays.
SciPy (Scientific Python) is often mentioned in the same breath with NumPy. SciPy extends the capabilities of NumPy with further useful functions for minimization, regression, Fourier-transformation and many others.
Both NumPy and SciPy are usually not installed by default. NumPy has to be installed before installing SciPy. Numpy can be downloaded from the website:
(Comment: The diagram of the image on the right side is the graphical visualisation of a matrix with 14 rows and 20 columns. It's a so-called Hinton diagram. The size of a square within this diagram corresponds to the size of the value of the depicted matrix. The colour determines, if the value is positive or negative. In our example: the colour red denotes negative values and the colour green denotes positive values.)
NumPy is based on two earlier Python modules dealing with arrays. One of these is Numeric. Numeric is, like NumPy, a Python module for high-performance numeric computing, but it is obsolete nowadays. Another predecessor of NumPy is Numarray, which is a complete rewrite of Numeric but is deprecated as well. NumPy is a merger of those two, i.e. it is built on the code of Numeric and the features of Numarray.
The Python Alternative to Matlab
Python in combination with Numpy, Scipy and Matplotlib can be used as a replacement for MATLAB. The combination of NumPy, SciPy and Matplotlib is a free (meaning both "free" as in "free beer" and "free" as in "freedom") alternative to MATLAB. Even though MATLAB has a huge number of additional toolboxes available, NumPy has the advantage that Python is a more modern and complete programming language and - as we have said already before - is open source. SciPy adds even more MATLAB-like functionalities to Python. Python is rounded out in the direction of MATLAB with the module Matplotlib, which provides MATLAB-like plotting functionality.
Comparison between Core Python and Numpy
When we say "Core Python", we mean Python without any special modules, i.e. especially without NumPy.
The advantages of Core Python:
- high-level number objects: integers, floating point
- containers: lists with cheap insertion and append methods, dictionaries with fast lookup
Advantages of using Numpy with Python:
- array oriented computing
- efficiently implemented multi-dimensional arrays
- designed for scientific computation
A Simple Numpy Example
Before we can use NumPy we will have to import it. It has to be imported like any other module:
import numpy
But you will hardly ever see this. Numpy is usually renamed to np:
import numpy as np
We have a list with values, e.g. temperatures in Celsius:
cvalues = [25.3, 24.8, 26.9, 23.9]
We will turn this into a one-dimensional numpy array:
C = np.array(cvalues)
print(C)
[ 25.3 24.8 26.9 23.9]
Let's assume, we want to turn the values into degrees Fahrenheit. This is very easy to accomplish with a numpy array. The solution to our problem can be achieved by simple scalar multiplication:
print(C * 9 / 5 + 32)
[ 77.54 76.64 80.42 75.02]
Compared to this, the solution for our Python list is extremely awkward:
fvalues = [ x*9/5 + 32 for x in cvalues]
print(fvalues)
[77.54, 76.64, 80.42, 75.02]
Creation of Evenly Spaced Values
There are functions provided by Numpy to create evenly spaced values within a given interval: 'arange' uses a given step distance, while 'linspace' takes the number of elements and computes the distance automatically.

import numpy as np

# compare to range:
x = range(1, 10)
print(x)          # x is an iterator
print(list(x))

# some more arange examples:
x = np.arange(10.4)
print(x)
x = np.arange(0.5, 10.4, 0.8)
print(x)
x = np.arange(0.5, 10.4, 0.8, int)
print(x)

The last call, with dtype int, returns:

[ 0  1  2  3  4  5  6  7  8  9 10 11 12]

'linspace' returns evenly spaced numbers over an interval; called as np.linspace(1, 10) it produces 50 values from 1 to 10:

[ 1.          1.18367347  1.36734694  ...  9.81632653  10.        ]
Time Comparison between Python Lists and Numpy Arrays
One of the main advantages of NumPy is its advantage in time compared to standard Python. Let's look at the following functions:
import time

size_of_vec = 1000

def pure_python_version():
    t1 = time.time()
    X = range(size_of_vec)
    Y = range(size_of_vec)
    Z = []
    for i in range(len(X)):
        Z.append(X[i] + Y[i])
    return time.time() - t1

def numpy_version():
    t1 = time.time()
    X = np.arange(size_of_vec)
    Y = np.arange(size_of_vec)
    Z = X + Y
    return time.time() - t1
Let's call these functions and see the time consumption:
t1 = pure_python_version()
t2 = numpy_version()
print(t1, t2)
print("Numpy is in this example " + str(t1/t2) + " faster!")
0.0002090930938720703 2.0503997802734375e-05 Numpy is in this example 10.19767441860465 faster!
An easier and, above all, better way to measure the times is to use the timeit module. We will use the Timer class in the following script.
The constructor of a Timer object takes a statement to be timed, an additional statement used for setup, and a timer function. Both statements default to 'pass'.
The statements may contain newlines, as long as they don't contain multi-line string literals.
import numpy as np
from timeit import Timer

size_of_vec = 1000

def pure_python_version():
    X = range(size_of_vec)
    Y = range(size_of_vec)
    Z = []
    for i in range(len(X)):
        Z.append(X[i] + Y[i])

def numpy_version():
    X = np.arange(size_of_vec)
    Y = np.arange(size_of_vec)
    Z = X + Y

#timer_obj = Timer("x = x + 1", "x = 0")
timer_obj1 = Timer("pure_python_version()",
                   "from __main__ import pure_python_version")
timer_obj2 = Timer("numpy_version()",
                   "from __main__ import numpy_version")
print(timer_obj1.timeit(10))
print(timer_obj2.timeit(10))
0.0022348780039465055 6.224898970685899e-05
Creating Arrays
Zero-dimensional Arrays in Numpy
One-dimensional Arrays
Two- and Multidimensional Arrays
Shape of an Array:
Assigning a new shape fails if the total number of elements would change. Here x does not contain 16 elements:

x.shape = (4, 4)

The previous code returned the following:

---------------------------------------------------------------------------
ValueError                                Traceback (most recent call last)
<ipython-input-81-5c4497921b8c> in <module>()
----> 1 x.shape = (4, 4)

ValueError: total size of new array must be unchanged
Let's look at some further examples.
The shape of a scalar is an empty tuple:
x = np.array(11) print(np.shape(x))
()
B = np.array([ [[111, 112], [121, 122]], [[211, 212], [221, 222]], [[311, 312], [321, 322]] ]) print(B.shape)
(3, 2, 2)
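For contrast with the ValueError above, a shape assignment that keeps the total number of elements succeeds. A small sketch:

```python
import numpy as np

x = np.arange(16)   # 16 elements
x.shape = (4, 4)    # 4 * 4 == 16, so this is allowed
print(x.shape)
print(x[2, 3])      # row 2, column 3 -> 11
```

Reshaping never copies the data; it only changes how the same flat buffer is interpreted, which is why the element count must stay fixed.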
Indexing and Slicing
Assigning to and accessing the elements of an array is similar to other sequential data types of Python, i.e. lists and tuples. We also have many indexing options, which make indexing in NumPy very powerful and similar to core Python.
Single indexing is the way, you will most probably expect it:
F = np.array([1, 1, 2, 3, 5, 8, 13, 21])

# print the first element of F, i.e. the element with the index 0
print(F[0])
# print the last element of F
print(F[-1])

B = np.array([ [[111, 112], [121, 122]],
               [[211, 212], [221, 222]],
               [[311, 312], [321, 322]] ])
print(B[0][1][0])
1 21 121
Indexing multidimensional arrays:
A = np.array([ [3.4, 8.7, 9.9],
               [1.1, -7.8, -0.7],
               [4.1, 12.3, 4.8] ])
print(A[1][0])
1.1
We accessed the element in the second row, i.e. the row with the index 1, and the first column (index 0). We accessed it the same way we would have done with an element of a nested Python list.
There is also an alternative: We use only one pair of square brackets and all the indices are separated by commas:
print(A[1, 0])
1.1
Be aware that the second way is more efficient. In the first case, we create an intermediate array A[1] from which we access the element with the index 0. So it behaves similar to this:
tmp = A[1]
print(tmp)
print(tmp[0])

[ 1.1 -7.8 -0.7]
1.1

In the following examples, A is a one-dimensional array and B is a reshaped view of A that shares its data. Looking at the data attribute returns something surprising:
print(A.data) print(B.data) print(A.data == B.data)
<memory at 0x7fe3b458dd90> <memory at 0x7fe3b45a9e48> False
Let's check now on equality of the arrays:
print(A == B)
False
Which makes sense, because they are different arrays concerning their structure:
print(A) print(B)
[42 1 2 3 4 5 6 7 8 9 10 11] [[42 1 2 3] [ 4 5 6 7] [ 8 9 10 11]]
But we saw that if we change an element of one array the other one is changed as well. This fact is reflected by may_share_memory:
np.may_share_memory(A, B)

This gets us the following output:
True
The result above is "false positive" example for may_share_memory in the sense that somebody may think that the arrays are the same, which is not the case.
Arrays of Ones and of Zeros
Copying Arrays
numpy.copy()
ndarray.copy()
Identity Array
An identity array is a square array with ones on its main diagonal. There are two ways to create identity array.
- identy
- eye
The identity Function
We can create identity arrays with the function identity:
identity(n, dtype=None)
The parameters:
The output of identity is an 'n' x 'n' array with its main diagonal set to one, and all other elements are 0.
import numpy as np
np.identity(4)

The above code returned the following result:
array([[ 1., 0., 0., 0.], [ 0., 1., 0., 0.], [ 0., 0., 1., 0.], [ 0., 0., 0., 1.]])
np.identity(4, dtype=int)  # equivalent to np.identity(4, int)

The above Python code returned the following:
array([[1, 0, 0, 0], [0, 1, 0, 0], [0, 0, 1, 0], [0, 0, 0, 1]])
The eye Function
Another way to create identity arrays is provided by the function eye:

eye(N, M=None, k=0, dtype=float)

eye returns a 2-dimensional array with N rows and M columns (M defaults to N), with ones on a diagonal and zeros everywhere else. The parameter k shifts the diagonal: positive values move it above the main diagonal, negative values below it.

np.eye(5, 8, k=1, dtype=int)

The above Python code returned the following output:

array([[0, 1, 0, 0, 0, 0, 0, 0],
       [0, 0, 1, 0, 0, 0, 0, 0],
       [0, 0, 0, 1, 0, 0, 0, 0],
       [0, 0, 0, 0, 1, 0, 0, 0],
       [0, 0, 0, 0, 0, 1, 0, 0]])
http://python-course.eu/numpy.php
|
Step-by-step Tutorial: A simple IoT solution to collect your environmental data
A step-by-step tutorial that anyone can build easily.
In this day and age, there are so many surveillance systems out there in the market that can help you track your environment no matter where you are. Their main problem, however, is usually the price. After all, equipment with functions such as Internet access, making calls and tracking various parameters of the surroundings is very complicated.
But what if someone tells you that even a student can create such a system and make it work? Surely many would find it a joke. But in this article I will show you that it is much easier than you think it is.
The key idea of this post is to demonstrate that even ordinary things can be used in more complex and unique situations than you think.
Here I will use some temperature and humidity sensors as an example, and absolutely any module can be connected in the same way as shown.
For example, I have only the Grove Beginner Kit for Arduino, which only costs you 30 dollars. However, if you feel that you do not need all the sensors included, you can limit yourself to buying a Seeeduino Nano (if you need only one Grove port) for 10 dollars and a Grove Temperature and Humidity sensor for 6 dollars. In addition, you will need a computer (in my case it is Windows, but in theory C # will work everywhere). A paid (or at least free) web server is also required.
What for? Because the data about, for example, a cat lying on the temperature sensor must be displayed somewhere. And which device does everyone always have with them? Of course, it's our smartphones. And which application is always active and notifies you quickly? Of course, it's our messenger apps.
I chose Telegram as it is my main messenger, and secondly because writing bots for it is very simple.
There are 4 main steps to building this:
1. Hardware
Seeeduino Lotus with Temperature and humidity sensor
Everything is simple here – we just need to connect the sensor to the DIGITAL port (yes, it matters) using the Grove cable. That’s all.
2. Seeeduino software
We need to make a sketch that would send data to the serial port of the computer via USB. This is the simplest option.
Here is a well-commented code for you.
#include "DHT.h" //connecting library #define DHTPIN 2 //setting variable #define DHTTYPE DHT11 //selecting our type DHT dht(DHTPIN, DHTTYPE); //putting it together void setup() { SERIAL.begin(9600); //connecting to serial port @9600 baud Wire.begin(); //connecting sensor to board dht.begin(); //inititating sensor } void loop() { float temp_hum_val[2] = {0}; //creating massive with 2 items if(!dht.readTempAndHumidity(temp_hum_val)){//if sensor is online SERIAL.print("Humidity: "); //output mark SERIAL.print(temp_hum_val[0]); //output data SERIAL.println(""); //separator SERIAL.print("Temperature: "); //output mark SERIAL.print(temp_hum_val[1]); //output data } else{ //if sensor NOT online SERIAL.println("Failed to get temprature and humidity value."); //saying "oops" } delay(3600000); //waiting ONE WHOLE HOUR (1000ms*60s*60m) }
3. PC Client app
For me, this is probably the hardest part in the whole project. I have been writing bots in Telegram for a long time and have been familiar with Arduino programming, but this is the first time that I used C #. What was new to me was how to connect to the server from the command line and receive data from the serial port not through the Arduino IDE. No, probably the opposite – first get the data and only then send it))
Here is the code with comments. Feel free to copy it, edit it, use it.
using System;
using System.IO.Ports;
using System.Threading;
using System.Net;
using System.IO;

namespace IoT
{
    class Program
    {
        static SerialPort _serialPort;

        public static void Main()
        {
            Console.WriteLine("Started");       // boot message
            Console.SetWindowSize(16, 1);       // do I need a bigger window? :D
            Console.SetCursorPosition(0, 0);    // we must see the text

            _serialPort = new SerialPort();     // initiating
            _serialPort.PortName = "COM3";      // your port may vary
            _serialPort.BaudRate = 9600;        // haha, classic
            _serialPort.Open();                 // listen! listen to it!

            int i = 0;                          // let's make an incremental counter
            while (true)                        // endless, brainless
            {
                string a = _serialPort.ReadExisting();  // this is our output from seeeduino
                if (a != "")  // we must not do a request every second, only if we have new data
                {
                    string link = "" + a;  // ask me if you want to see how it works with me
                    i++;                   // connections: one more
                    HttpWebRequest request = (HttpWebRequest)WebRequest.Create(link);   // sending
                    HttpWebResponse response = (HttpWebResponse)request.GetResponse();  // we don't use it
                    Stream resStream = response.GetResponseStream();                    // useless too
                    Console.Clear();  // cls
                    Console.WriteLine("Connections: " + i + ".");  // notify user that we are alive
                    Console.SetWindowSize(16, 1);                  // again
                    Console.SetCursorPosition(0, 0);               // again
                }
                Thread.Sleep(1000);  // wait 1 s
            }
            // somehow, without this my app crashes - though VS flags it as unreachable code
            Console.ReadLine();
        }
    }
}
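The client's core job is simple: take whatever line arrives from the board and append it to a request URL. A minimal sketch of that URL-building step in Python (the base URL here is a placeholder - the original code leaves the real server address blank - and the serial-reading part, e.g. via pyserial, is omitted):

```python
from urllib.parse import quote

def build_link(base, reading):
    # The C# code concatenates the raw reading onto a base URL;
    # URL-encoding the reading avoids problems with spaces and colons.
    return base + quote(reading)

# hypothetical server endpoint, for illustration only
link = build_link("https://example.com/report?data=", "Temperature: 25.30")
print(link)
```

Note that the original C# version sends the reading unencoded, which happens to work for simple sensor strings but would break on characters like `&` or `#`.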
4. Server software
The problem with this section is that it makes no sense to describe the building of a bot and giving its full code here; there are lots of manuals with detailed descriptions on how to create bots on the Internet, and the information on the bot will probably take up half of the article despite the fact that the function itself takes only a couple of lines. Therefore, I will show only a short excerpt here. By the way, I wrote entirely in PHP and did not use any libraries.
$seed = $_REQUEST['seed'];  // collecting a product name
$text = $_REQUEST['text'];  // collecting a message
if ($seed != null && $text != null) {  // if we have both of them
    // send a request to our special chat
    apiRequestJson("sendMessage", array(
        'chat_id' => -1001329261925,
        'text' => "Feedback from $seed: $text"
    ));
}
So…
If everything is correct, the cat did not cut off the Internet cable and the server on the network is working, a notification will come to the selected channel. It looks like this:
Here are a couple of tips. Do not start the application before you plug in the board – it will crash instantly. Also, do not try to upload the firmware to the board while the application is active – the Arduino IDE will give an error, because this channel is busy.
Last but not least – I do not advise setting the update frequency more often than half an hour for such projects. After all, no one needs annoying notifications, right?
Keep moving forward, Makers!
http://www.seeedstudio.com/blog/2019/09/23/step-by-step-tutorial-a-simple-iot-solution-to-collect-your-environmental-data/
|
Before starting
- Make sure you have a robot ready to use. Otherwise, read NAO - Out of the box.
- Make sure Python and Python SDK are installed on your computer. If it is not the case, see: Python SDK Install Guide.
How it works
This script uses the say method of the ALTextToSpeech module. ALTextToSpeech is the module of NAOqi dedicated to speech. The say method makes the robot pronounce the string given in parameter.
For further details about this module, see ALTextToSpeech.
Let’s explain the 3 lines you wrote:
from naoqi import ALProxy
This line imports the module ALProxy.
tts = ALProxy("ALTextToSpeech", "<IP of your robot>", 9559)
This line creates an object called tts. This object will send calls to NAOqi.
- "ALTextToSpeech" is the name of the NAOqi module you want to use.
- IP and Port (9559) of the robot are also specified (it was not the case with Choregraphe).
tts.say("Hello, world!")
This line uses the object tts to send an instruction to the NAOqi module.
- tts is the object we use.
- say() is the method.
- “Hello, world!” is the parameter.
What you have learned
To make the robot do something, you have to:
- Import the module ALProxy.
- Create an object giving access to one of the NAOqi modules.
- Call one of its available methods.
Outside Choregraphe, IP and Port are mandatory parameters of proxy().
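The three steps (import the module, create a proxy, call a method) follow the same shape for any NAOqi module. As an illustration only - using a stand-in class instead of the real naoqi package, which needs an actual robot to connect to - the pattern looks like this:

```python
# Stand-in for naoqi.ALProxy, for illustration; the real class
# opens a network connection to the robot at the given IP and port.
class ALProxy(object):
    def __init__(self, module, ip, port):
        self.module, self.ip, self.port = module, ip, port

    def say(self, text):
        # the real method would make the robot speak; here we just report
        return "%s@%s:%d says: %s" % (self.module, self.ip, self.port, text)

tts = ALProxy("ALTextToSpeech", "192.168.1.10", 9559)  # hypothetical robot IP
print(tts.say("Hello, world!"))
```

With the real SDK, only the import line changes (`from naoqi import ALProxy`); the proxy creation and method call are identical.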
To go further
- If you are not familiar with Python language, you should go through the tutorial included in Python distribution.
- To learn how to execute Python scripts on your computer or on your robot, see: Python tutorials.
- To discover NAOqi API, its modules and methods, see: NAOqi APIs.
Next step
Python is an interpreted language, so it is far slower than a compiled language like C++.
If C++ has no secret for you, or if you strongly want to learn it, Install the C++ SDK.
For further details, see: C++ SDK Installation.
https://developer.softbankrobotics.com/nao-naoqi-2-1/naoqi-developer-guide/getting-started/hello-worlds/hello-world-4-using-python
|
Aliases and typedefs (C++)
You can use an alias declaration to declare a name to use as a synonym for a previously declared type. (This mechanism is also referred to informally as a type alias). You can also use this mechanism to create an alias template, which can be particularly useful for custom allocators.
Syntax
using identifier = type;
Remarks
identifier
The name of the alias.
type
The type identifier you are creating an alias for.
An alias does not introduce a new type and cannot change the meaning of an existing type name.
The simplest form of an alias is equivalent to the typedef mechanism from C++03:
// C++11
using counter = long;

// C++03 equivalent:
// typedef long counter;
Both of these enable the creation of variables of type "counter". Something more useful would be a type alias like this one for std::ios_base::fmtflags:
// C++11
using fmtfl = std::ios_base::fmtflags;

// C++03 equivalent:
// typedef std::ios_base::fmtflags fmtfl;

fmtfl fl_orig = std::cout.flags();
fmtfl fl_hex = (fl_orig & ~std::cout.basefield) | std::cout.showbase | std::cout.hex;
// ...
std::cout.flags(fl_hex);
Aliases also work with function pointers, but are much more readable than the equivalent typedef:
// C++11
using func = void(*)(int);

// C++03 equivalent:
// typedef void (*func)(int);

// func can be assigned to a function pointer value
void actual_function(int arg) { /* some code */ }
func fptr = &actual_function;
A limitation of the typedef mechanism is that it doesn't work with templates. However, the type alias syntax in C++11 enables the creation of alias templates:
template<typename T> using ptr = T*;

// the name 'ptr<T>' is now an alias for pointer to T
ptr<int> ptr_int;
Example
The following example demonstrates how to use an alias template with a custom allocator—in this case, an integer vector type. You can substitute any type for int to create a convenient alias to hide the complex parameter lists in your main functional code. By using the custom allocator throughout your code you can improve readability and reduce the risk of introducing bugs caused by typos.
#include <stdlib.h>
#include <new>

template <typename T>
struct MyAlloc
{
    typedef T value_type;

    MyAlloc() { }
    template <typename U> MyAlloc(const MyAlloc<U>&) { }

    bool operator==(const MyAlloc&) const { return true; }
    bool operator!=(const MyAlloc&) const { return false; }

    T * allocate(const size_t n) const
    {
        if (n == 0)
        {
            return nullptr;
        }
        if (n > static_cast<size_t>(-1) / sizeof(T))
        {
            throw std::bad_array_new_length();
        }
        void * const pv = malloc(n * sizeof(T));
        if (!pv)
        {
            throw std::bad_alloc();
        }
        return static_cast<T *>(pv);
    }

    void deallocate(T * const p, size_t) const
    {
        free(p);
    }
};

#include <vector>
using MyIntVector = std::vector<int, MyAlloc<int>>;

#include <iostream>
int main ()
{
    MyIntVector foov = { 1701, 1764, 1664 };
    for (auto a: foov) std::cout << a << " ";
    std::cout << "\n";
    return 0;
}
1701 1764 1664
Typedefs
A typedef declaration introduces a name that, within its scope, becomes a synonym for the type given by the type-declaration portion of the declaration.
In contrast to the class, struct, union, and enum declarations, typedef declarations do not introduce new types — they introduce new names for existing types.

Names declared using typedef occupy the same namespace as other identifiers (except statement labels). Therefore, they cannot use the same identifier as a previously declared name, except in a class-type declaration. Consider the following example:
// typedef_names1.cpp
// C2377 expected
typedef unsigned long UL;   // Declare a typedef name, UL.
int UL;                     // C2377: redefined.
The name-hiding rules that pertain to other identifiers also govern the visibility of names declared using typedef. Therefore, the following example is legal in C++:
// typedef_names2.cpp
typedef unsigned long UL;   // Declare a typedef name, UL
int main() {
    unsigned int UL;        // Redeclaration hides typedef name
}                           // typedef UL back in scope
Re-declaration of typedefs
The typedef declaration can be used to redeclare the same name to refer to the same type. For example:
// FILE1.H
typedef char CHAR;

// FILE2.H
typedef char CHAR;

// PROG.CPP
#include "file1.h"
#include "file2.h"   // OK
The program PROG.CPP includes two header files, both of which contain typedef declarations for the name CHAR. As long as both declarations refer to the same type, such redeclaration is acceptable.
A typedef cannot redefine a name that was previously declared as a different type. Therefore, if FILE2.H contains
// FILE2.H
typedef int CHAR;   // Error
the compiler issues an error because of the attempt to redeclare the name CHAR to refer to a different type. This extends to constructs such as:
typedef char CHAR;
typedef CHAR CHAR;       // OK: redeclared as same type

typedef union REGS       // OK: name REGS redeclared
{                        // by typedef name with the
    struct wordregs x;   // same meaning.
    struct byteregs h;
} REGS;
typedefs in C++ vs. C
Use of the typedef specifier with class types is supported largely because of the ANSI C practice of declaring unnamed structures in typedef declarations. For example, many C programmers use the following:
// typedef_with_class_types1.cpp
// compile with: /c
typedef struct {
    // Declare an unnamed structure and give it the
    // typedef name POINT.
    unsigned x;
    unsigned y;
} POINT;
The advantage of such a declaration is that it enables declarations like:
POINT ptOrigin;
instead of:
struct point_t ptOrigin;
In C++, the difference between typedef names and real types (declared with the class, struct, union, and enum keywords) is more distinct. Although the C practice of declaring a nameless structure in a typedef statement still works, it provides no notational benefits as it does in C.
// typedef_with_class_types2.cpp
// compile with: /c /W1
typedef struct {
    int POINT();
    unsigned x;
    unsigned y;
} POINT;
The preceding example declares a class named POINT using the unnamed class typedef syntax. POINT is treated as a class name; however, the following restrictions apply to names introduced this way:
The name (the synonym) cannot appear after a class, struct, or union prefix.
The name cannot be used as a constructor or destructor name within a class declaration.
In summary, this syntax does not provide any mechanism for inheritance, construction, or destruction.
https://docs.microsoft.com/en-us/cpp/cpp/aliases-and-typedefs-cpp?view=msvc-160&viewFallbackFrom=vs-2019
On Tue, Sep 25, 2012 at 01:58:29PM -0700, Paul Eggert wrote:
> Sorry, I don't see a bug there. "gcc -std=gnu99"
> accepts ISO C11, in the sense that it passes all the
> C11 tests that we have, if your version of GCC is
> sufficiently new. This is because the GCC supports
> these C11 features even when running in C99 mode.

The only compiler for which autoconf currently knows how to set it into
C11 mode passes the test in C99 mode. That sounds a bit odd to me.

> If there's some C11 feature that is missing,
> a feature that it's reasonable to expect from C11
> compilers, we could add that to the test, and this
> will cause 'configure' to say "no" rather than "yes".
> I did briefly try to think of such a feature but
> came up dry.

What about

#if !defined(__STDC_VERSION__) || (__STDC_VERSION__ < 201112L)
#error compiler is not in C11 mode
#endif

cu
Adrian

--
"Is there not promise of rain?" Ling Tan asked suddenly out
of the darkness. There had been need of rain for many days.
"Only a promise," Lao Er said.
                                  Pearl S. Buck - Dragon Seed
https://lists.gnu.org/archive/html/bug-autoconf/2012-09/msg00035.html
Non-blocking I/O using Servlet 3.1: Scalable applications using Java EE 7 (TOTD #188)
By arungupta on Nov 27, 2012
Servlet 3.0 allowed asynchronous request processing but only traditional I/O was permitted. This can restrict scalability of your applications. In a typical application, ServletInputStream is read in a while loop.
public class TestServlet extends HttpServlet {
    protected void doGet(HttpServletRequest request, HttpServletResponse response)
            throws IOException, ServletException {
        ServletInputStream input = request.getInputStream();
        byte[] b = new byte[1024];
        int len = -1;
        while ((len = input.read(b)) != -1) {
            . . .
        }
    }
}

If the incoming data is blocking or streamed slower than the server can read, then the server thread is waiting for that data. The same can happen if the data is written to ServletOutputStream.
This is resolved in Servlet 3.1 (JSR 340, to be released as part of Java EE 7) by adding event listeners: the ReadListener and WriteListener interfaces. These are registered using ServletInputStream.setReadListener and ServletOutputStream.setWriteListener. The listeners have callback methods that are invoked when content is available to be read or can be written without blocking.
The updated doGet in our case will look like:
AsyncContext context = request.startAsync();
ServletInputStream input = request.getInputStream();
input.setReadListener(new MyReadListener(input, context));
Invoking the setXXXListener methods indicates that non-blocking I/O is to be used instead of traditional I/O. At most one ReadListener can be registered on a ServletInputStream, and similarly at most one WriteListener can be registered on a ServletOutputStream. ServletInputStream.isReady and ServletInputStream.isFinished are new methods to check the status of a non-blocking read, and ServletOutputStream.canWrite is a new method to check whether data can be written without blocking. The MyReadListener implementation looks like:
@Override
public void onDataAvailable() {
    try {
        StringBuilder sb = new StringBuilder();
        int len = -1;
        byte b[] = new byte[1024];
        while (input.isReady()
                && (len = input.read(b)) != -1) {
            String data = new String(b, 0, len);
            System.out.println("--> " + data);
        }
    } catch (IOException ex) {
        Logger.getLogger(MyReadListener.class.getName()).log(Level.SEVERE, null, ex);
    }
}

@Override
public void onAllDataRead() {
    System.out.println("onAllDataRead");
    context.complete();
}

@Override
public void onError(Throwable t) {
    t.printStackTrace();
    context.complete();
}
This implementation has three callbacks:
- The onDataAvailable callback method is called whenever data can be read without blocking.
- The onAllDataRead callback method is invoked when the data for the current request has been completely read.
- The onError callback is invoked if there is an error processing the request.
context.complete() is called in onAllDataRead and onError to signal the completion of the data read.
For now, the first chunk of available data needs to be read in the doGet or service method of the Servlet. The rest of the data can be read in a non-blocking way using the ReadListener after that. This is going to get cleaned up so that all data reads can happen in the ReadListener only.
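The callback style above is the same event-driven pattern used by non-blocking I/O APIs in other ecosystems. As a rough analogy only (this is not the Servlet API), here is a minimal Python asyncio.Protocol sketch in which data_received plays the role of onDataAvailable, eof_received plays the role of onAllDataRead, and connection_lost with a non-None exception plays the role of onError:

```python
import asyncio

class ReadListenerAnalogy(asyncio.Protocol):
    """Rough analogy to Servlet 3.1's ReadListener: the event loop calls
    data_received() only when bytes are available, so no thread blocks."""

    def __init__(self):
        self.chunks = []
        self.finished = False

    def data_received(self, data):      # ~ onDataAvailable
        self.chunks.append(data)

    def eof_received(self):             # ~ onAllDataRead
        self.finished = True
        return False                    # let the transport close itself

    def connection_lost(self, exc):     # ~ onError when exc is not None
        if exc is not None:
            print("error:", exc)

# The callbacks can be driven directly, without a real socket:
p = ReadListenerAnalogy()
p.data_received(b"hello ")
p.data_received(b"world")
p.eof_received()
print(b"".join(p.chunks).decode(), p.finished)  # hello world True
```

In both models, the runtime (servlet container or event loop) owns the read loop and invokes your callbacks; your code never sits in a blocking read.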
The sample explained above can be downloaded from here and works with GlassFish 4.0 build 64 and onwards.
The slides and a complete re-run of What's new in Servlet 3.1: An Overview session at JavaOne is available here.
Here are some more references for you:
- Java EE 7 Specification Status
- Servlet Specification Project
- JSR Expert Group Discussion Archive
- Servlet 3.1 Javadocs
Nice.
Posted by guest on November 28, 2012 at 01:36 AM PST #
Thanks Arun,
Nice post!!
excited enough to get started with servlet 3.1
Posted by Amit Phaltankar on November 28, 2012 at 03:13 AM PST #
when I read this first time first thought was "are they trying to do something similar to nodejs ? is this something like acknowledgement of nodejs way of doing IO" :)
I think java really will be amazing when Lambda Project a.k.a. JSR 335 will be available.
Posted by Akshay Ransing on November 28, 2012 at 11:28 AM PST #
When you say the first chunk needs to be read in the doGet method, what does this (temporary) implementation look like?
Posted by Jim Cheesman on November 28, 2012 at 12:47 PM PST #
Jim,
It'll be similar to the implementation in onDataAvailable. But that does not follow DRY.
The EG has agreed that this code need to be specified at one place and that would be onDataAvailable method only. So this is only an interim work around, probably for the next few builds of GlassFish only.
Posted by Arun Gupta on November 28, 2012 at 12:51 PM PST #
The title reads '...Scalable applications using Java EE 7', does it mean that jee7 will be made to run on multiple core when the app is deployed? Just like the other dynamic languages(Scala,Groovy etc)
Posted by guest on November 29, 2012 at 07:45 AM PST #
Scalability comes from the non-blocking I/O as compared to blocking I/O earlier. No special configuration is required in the application server to run across multiple cores.
Posted by Arun Gupta on December 02, 2012 at 06:04 PM PST #
Regarding:
For now, the first chunk of available data need to be read in the doGet or service method of the Servlet. Rest of the data can be read in a non-blocking way using ReadListener after that. This is going to get cleaned up where all data read can happen in ReadListener only.
Might be tidier to read the first block some time between the ReadListener being created and the first callback being made. I.e. MyReadListener.create(input, context); have create instantiate and do the first blocking read. That way it's all outside of doGet and to adapt for when it's all async, all the code is in one place (instead of across all your servlets)
Posted by joel on December 19, 2012 at 09:40 AM PST #
I'm a bit confused about the claim that the servlet 3 spec requires blocking io.
ServletInputStream input = request.getInputStream();
byte[] b = new byte[1024];
int len = -1;
while ((len = input.read(b)) != -1) {
. . .
}
where does it say in the spec that input.read(b) has to block (as in wait for data to arrive in the socket's receive buffer)? InputStream is essentially an interface, why can't the ServletInputStream returned be "backed" by e.g. a byte[] or a ByteBuffer or some other abstraction which has been filled from the original http request by a non-blocking read(s) of the client socket?
This would not change the semantics in any way.
I think you're confusing non-blocking with asynchronous reads (which is what your example code demonstrates).
Could you point to a specific section of the Servlet 3.0 spec that proscribes what I've suggested?
Posted by guest on March 24, 2013 at 12:41 PM PDT #
https://blogs.oracle.com/arungupta/entry/non_blocking_i_o_using
There's no reason not to switch to DocBlox
- phpDocumentor is based on PHP 4, and not developed anymore (the last release was in 2008). It's the good old phpdoc command.
- doxygen is a mature choice that supports many languages, and it's not written in PHP (with some hacks it supports even JavaScript.)
- DocBlox is a PHP 5.3 compliant, actively developed tool. It's already used in Zend Framework and Agavi. It was created by Mike Van Riel, a Dutch PHP developer whom I met at the last DPC, where he held a DocBlox talk at the Uncon.
A PHP tool, faster than doxygen in implementing new features, and actively developed: these are the factors that made me choose DocBlox as my new default Api documentation mean.
Installation
The requirements for DocBlox are PHP 5.3 with the XSL extension enabled. Additional extensions will be required for additional functionalities such as graphs.
sudo apt-get install php5-xsl   # in Debian-based Linux distributions
sudo pear channel-discover pear.docblox-project.org
sudo pear channel-discover pear.michelf.com   # some dependencies
sudo pear install docblox/DocBlox-beta
At the time of this writing, DocBlox 0.13.3 will be installed by these commands.
Features (from the docs)
Performance is one key advantage of DocBlox. It parses Zend Framework (version 1.1 in this benchmark) in less than 90 seconds; it has low memory usage (< 50 MB) and implements incremental parsing by only accessing changed files since the last execution. On a small-sized project, it's blazingly fast.
DocBlox has PHP 5.3 support: namespaces are recognized; scopes and other PHP 5 constructs are a default.
The user interface produced by DocBlox contains a JavaScript search and allows for independent theming and templating: multiple skins and multiple layouts. It has support for custom Writer implementations to transform the parsed XML structure into something other than HTML.
Demo
DocBlox's own demo shows that it has a user experience more advanced and current than phpDocumentor's old earthli template.
You can also see an example on real code, by taking a look at Zend Framework's Api documentation.
Trying it out
mkdir apidocs
docblox run -d . -t apidocs
For the basic use case, that's it: parse the current folder by selecting only the PHP files and produce a result in the apidocs/ folder.
You can save the command in a Phing target, or introduce a docblox.dist.xml file to store the configuration:
<?xml version="1.0" encoding="UTF-8" ?>
<docblox>
  <parser>
    <target>apidocs</target>
  </parser>
  <transformer>
    <target>apidocs</target>
  </transformer>
  <files>
    <directory>.</directory>
  </files>
</docblox>
With this file in place, you can just run docblox in the folder containing it.
User interface
The default theme, which by the way will be the one used by the majority of the projects, is usable enough and good-looking. It shows:
- a list of classes and files on the left.
- C/I icons of different colors for classes and interfaces.
- p/m icons for property and methods.
- Circles of different colors (green to red) for scope private to public.
JavaScript navigation works even when not served from an HTTP server but just as a folder loaded in the browser. However JavaScript search requires the Api docs to be loaded via HTTP from a PHP-capable web server (a virtual host in your local Apache configuration will do it.)
Without any docblocks present, DocBlox lists methods organized by logical and physical location (class and file), with their names and parameters.
With docblocks present, all metadata are extracted: fields are listed with type and default value; methods with their entire prototype (parameters with type and default if applicable, and return type of the method itself):
Conclusions
There's no reason not to choose DocBlox as the default for your project and abandon phpDocumentor. It is still a PHP-dedicated tool, written in PHP and distributed with an open source MIT license.
The docblock tags supported are exactly the same, so the only thing that changes is the command for generation; the support is even improved, as namespaces are correctly identified. Installation is available via PEAR and is easier than for phpDocumentor, both for your development machines and for your Continuous Integration server.
DocBlox's documentation (it is documentation on the tool, not on docblock syntax.)
(Note: Opinions expressed in this article and its replies are the opinions of their respective authors and not those of DZone, Inc.)
Filip Procházka replied on Tue, 2011/08/30 - 9:00am
Mario T. replied on Wed, 2011/08/31 - 12:51am
The generated manual pages definitely look nice, it's speedy and verbose when generating them. As far as CLI interfaces go it's also easy to use.
However having the frigging top-level namespace separators in the generated class diagrams even if you don't actually use any is a little turn off. Not sure if this is configurable, or how to add basic support for @magic methods. But maybe it'll mature some more..
Mike Van Riel replied on Thu, 2011/09/01 - 11:57pm
in response to:
Mario T.
Emma Watson replied on Fri, 2012/03/30 - 3:16am
Key advantages of the DocBlox are wonderful. Performance is the most admirable. In lesser than 90 seconds, Zend Framework is parsed. For smaller projects, it works efficiently and effectively.
PHP 5.3 support is incredible. Independent templating as well as theming is possible via the JavaScript search of DocBlox.
http://css.dzone.com/articles/theres-no-reason-not-switch
NAME
new_unrhdr, delete_unrhdr, alloc_unr, free_unr -- kernel unit number allocator
SYNOPSIS
#include <sys/systm.h>

struct unrhdr * new_unrhdr(int low, int high, struct mtx *mutex);
void delete_unrhdr(struct unrhdr *uh);
int alloc_unr(struct unrhdr *uh);
int alloc_unrl(struct unrhdr *uh);
void free_unr(struct unrhdr *uh, u_int item);
DESCRIPTION
The kernel unit number allocator is a generic facility which allows allocating unit numbers within a specified range.

new_unrhdr(low, high, mutex)
    Initialize a new unit number allocator entity. The low and high arguments specify the minimum and maximum unit numbers. There is no cost associated with the range of unit numbers, so unless the resource really is finite, INT_MAX can be used. If mutex is not NULL, it is used for locking when allocating and freeing units. Otherwise, an internal mutex is used.

delete_unrhdr(uh)
    Destroy the specified unit number allocator entity.

alloc_unr(uh)
    Return a new unit number. The lowest free number is always allocated. This function does not allocate memory and never sleeps; however, it may block on a mutex. If no free unit numbers are left, -1 is returned.

alloc_unrl(uh)
    Same as alloc_unr() except that the mutex is assumed to be already locked and thus is not used.

free_unr(uh, item)
    Free a previously allocated unit number. This function may require allocating memory, and thus it can sleep. There is no pre-locked variant.
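The lowest-free-number policy can be modeled outside the kernel. The following minimal Python sketch is an illustration only, not the FreeBSD implementation (which stores free runs compactly rather than in a heap); it mirrors the alloc_unr/free_unr semantics, including the -1 return on exhaustion:

```python
import heapq

class UnitAllocator:
    """Toy model of the kernel unit-number allocator: always hands out
    the lowest free unit in [low, high], returns -1 when exhausted."""

    def __init__(self, low, high):
        self.low, self.high = low, high
        self.next_fresh = low   # lowest never-allocated unit
        self.freed = []         # min-heap of units returned via free_unr

    def alloc_unr(self):
        if self.freed:
            return heapq.heappop(self.freed)   # lowest freed unit first
        if self.next_fresh > self.high:
            return -1                          # range exhausted
        unit = self.next_fresh
        self.next_fresh += 1
        return unit

    def free_unr(self, item):
        heapq.heappush(self.freed, item)

ua = UnitAllocator(0, 2)
print(ua.alloc_unr(), ua.alloc_unr())   # 0 1
ua.free_unr(0)
print(ua.alloc_unr())                   # 0 (lowest free again)
print(ua.alloc_unr(), ua.alloc_unr())   # 2 -1
```

Unlike the kernel version, this toy does not guard against double-freeing a unit and is not locked for concurrent use.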
CODE REFERENCES
The above functions are implemented in sys/kern/subr_unit.c.
HISTORY
Kernel unit number allocator first appeared in FreeBSD 6.0.
AUTHORS
Kernel unit number allocator was written by Poul-Henning Kamp. This manpage was written by Gleb Smirnoff.
http://manpages.ubuntu.com/manpages/oneiric/man9/new_unrhdr.9freebsd.html
Opened 10 years ago
Closed 8 years ago
Last modified 5 years ago
#2612 closed defect (fixed)
Fix admin formatting when help_text used with multiple fields on the same line
Description
If you have a case where you have set multiple fields to appear on the same line in the admin interface, the current version produces a line break when help_text is present, due to it being in a <p> element. To fix it in a half-assed sort of way, I wrapped the entire field in a <div style="float: left">. The diff is shown below:
Index: django/contrib/admin/templates/admin/field_line.html
===================================================================
--- django/contrib/admin/templates/admin/field_line.html  (revision 3669)
+++ django/contrib/admin/templates/admin/field_line.html  (working copy)
@@ -2,9 +2,11 @@
 <div class="{{ class_names }}" >
 {% for bound_field in bound_fields %}{{ bound_field.html_error_list }}{% endfor %}
 {% for bound_field in bound_fields %}
+    <div style="float: left;">
     {% if bound_field.has_label_first %}{% field_label bound_field %}{% endif %}
     {% field_widget bound_field %}
     {% if not bound_field.has_label_first %}{% field_label bound_field %}{% endif %}
     {% if bound_field.field.help_text %}<p class="help">{{ bound_field.field.help_text }}</p>{% endif %}
+    </div>
 {% endfor %}
 </div>
Ultimately you want a class which does this, which means changing the admin stylesheet, so you probably don't want to use this patch exactly.
The patch does not appear to affect the default case of stacked fields, as the field line itself is wrapped in another <div>.
Attachments (6)
Change History (27)
Changed 10 years ago by Andy Dustman <farcepest@…>
comment:1 Changed 9 years ago by SmileyChris
I can confirm this patch fixes the above problem. I would upload before / after screenshots but trac won't let me.
Probably needs some more testing to make sure this doesn't break any existing fields.
Changed 9 years ago by hakejam
comment:2 Changed 9 years ago by hakejam
- Needs tests set
- Triage Stage changed from Unreviewed to Accepted
Expanded on the idea above using the float-left css class that was already in the admin. I have tested it with a few examples, but still needs more testing. Not sure if this applies to the newforms-admin branch.
comment:3 Changed 9 years ago by hakejam
- Resolution set to fixed
- Status changed from new to closed
- Triage Stage changed from Accepted to Ready for checkin
comment:4 Changed 9 years ago by hakejam
- Keywords sprintsept14 added
comment:5 Changed 9 years ago by Simon G. <dev@…>
- Resolution fixed deleted
- Status changed from closed to reopened
It's not closed until it's checked in
comment:6 Changed 9 years ago by gwilson
- Patch needs improvement set
- Summary changed from [patch] Fix admin formatting when help_text used with multiple fields on the same line to Fix admin formatting when help_text used with multiple fields on the same line
- Triage Stage changed from Ready for checkin to Accepted
- Version changed from SVN to newforms-admin
This needs to be fixed in newforms-admin, as it will soon will be replacing the current admin.
comment:7 Changed 9 years ago by xian
- Keywords help_text added
- Owner changed from nobody to xian
- Status changed from reopened to new
Assigning to myself to make sure it gets attended to in newforms-admin. We're allowing stacked inlines to take full fieldsets, so there will be edge cases there to work out.
comment:8 Changed 8 years ago by Karen Tracey <kmtracey@…>
- Keywords nfa-someday added; sprintsept14 help_text removed
Formatting improvement for a problem that was noted in old admin, should not block newforms-admin merge.
comment:9 Changed 8 years ago by programmerq
- milestone set to 1.0
Changed 8 years ago by brosner
comment:10 Changed 8 years ago by brosner
I attached a patch that applies on the latest trunk (r8388). I am not a UI designer nor really that great with front-end work. However, this does fix the problem of the line break, but doesn't seem 100% ideal. Also if there is no help_text on the first field in a two field grouping, but help_text on the second one, then the latter's help_text is placed under the first. Ideally someone with better skills in this area is required. :)
comment:11 Changed 8 years ago by brosner
comment:12 Changed 8 years ago by msaelices
I've tried to test this ticket, but I've not found right admin options for placing two fields in same line. I've been trying with this, but I didn't find the correct CSS class for placing into same line:
# model:
class Person(models.Model):
    name = models.CharField(max_length=128, help_text="Please input the name")
    date_joined = models.DateTimeField()

# admin:
class PersonAdmin(admin.ModelAdmin):
    fieldsets = (('Avanced options',
                  {'classes': '????????',  # what CSS class?
                   'fields': ('person', 'date_joined')}),)
What CSS class we use? Is correct this approach to reproduce UI bug?
comment:13 Changed 8 years ago by msaelices
Sorry for previous comment, correct last line was:
'fields': ('name', 'date_joined')}),)
comment:14 Changed 8 years ago by mtredinnick
- Version changed from newforms-admin to SVN
@msaelices: don't worry about testing this. Jacob and I sat down with a designer (Nathan Borror) last week and discussed this. He's going to look at some solutions that will work. We'll use whatever he comes up with. (By the way, the feature you were looking for is fieldsets and then tuples of fields).
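For readers following along: the "tuples of fields" feature mentioned above means nesting a tuple inside the fields entry. A minimal sketch of the data structure (the django import is omitted so the shape itself can be shown; the field names are just examples):

```python
# Shape of ModelAdmin.fieldsets: an outer tuple of (title, options) pairs.
# A nested inner tuple such as ('name', 'date_joined') asks the admin to
# render both fields on one line, which is what exposes this ticket's bug.
fieldsets = (
    ('Advanced options', {
        'fields': (
            ('name', 'date_joined'),   # two fields, one admin line
        ),
    }),
)

# The admin template iterates line by line; each line is a tuple of names:
for title, opts in fieldsets:
    for line in opts['fields']:
        print(title, line)
```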
comment:15 Changed 8 years ago by msaelices
@mtredinnick, ok. I'll wait :-P, but, I think fieldsets is just I put in my PersonAdmin options, isn't?
comment:16 Changed 8 years ago by brosner
- Resolution set to fixed
- Status changed from new to closed
Changed 8 years ago by nathan
This one's better
Changed 8 years ago by nathan
Admin inline float fix
comment:17 Changed 8 years ago by Alex
- Resolution fixed deleted
- Status changed from closed to reopened
Reopening since we now have a patch from nathan, a real designer :P
comment:18 Changed 8 years ago by jezdez
With all due respect patching a global CSS class like .float-left seems unreasonable, creating an own class should also work.
Also, please fix it for rtl-languages.
Changed 8 years ago by jezdez
Fix that doesn't patch float-left class and is rtl compatible.
comment:19 Changed 8 years ago by jacob
- Resolution set to fixed
- Status changed from reopened to closed
comment:20 Changed 7 years ago by anonymous
- Cc andy@… added
comment:21 Changed 5 years ago by jacob
- milestone 1.0 deleted
Milestone 1.0 deleted
Proper patch for the fix. Adds a form-row-item class and applies it with a div around every bound_field
https://code.djangoproject.com/ticket/2612
CppDepend is primarily a source code analyzer, with features geared towards making it easier to understand large code bases with complex interdependencies. In addition, it can integrate with static analyzers.

Built into CppDepend is the analyzer from Clang. As of version 5, it exposes all of the diagnostic messages that Clang provides. Because of the tight integration, Clang messages can be queried using CQLinq.

Other static code analyzers are incorporated by importing result files. This is configured using an XML-based file. Out of the box, configuration files are provided for CppCheck and CPD. Once imported, these results can also be queried using CQLinq.

New for version 5 is support for C and C++14. The C support required significant changes to how CppDepend presents its information. In previous versions it was based on "namespaces, types, methods", which didn't work for the directory/file-based organization found in C projects.
https://www.infoq.com/news/2014/10/CppDepend-5?utm_source=presentations_about_static-analysis&utm_medium=link&utm_campaign=static-analysis
Python vs Java – Who Will Conquer 2019?
Python vs Java: the hottest battle of the era. Every beginner wants to know which programming language will have a brighter future. According to statistics, Java is losing its charm and Python is rising. But no one will tell you which one is beneficial. In this blog, we will discuss the differences between Java and Python and let you decide which one is more useful.
Python Vs Java – A Battle for the Best
Let’s deep dive into the differences.
1. Hello World Example
To compare Python and Java rigorously, we start with the first program written in any programming language: printing "Hello World".
- Java
public class HelloWorld {
    public static void main(String[] args) {
        System.out.println("Hello World");
    }
}
- Python
Now, let’s try printing the same thing in Python.
print("Hello World")
As you can see, what takes several lines of code in Java can be done in a single line in Python.
Let’s further discuss parts of this program and other things.
2. Syntax
A striking characteristic of Python is simple python syntax. Let’s see the syntax difference between Python and Java.
2.1 Semicolon
Python statements do not need a semicolon to end, thanks to its syntax.
>>> x=7
>>> x=7;
But it is possible to append it. However, if you miss a semicolon in Java, it throws an error.
class one {
    public static void main (String[] args) {
        int x=7;
        System.out.println(x)
    }
}
Compilation error #stdin compilation error #stdout 0.09s 27828KB
Main.java:10: error: ‘;’ expected
System.out.println(x) ^
1 error
2.2 Curly Braces and Indentation
The major factor of Python Vs Java.
>>> if 2>1: print("Greater")
Greater
This code would break if we added curly braces.
>>> if 2>1: {
SyntaxError: expected an indented block
Now, let’s see if we can skip the curly braces and indentation in Java.
class one {
    public static void main (String[] args) {
        if(2>1)
            System.out.println("2");
    }
}
Success #stdin #stdout 0.07s 27792KB
2
Here, we could skip the braces because it’s a single-line if-statement. Indentation isn’t an issue here. This is because when we have only one statement, we don’t need to define a block. And if we have a block, we define it using curly braces. Hence, whether we indent the code or not, it makes no difference. Let’s try that with a block of code for the if-statement.
class one {
    public static void main (String[] args) {
        if(2<1)
            System.out.println("2");
        System.out.println("Lesser");
    }
}

Without braces, only the first println belongs to the if-statement, so "Lesser" is printed regardless of the indentation.
2.3 Parentheses
Starting Python 3.x, a set of parentheses is a must only for the print statement. All other statements will run with or without it.
>>> print("Hello")
Hello
>>> print "Hello"
SyntaxError: Missing parentheses in call to ‘print’
This isn’t the same as Java, where you must use parentheses.
2.4 Comments
Comments are lines that are ignored by the interpreter. Java supports multiline comments, but Python does not. The following are comments in Java.
//This is a single-line comment

/*This is a multiline comment
Yes it is*/
Now, let’s see what a comment looks like in Python.
>>> #This is a comment
Here, documentation comments can be used at the beginning of a function's body to explain what it does. These are declared using triple quotes (""").
>>> """
	This is a docstring
	"""
'\n\tThis is a docstring\n'
That was the syntax comparison for Python vs Java; let's discuss more.
3. Dynamically Typed
One of the major differences is that Python is dynamically-typed. This means that we don't need to declare the type of a variable; it is assumed at run time. This is called duck typing: if it looks like a duck, it must be a duck, mustn't it?
>>> age=22
You could reassign it to hold a string, and it wouldn’t bother.
>>> age='testing'
In Java, however, you must declare the type of data, and you need to explicitly cast it to a different type when needed. A type like int can be cast into a float, though, because int has a narrower range.
class one {
    public static void main (String[] args) {
        int x=10;
        float z;
        z=(float)x;
        System.out.println(z);
    }
}
Success #stdin #stdout 0.09s 27788KB
10.0
However then, at runtime, the Python interpreter must find out the types of variables used. Thus, it must work harder at runtime.
Java, as we have seen, is statically-typed. If you declare an int and assign a string to it, the compiler reports an incompatible-types error.
class one {
    public static void main (String[] args) {
        int x=10;
        x="Hello";
    }
}
Compilation error #stdin compilation error #stdout 0.09s 27920KB
Main.java:12: error: incompatible types: String cannot be converted to int
x="Hello"; ^
1 error
4. Verbosity/ Simplicity
Thanks to its simple syntax, a Python program is typically 3-5 times shorter than its counterpart in Java. As we saw earlier, printing "Hello World" to the screen takes several lines of code in Java; we do the same thing in Python in just one statement. Coding in Python therefore raises programmers' productivity, because they need to write only as much code as is needed. It is concise.
To prove this, we’ll try to swap two variables, without using a third, in these two languages. Let’s begin with Java.
class one {
    public static void main (String[] args) {
        int x=10,y=20;
        x=x+y;
        y=x-y;
        x=x-y;
        System.out.println(x+" "+y);
    }
}
Success #stdin #stdout 0.1s 27660KB
20 10
Now, let’s do the same in Python.
>>> a,b=2,3
>>> a,b=b,a
>>> a,b
(3, 2)
As you can see here, we only needed one statement for swapping variables a and b. The statement before it is for assigning their values and the one after is for printing them out to verify that swapping has been performed. This is a major factor of Python vs Java.
5. Speed
When it comes to speed, Java is the winner. Since Python is interpreted, we expect Python programs to run slower than their Java counterparts. They are also slower because types are resolved at run time, which is extra work for the interpreter. The Python interpreter follows a REPL (Read-Evaluate-Print Loop). Also, IDLE has built-in syntax highlighting, and to recall the previous and next commands, we press Alt+p and Alt+n respectively.
However, Python programs are also quicker to develop, thanks to Python's brevity. Therefore, in situations where speed is not an issue, you may go with Python, as the benefits it offers more than make up for its speed limitations. However, in projects where speed is the main concern, you should go for Java. An example of such a project is one where you need to retrieve data from a database. So if you weigh Python vs Java as far as speed is concerned, Java wins.
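If you want to measure this kind of difference yourself, you can time a small piece of code. A minimal sketch using Python's built-in timeit module (the statement and repeat count here are arbitrary choices for illustration):

```python
import timeit

# Time a simple summation expression; interpreted execution plus runtime
# type resolution is why this is slower than an equivalent compiled loop.
elapsed = timeit.timeit("sum(range(1000))", number=10_000)
print(f"10,000 runs took {elapsed:.3f}s")
```

Running the equivalent loop in Java and comparing wall-clock times gives a concrete feel for the speed gap on your own machine.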
6. Portability
Both Python and Java are highly portable languages. But due to the extreme popularity of Java, it wins this battle. The JVM (Java Virtual Machine) can be found almost everywhere. In the Python Vs Java war of Portability, Java wins.
7. Database Access
Like we’ve always said, Python’s database access layers are weaker than Java’s JDBC (Java DataBase Connectivity). This is why enterprises rarely use it in critical database applications.
8. Interpreted
With tools like IDLE, you can also interpret Python instead of compiling it. While this reduces the program length and boosts productivity, it also results in slower overall execution.
9. Easy to Use
Because of its simplicity, shorter code, and dynamic typing, Python is easy to pick up. If you’re just stepping into the world of programming, beginning with Python is a good choice. Not only is it easy to code, but it is also easy to understand; readability is another advantage. The same can’t be said of Java: because it is so verbose, it takes some time to really get used to.
10. Popularity and Community
If we consider the popularity and community factor for Python vs Java, we see that for the past few decades, Java has been the 2nd most popular language (TIOBE Index). It has been here since 1995 and has been the ‘Language of the Year’ in the years 2005 and 2015. It works on a multitude of devices- even refrigerators and toasters.
Python, in the last few years, has been in the top 3, and was titled ‘Language of the Year’ in years 2007, 2010, and 2018. Python has been here since 1991. Can we just say it is the easiest to learn? It is a great fit as an introductory programming language in schools. Python is equally versatile with applications ranging from data science and machine learning to web development and developing for Raspberry Pi.
While Java has one large corporate sponsor, Oracle, Python is open-source (CPython) and relies on distributed community support.
11. Use Cases
Python: Data Science, Machine Learning, Artificial Intelligence, and Robotics, Websites, Games, Computer Vision (Facilities like face-detection and color-detection), Web Scraping (Harvesting data from websites), Data Analysis, Automating web browsers, Scripting, Scientific Computing
Java: Application servers, Web applications, Unit tests, Mobile applications, Desktop applications, Enterprise applications, Scientific applications, Web and Application Servers, Web Services, Cloud-based applications, IoT, Big Data Analysis, Games
12. Best for
While Python is best for Data Science, AI, and Machine Learning, Java does best with embedded and cross-platform applications.
13. Frameworks
Python: Django, web2py, Flask, Bottle, Pyramid, Pylons, Tornado, TurboGears, CherryPy, Twisted
Java: Spring, Hibernate, Struts, JSF (Java Server Faces), GWT (Google Web Toolkit), Play!, Vaadin, Grails, Wicket, Vert.x
14. Preferability for Machine Learning and Data Science
Python is easier to learn and has simpler syntax than Java. It is better for number crunching, whereas Java is better for general programming. Both have powerful ML libraries- Python has PyTorch, TensorFlow, scikit-learn, matplotlib, and Seaborn, and Java has Weka, JavaML, MLlib, and Deeplearning4j
Similarities of Python and Java
Besides differences, there are some similarities between Python and Java:
- In both languages, almost everything is an object
- Both offer cross-platform support
- Strings are immutable in both- Python and Java
- Both ship with large, powerful standard libraries
- Both are compiled to bytecode that runs on virtual machines
This was all about the difference between Python vs Java Tutorial.
Summary
So, after all that we’ve discussed here in this Python vs Java tutorial, we conclude that both languages have their own benefits. It really is up to you to choose one for your project. While Python is simple and concise, Java is fast and more portable. While Python is dynamically-typed, Java is statically-typed. Both are powerful in their own realms, but we want to know which one you prefer. Furthermore, if you have any query or question, feel free to share it with us!
So now you know which of Java and Python is best for your project. Install Python on Windows if you are willing to go ahead with Python.
Java has greatly changed since version 1.8. With lambdas, streams, default methods, code has become less verbose, concise and started favoring parallel computation. With no second thought, Java / Scala would be my choice.
Hi G.Sridhar,
Thank you for sharing such a nice piece of information on our Python vs Java tutorial. We have a series of Scala tutorials as well; do refer to them too.
Keep learning and keep visiting Data Flair
You have tons of false claims in this comparison. How do you compare the speed of two languages? Under what circumstances? Just another click bait
Hey Osman,
Python, as mentioned above, is an interpreter-based language. An interpreter-based language being slower than a compiled one is the first conclusion anyone would draw. Java excels at speed when compared with Python. This is the main reason why it is preferred as a server-side computing language. Java is also statically typed, as opposed to Python, which is dynamically typed. There is plenty of evidence, and there are benchmarks, that bolster the claim that Java executes programs faster than Python.
Hope, it helps!
This article will make use of prebuilt data science modules such as Pandas, Matplotlib and Scikit-learn to build an efficient model. First, I’ll start with a brief introduction to different terms in the data science and machine learning space, then move the focus to Python coding so that you can actually start building your own machine learning model.
Machine Learning
As the name indicates, making machines learn what humans can do is machine learning. It’s all about making computers and applications learn and become decisive without explicitly programming all the possibilities. Based on known data or various possibilities with correct answers provided to the algorithms, the computer should yield the solutions to a given problem when the answer is not known.
In my previous article, I gave a granular view of components involved in machine learning which might help you to get a conceptual understanding of how Data, Model and Algorithms are interconnected.
Data Science
At its heart, data science is about turning the data into value. Data science can be thought of as the application for finding certain patterns in data and through that pattern deduce the outcome for the future problem at hand. It’s a combination of data mining and computer science. Initially, data mining was done using statistics, but with the help of data science, it’s mainly done programmatically. The powerful programming languages such as Python and R provide support to various scientific computing packages that leverage building statistics-focused models to predict the solutions.
As the name suggests, data science is all about data. There are various steps involved from collecting the data to processing and analysing the data. At each step, the different actors/roles come into play as shown in the table below:
Many data professionals, including DBAs and ETL developers, are familiar with most of these steps as well!
Linear Regression
Linear regression is a core concept in data science. It is a statistical technique used whenever there is a need to make a prediction, model a phenomenon or discover the relationships between things. It is used for finding the relationship between two continuous variables: one of them is an independent variable, and the other is a dependent variable. Linear regression is used for testing the hypothesis. The core idea is to find the relationship between the two variables and obtain a line that best fits the data. The best line is the one for which the most predictions are close to correct, meaning the chance of error is very low.
Here’s an example to help you understand linear regression. Assume that you are given the data for all the past movie productions: the movie budget and the revenue that they collected through the box office or any other sources. Now, imagine that you want to produce a movie and you want to predict from previous movie successes how much money your movie will make.
Given the data about various successful high budget films such as Avatar, Avengers, Titanic, etc., you can perform a hypothesis and try to understand where your movie fits. You are essentially going to build the best line (green line in the image below) that will help you predict how much revenue the movie can make given the budget of the film.
Through the budget value (X) for the movie, you can predict how much revenue (Y) the movie is going to make by just making a line from budget onto the best line (green line).
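The “best line” described here is the least-squares fit. A minimal sketch with NumPy — the budget/revenue figures below are made up purely for illustration, not taken from the article’s dataset:

```python
import numpy as np

# Hypothetical (budget, revenue) pairs in millions of USD -- illustrative only
budgets = np.array([10.0, 50.0, 100.0, 200.0, 300.0])
revenues = np.array([30.0, 160.0, 310.0, 620.0, 940.0])

# np.polyfit(x, y, 1) returns the [slope, intercept] of the least-squares line
slope, intercept = np.polyfit(budgets, revenues, 1)

def predict_revenue(budget):
    """Read the predicted revenue off the fitted line Y = slope*X + intercept."""
    return slope * budget + intercept

print(predict_revenue(150.0))
```

Given a budget X, the prediction is simply the point on the fitted line above X.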
Requirements
Many languages such as Python, R, and Scala provide support for data science by bringing together statistics, data analysis and related strategies to understand and analyse data. This article will show how to use Python to analyse the data. Python has long been known as an easy-to-learn programming language from a syntax point of view. It provides extensive support for statistical and data science related libraries such as NumPy, SciPy, Scikit-learn, and Keras. It also has an active community with a vast selection of libraries and resources, which makes Python the first choice for many data scientists.
Jupyter Notebook is an incredible tool that provides an easy way to execute the Python code. This article will use the browser version of Jupyter Python Notebooks. Click on Try Classic Notebook after you go to this link.
Editor’s note: you can also use the Jupyter Notebook feature found in Azure Machine Learning Studio, Azure Data Studio, or Azure Machine Learning Services.
This will open a new Python notebook in the browser where you can write Python commands and see the results.
Note: The browser version of Jupyter Notebook sometimes gets disconnected if it is kept idle for a long time. You may try downloading Anaconda and, after the installation completes, opening Jupyter Notebook from it. This will let you run Jupyter notebooks on the local computer without connectivity issues.
Before writing some interesting Python commands and cooking something, you need to gather and clean the ingredients for the recipe.
Start building the model
To create a successful machine learning model, you need to follow some steps:
- Formulate the question
- Gather and clean the data
- Visualise the data
- Train the algorithm
- Evaluate the result based on the requirements.
To solve the problem, you are going to follow these steps:
Formulate a question
The question comes from the movie budget and revenue example you saw earlier: “How much money/revenue is the movie going to make?”
Gather data
To perform the analysis on the data, you need the movie budget in USD and movie revenue in USD. You can use this website to gather the data. All you have to do is download the data and open it in Excel for your research. (To make it easier, you can download the data from here as well.)
Clean the data
The next step is cleaning the wrong data. You might have noticed that the data in the Excel sheet contains a $0 amount in some cases.
The reason for this might be because the movie dates are in the future or the movie never came out. There might be many more reasons to have a $0 amount there, so for now, delete these $0 rows so that they don’t cause any false failures in the analysis and focus on the ones which have concrete results.
As discussed before, the focus will be just on the two columns production budget and worldwide gross because these are the columns that you will plot on the graph. After cleaning the data, removing the $ signs and renaming column names, this is how my Movie_Revenue_Edited looks:
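The cleaning steps above (drop the $0 rows, strip the $ signs, rename the columns) can be sketched with pandas. The raw column names and the tiny inline table below are assumptions standing in for the downloaded sheet:

```python
import pandas as pd

# Tiny stand-in for the downloaded sheet; the real file's column names may differ
raw = pd.DataFrame({
    "Production Budget": ["$425,000,000", "$306,000,000", "$15,000,000"],
    "Worldwide Gross":   ["$2,783,918,982", "$2,058,662,225", "$0"],
})

def clean(df):
    # Rename columns to the names used later in the article
    df = df.rename(columns={"Production Budget": "production_budget",
                            "Worldwide Gross": "worldwide_gross"})
    # Strip "$" and "," and convert the strings to numbers
    for col in ("production_budget", "worldwide_gross"):
        df[col] = df[col].str.replace("[$,]", "", regex=True).astype(float)
    # Drop the $0-gross rows (unreleased or future-dated films)
    return df[df["worldwide_gross"] > 0].reset_index(drop=True)

movies = clean(raw)
print(movies)
```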
Explore and Visualize
Now it’s time to visualise how the production budget and worldwide gross are related to each other. To do so, import the .csv file now so that you can do some magic on it. For this, click on the Jupyter logo, and it will take you to a screen. Click on Upload to upload the Movie_Revenue_Edited.csv file.
The next step is to start with a fresh notebook. In the Jupyter notebook, go to the File Menu-> New Notebook -> Python 3. This will open a new instance of Python notebook for you. I have renamed my notebook to My Movie Prediction. (You can also download the completed Movie Linear Regression Notebook.)
Now, to access the csv file from the notebook, you need the Pandas module. Pandas is a prebuilt data science library that lets you do fast data analysis as well as data cleaning. It is built on top of a famous scientific computing library called NumPy. Pandas works with a wide variety of data sources such as Excel, CSV, and SQL files. In each cell, you can write either markup or code, and you can select a cell with code and run it to get the results right in the notebook.
Here’s an example of importing the file and displaying the data (be sure to enter the code into the individual cells as shown in the image):
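The screenshot is not reproduced here, so the import can be sketched as below. To keep the sketch runnable anywhere, it first writes a tiny stand-in file; in the notebook you would simply upload the real Movie_Revenue_Edited.csv instead:

```python
import pandas as pd

# Write a tiny stand-in file so the example runs anywhere;
# in the notebook you would upload the real Movie_Revenue_Edited.csv instead.
with open("Movie_Revenue_Edited.csv", "w") as f:
    f.write("production_budget,worldwide_gross\n"
            "425000000,2783918982\n"
            "306000000,2058662225\n")

data = pd.read_csv("Movie_Revenue_Edited.csv")
print(data.head())   # head() displays the first rows of the DataFrame
print(len(data))     # number of rows loaded
```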
The next step is to load the data into the X and Y axes for the plot. X is going to be production_budget and Y will contain the worldwide_gross from the datasheet. To serve this purpose, you will have to map the csv data into rows and columns. This can be achieved using the Pandas DataFrame: a two-dimensional, heterogeneous tabular data structure with labelled axes, i.e., rows and columns. The DataFrame class must be imported before using it in the code, which is very similar to the way you import packages in Java and C#. Go back to the cell where you imported the Pandas library, add the new from-import line, and rerun the cell.
Now, to get the data loaded into the X and Y axes, you will load X with production_budget and Y with worldwide_gross. Make sure you provide the same column names as in your input csv data. The code will look something like this:
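Since the code in the screenshot is not reproduced, here is a sketch of that step (the small inline DataFrame stands in for the loaded csv):

```python
import pandas as pd

# Stand-in for the DataFrame loaded from Movie_Revenue_Edited.csv
data = pd.DataFrame({"production_budget": [425000000, 306000000, 15000000],
                     "worldwide_gross":   [2783918982, 2058662225, 45000000]})

# Keep X and Y as single-column DataFrames; scikit-learn expects 2-D features
X = pd.DataFrame(data, columns=["production_budget"])
Y = pd.DataFrame(data, columns=["worldwide_gross"])

print(X.shape, Y.shape)   # both are (rows, 1)
```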
Now that you have successfully separated the data, you can visualise it. For this, you will need to import another module called Matplotlib, which has a rich library of graphing and plotting functionality; you will use its pyplot feature. Just add the import statement to pull in the correct module, and make sure you hit the Run button whenever you write new code, to execute the cell.

In a new cell, you will write code to print the plot. You will use a scatter plot here, as it helps you see the correlation between the two variables. To display the plot, you will use the pyplot.show() method.
To make the chart more readable, annotate the X and Y axes. This can be done using pyplot’s xlabel and ylabel methods.
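A runnable sketch of the scatter plot with labelled axes. The notebook uses pyplot’s scatter/xlabel/ylabel/show directly; this variant selects a headless backend and saves to a file so it also runs outside a notebook, and the numbers are illustrative:

```python
import matplotlib
matplotlib.use("Agg")   # headless backend; drop this line inside a notebook
import matplotlib.pyplot as plt

budgets = [10, 50, 100, 200, 300]   # illustrative figures, millions of USD
grosses = [30, 160, 310, 620, 940]

fig, ax = plt.subplots()
ax.scatter(budgets, grosses)                      # scatter shows the correlation
ax.set_xlabel("Production Budget ($ millions)")   # pyplot.xlabel in the notebook
ax.set_ylabel("Worldwide Gross ($ millions)")     # pyplot.ylabel in the notebook
fig.savefig("budget_vs_gross.png")                # plt.show() in a notebook
```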
Train the algorithm
Now you can run the regression on the plot to analyse the results. The main goal here is to achieve a straight line or the line of predicted values that would act as a reference to analyse any future predictions. As you might have realised by now, there are several modules that provide different functionality. To run the regression, you will use Scikit-learn which is a very popular machine learning module. Back in the import cell, add the new line to import linear regression from the Scikit-learn module and rerun.
Scikit-learn helps you create a linear regression model. Since the task of running the linear regression is done by an object, you will need to create a new object, in this case named regressionObject. The fit method fits the regression model to the data; in other words, it makes the model learn from the training data. Use the fit method as shown below.
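The fitting step in the screenshot can be sketched like this (with made-up training data in place of the movie dataset):

```python
import numpy as np
from sklearn.linear_model import LinearRegression

X = np.array([[10.0], [50.0], [100.0], [200.0], [300.0]])  # budgets, 2-D as sklearn expects
Y = np.array([30.0, 160.0, 310.0, 620.0, 940.0])           # grosses

regressionObject = LinearRegression()
regressionObject.fit(X, Y)   # learn the slope (coef_) and intercept (intercept_)

print(regressionObject.coef_[0], regressionObject.intercept_)
```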
Once your model is trained with the training dataset, you can predict values for Y using the regression object. The predict method returns a predicted Y for each X. So yPredicted will be equal to regressionObject.predict(X), and yPredicted is then used to draw the regression line onto the plot. You will notice that I have used green for the regression line, which shows up in the plot successfully. Change the previous cell so that it includes the plot.
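Putting predict and the green regression line together, a self-contained sketch (headless backend and made-up data, as before):

```python
import numpy as np
import matplotlib
matplotlib.use("Agg")   # headless backend; not needed inside a notebook
import matplotlib.pyplot as plt
from sklearn.linear_model import LinearRegression

X = np.array([[10.0], [50.0], [100.0], [200.0], [300.0]])  # made-up budgets
Y = np.array([30.0, 160.0, 310.0, 620.0, 940.0])           # made-up grosses

regressionObject = LinearRegression().fit(X, Y)
yPredicted = regressionObject.predict(X)   # one predicted gross per budget

plt.scatter(X, Y)                          # the raw data points
plt.plot(X, yPredicted, color="green")     # the green regression line
plt.savefig("regression_line.png")         # plt.show() in a notebook
```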
Analyse
As you can see from the plot, there is a positive relationship between the two values: as the production budget increases, so does the worldwide gross. This means the rate of change of variable Y is proportional to the change in X. When the regression line is linear, the equation of the line is Y = aX + b, where a is the regression coefficient (the slope of the line), which signifies how Y varies with changes in the values of X.

A positive regression coefficient (a) tells you that there is a positive relationship between X and Y. The coefficient value can be read from the coef_ property on the regression object. For this model, the regression coefficient is 3.11, which means that for each USD spent on the movie production, you should get $3.11 in return.
The next step is to calculate b, the intercept of the line. This can be done using the intercept_ property on the regression object.
The generalized formula for a line is Y = aX + b. Now consider a hypothetical scenario where you want to predict the worldwide revenue of a movie made on a $20 million production budget. The estimate can be found by substituting the values into the equation.
The above calculation can be done using the Python notebook as below:
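The notebook screenshot is not reproduced, so here is a sketch of that substitution. The slope 3.11 comes from the article; the intercept shown in the missing screenshot is not reproduced here, so a placeholder value is used purely for illustration:

```python
a = 3.11           # regression coefficient from the article: ~$3.11 back per $1 of budget
b = -7_000_000     # hypothetical intercept, NOT the article's actual value

def predicted_gross(budget):
    """Substitute the budget into Y = aX + b."""
    return a * budget + b

print(predicted_gross(20_000_000))   # estimated worldwide gross for a $20M budget
```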
The important thing to note here is the model is a hypothetical analysis of the data provided. The predictions are not 100% accurate, but there is a high possibility that the predictions would turn out to be true. Keep in mind that the model is a dramatic simplification of the real world.
Summary
This article provided an introduction to the concepts of machine learning, data science and linear regression. It demonstrated how to build and analyse a machine learning linear regression model through various steps, which eventually enables you to predict the outcome for practical problems.
Why do we expect different exceptions? I think this test
reveals an incompatibility and should just be fixed to expect the same exception
Thanks,
Mikhail
2006/11/23, Ivanov, Alexey A <alexey.a.ivanov@intel.com>:
> Yeah, I remember about TestNG. Yet I think it won't solve all the cases
> where isHarmony used.
>
> For example, look at the tests in
>
> The isHarmony() method is used in if-else context there which
> demonstrates the difference between Harmony and RI. And mostly it is
> if-else context that isHarmony() is used.
>
> Regards,
> --
> Alexey A. Ivanov
> Intel Enterprise Solutions Software Division
>
>
> >-----Original Message-----
> >From: Mikhail Loenko [mailto:mloenko@gmail.com]
> >Sent: Thursday, November 23, 2006 2:39 PM
> >To: dev@harmony.apache.org
> >Subject: Re: [classlib][test] isHarmony method in the swing tests
> >
> >We are going to switch to TestNG.
> >
> >So we will be able to handle all that stuff there, won't we?
> >
> >Thanks,
> >Mikhail
> >
> >2006/11/23, Ivanov, Alexey A <alexey.a.ivanov@intel.com>:
> >> Mikhail,
> >>
> >> Here it's not a temporary solution.
> >>
> >> javax.swing.text.PlainViewI18N is for bidirectional text support. It
> is
> >> a package-private class, and it's not present in public API spec.
> >>
> >> Sun doesn't reveal its implementation of bidirectional text. I guess
> >> it's not fully implemented yet: there are problems with it. What I can
> >> remember at once is you can't go through all the text using right or
> >> left arrows on keyboard because the caret jumps back.
> >>
> >> In general this method is used to differentiate our implementation
> from
> >> Sun. These differences are intentional. To make the tests pass both
> on
> >> RI and Harmony, it is checked which classlib is used. Also looking at
> >> the tests one sees the expected difference.
> >>
> >> Regards,
> >> Alexey.
> >>
> >> P.S. We can get rid of using this method and sort out the tests to
> >> separate implementation specific tests, but it requires lots of
> effort.
> >> On the other hand, some tests will lose the information about the
> >> difference. Subsequent releases of Java may change the behavior and
> >> we'll see it because of failing tests. This way we can adjust our
> >> implementation to the new RI impl.
> >>
> >> --
> >> Alexey A. Ivanov
> >> Intel Enterprise Solutions Software Division
> >>
> >>
> >> >-----Original Message-----
> >> >From: Mikhail Loenko [mailto:mloenko@gmail.com]
> >> >Sent: Thursday, November 23, 2006 10:22 AM
> >> >To: dev@harmony.apache.org
> >> >Subject: [classlib][test] isHarmony method in the swing tests
> >> >
> >> >Did I understand correctly that it's a temporary solution to
> >> >differentiate between
> >> >"api" and "impl" tests?
> >> >
> >> >package javax.swing.text;
> >> ><...>
> >> >public class PlainViewI18N_LineViewTest extends SwingTestCase {
> >> ><...>
> >> > public void testGetPreferredSpan01() throws Exception {
> >> > if (!isHarmony()) {
> >> > return;
> >> > }
> >>
>
Volity::Player - Volity players, from a referee's perspective.
An object of this class represents a Volity player present at a table. The referee creates one of these objects for every player who comes to that ref's table. The player might not actually play the game (i.e. sit in a seat), but is nonetheless recognized by the referee as a potential game player and table presence.
In certain circumstances a ref may choose to keep an object for a given player persistent, even after that player leaves the table, while other times the player's departure results in the object's destruction. Generally, it just does the right thing.
You should never need to create or destroy player objects yourself; the referee object takes care of that. However, there are a number of methods defined by Volity::Referee and Volity::Seat that return player objects, so you may find yourself interacting with them anyway.
Volity::Game subclasses refer to seats more often than to individual players, since most game interaction takes place at the seat level.
This class defines two kinds of object methods: accessors to basic, informational attributes about the player, and triggers to send RPC methods to the player's client.
Consider these methods as read-only accessors to attributes that the referee sets. (Well, you can write to them if you'd like, but I can't predict what might happen if you do, so don't.)
This player's full JID.
The player's JID, minus the resource part.
This player's MUC nickname.
The Volity::Referee object of the table this player is at.
1 if this player is a referee-created bot, 0 otherwise.
The Volity::Seat object this player occupies. undef if the player isn't sitting or missing.
A Volity::Seat object that's most appropriate to use when sending state to the player, preventing suspension-mode state-snooping. See the send_game_state_to_player method documented in Volity::Game.
1 if the player has abruptly vanished while sitting at an active game, 0 otherwise.
These methods all send volity-namespaced RPC methods to the player's client.
Generally, you shouldn't have to call any of these yourself. The ref takes care of all this stuff.
Sends the RPC request "game.$function(@args)" from the referee to the player's client.
Note that in most cases, you'll actually want to call UI functions on the seat level, not on individual players. Luckily, the Volity::Seat class defines this same method, which works in the same manner. (To tell you the truth, it just mirrors the call_ui_function call to all of the player objects it holds...)
Updates the player about the game state and seats, sending it the proper volity RPCs.
Jason McIntosh <jmac@jmac.org>
From: Thorsten Ottosen (nesotto_at_[hidden])
Date: 2004-05-04 19:29:31
Hi Rob.
Thanks for your review.
| I'd prefer "primary specialization" to "default" in the comments.
ok. Is that the correct term?
| Lines longer than 80 columns in a number of files.
| detail/common.hpp is particularly hard to view in 80 columns.
is that a requirement? If so, I will change it.
| I question the value of separating iterator_of and
| const_iterator_of in separate files given how similar they are.
Some like it...others don't. I guess I will use the bigger headers mostly.
| Wrong comment on std::pair specialization of result_iterator_of.
thanks.
| Why isn't result_iterator_of implemented in terms of iterator_of
| and const_iterator_of? As it stands, it duplicates the code in
| iterator.hpp and const_iterator.hpp. Collocating their
| implementation would reduce the chances of maintenance errors.
True. But the implementation is so simple that it hardly helps
to put the code through another layer of templates. Doing so might
give longer compiles, and some people care about minimal headers.
| Couldn't the primary specialization of size_type be to use SFINAE
| to check for a nested size_type type or else std::size_t? You
| wouldn't need any specializations then.
I guess it could. My only problem is that I don't know how portable
this is. AFAIK, detecting nested typedefs only works with Comeau.
| "Sise" [sic] is misspelled in sizer.hpp. Shouldn't sizer.hpp be
| in detail?
yes. maybe. I don't find it very useful. But if people do, I can put
BOOST_COMPILE_TIME_ARRAY_SIZE() macro in detail/array_size.hpp.
| Indentation in the detail files could be lessened with "namespace
| boost { namespace collection_traits_detail {".
yes.
| The Introduction is missing a motivation section. It goes from a
| short description to an example. It should help the reader
| understand why the library is valuable.
IMO the introduction is the motivation. what would you like to see in a motivation section?
| The Introduction is missing discussion of the use of namespace
| scope functions to do what heretofore would be done via member
| functions. Instead, the example and the sentence immediately
| following it imply this change of syntax.
Wouldn't it only duplicate stuff in the two links to CollectionConcept and ExternalConcepts?
| Using the namespace
| scope functions is central to the library, so it should be stated
| early and clearly.
ok. I should add something about extending the lib + how to rely on ADL.
| Documentation Comments
Thanks. I will look into them when I update the docs.
br
Thorsten
Boost list run by bdawes at acm.org, gregod at cs.rpi.edu, cpdaniel at pacbell.net, john at johnmaddock.co.uk
Amazon Alexa Skills Development with Azure Active Directory and ASP.NET Core 1.0 Web API
Developer
In this post, Premier ADM, Rob Reilly, walks us through building Alexa Skills using Azure AD and ASP.NET Core Web API.
Background
Amazon Alexa is a technology developed by Amazon that takes voice commands, interprets them, and then takes action by sending requests on to APIs to perform a multitude of tasks limited only by your imagination. Amazon offers various devices that take requests from the user and act on them; examples are the Tap and the Echo, which are essentially smart speakers. Bundling this functionality into an offering is known as creating an Alexa Skill, and some examples of things you can do with this technology are streaming music, calling for an Uber, getting news and weather, and controlling home automation devices. Custom Alexa Skills can be developed by third parties. The components of an Alexa Skill from the Amazon Alexa documentation are:
- Alexa developer portal.
There are multiple options for creating custom Alexa Skills, and in this post I will focus on using the Alexa portal for the skill's configuration and ASP.NET Core 1.0 Web API for the cloud-based service portion. There are also occasions when you will want to authenticate the Alexa device calling your Alexa cloud service. Alexa Skills can use what is known as account linking to allow your Alexa device to connect to the API using a credential managed by an external identity provider such as an on-prem ADFS or Azure Active Directory. Basically, account linking binds the Amazon account used for Alexa to these external identity providers through a process where you authenticate to the provider and grant the Alexa application permission to use your account in those systems for authentication and authorization to the Skills APIs you develop. This is done through an OAuth 2 flow. In this post I am focusing on the use of Azure Active Directory as the identity provider, with the OAuth 2 grant type known as Authorization Code Grant. The Authorization Code grant is, at its base, an authentication/authorization process where the user authenticates to the identity provider and is granted an authorization code. This code is then presented back to the same identity provider (or perhaps another server), where the authorization code is checked; an access token (used to access other systems) and a refresh token (used to refresh the access token when it expires) are then presented back to the calling application. The Alexa application the end user ties to their Alexa devices performs this authorization code grant and stores the access and refresh tokens for use from an Alexa Skill when Alexa is asked to use a particular skill. This post is not an in-depth discussion of OAuth 2, Alexa Skills development or ASP.NET Core.
The main purpose of this post is to show how I was able to get Alexa, Azure Active Directory and an ASP.NET Core Web API to work together using the built-in authentication and authorization middleware and standard techniques for locking down a web API. For more detailed information on these topics, refer to their associated documentation.
The Problems
Despite everyone following the OAuth 2 standard, there always seem to be instances where vendors interpret the standard differently. Interpretations can be perfectly within the standard yet still not work together. In addition to some implementation mismatches, there were other issues getting Azure Active Directory to accept the authorization code grant request from the Alexa account-linking infrastructure. Once these issues were addressed and the account linking had completed successfully, making calls from Alexa to the secured ASP.NET Core Web API backend still did not work. Upon further reading of the Alexa documentation, the JWT access token from an Alexa device is not placed in the HTTP Authorization header as ASP.NET Core expects; it is actually part of the request envelope sent by Alexa. This means you can either use frameworks like AlexaSkillsKit.NET or you can create some custom middleware for ASP.NET Core that pulls the access token out of the HTTP request body and adds the Authorization header with the Bearer token obtained from the body. I chose the latter approach. My reason for going this route is that it allows me to benefit from the built-in authorization support in ASP.NET Core, specifically the use of the [Authorize] attribute, and also to leverage the fact that the access-token validation functionality comes from a built-in ASP.NET Core library, so odds are it will be more up to date and likely more robust than AlexaSkillsKit.NET. However, figuring out how to pull this access token from the request body and add it as an Authorization header got a little tricky as well. I found out after some hair pulling that unless you take certain steps when you interrogate the request body, it won't be passed along the pipeline. So initially I got the authorization to work but lost my payload to the method being called in the Web API controller. I'll show how this was addressed in the step-by-step tutorial that follows.
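The token-promotion idea behind that middleware can be shown as a conceptual sketch. This is Python rather than the post's C#, and the envelope field names (session.user.accessToken, context.System.user.accessToken) are assumptions based on the shape of the Alexa request envelope:

```python
def promote_alexa_token(envelope, headers):
    """Copy the account-linking access token from an Alexa request body
    into a standard Authorization header so downstream auth can use it."""
    user = (envelope.get("session", {}).get("user", {})
            or envelope.get("context", {}).get("System", {}).get("user", {}))
    token = user.get("accessToken")
    if token:
        headers["Authorization"] = "Bearer " + token
    return headers

# Hypothetical, truncated envelope shaped like an account-linked Alexa request
envelope = {"session": {"user": {"accessToken": "eyJhbGciOi..."}}}
print(promote_alexa_token(envelope, {}))
```

The real middleware must also rewind/buffer the request body after reading it, which is exactly the pipeline pitfall described above.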
What’s needed before we begin
In order to complete this tutorial, you are going to need the following:
- Access to an Azure Subscription
- Azure Active Directory Domain
- Application set-up for Alexa front end
- Application set-up for Web Api
- Azure App Service to host the Alexa Skill Web API
- Amazon Account
- Alexa Development Portal
- Alexa Application to manage your skills.
- Visual Studio 2015 Community Edition with updates for ASP.NET Core development
- Alexa Device for Testing
- You can test in the Alexa Developers portal but it might help to test a real world device.
- An alternative way to test is to use the online Alexa Simulator
Step by Step Tutorial
Step 1 – Creating an Azure AD Directory (only if you don’t currently have one)
1. Login to the Azure classic portal and your Azure subscription ()
a. Despite considerable support being added to the new Azure portal for managing Azure AD, it appears directory creation is still only available in the classic portal
2. At the bottom of the portal select NEW>APP SERVICES>ACTIVE DIRECTORY>DIRECTORY>CUSTOM CREATE
3. At the dialog Add Directory select Create new directory and provide the requested information and click the Circle with check in it.
4. Go to the ACTIVE DIRECTORY ITEM in the management portal.
5. Make sure the newly created directory displays in the list of directories.
6. At this point you could add users and permissions, but to test this tutorial you can use your admin account (the one you logged into the Azure subscription with) to complete all additional Azure Active Directory configuration and Alexa account linking.
7. Login to the new Azure portal (). Since the new Azure portal is where we should be learning how to do things, I will do all the next steps from the new portal.
8. Select the new domain in the Azure portal. This is selected under your account in the upper right hand corner.
9. Select the Azure Active Directory Item in the portal to get to the management blade for this directory
10. At this point we can leave the portal open to this spot and move onto creating the web api and using this directory. We come back to here shortly.
Step 2 – Creating an ASP.NET Core 1.0 Web API with authentication
1. Open Visual Studio 2015
a. Make sure you have installed all the updates for .NET Core 1.0 and ASP.NET Core 1.0 (see the Links section for getting the tools and APIs)
b. Installing the Azure SDKs and tools makes it easier to manage Azure components through the Visual Studio Server Explorer
2. File>New>Project
3. Select ASP.NET Core Web Application (.NET Core)
4. Once you fill in the New Project information, click the OK button.
5. At the New ASP.NET Core Web Application dialog, select the Web API template.
6. Click the Change Authentication button
7. At the Change Authentication dialog select Work and School Accounts
a. Select the directory domain you used when we created the AD directory
b. Check Read directory data
i. This allows the application to read from the Azure AD Graph API. We won't need it for the simple authenticated-and-authorized scenario of this tutorial, but it could be used later
c. Expand More Options and update your App ID URI to.{domain}
i. This value can be any valid URI and is needed later as part of the Alexa configuration. (Copy this value off for later reference)
8. Click OK
9. You may be prompted to authenticate to your Azure portal as the admin account.
10. We'll be hosting this Alexa cloud service in Azure, so select the Host in the cloud check box.
11. Click OK
12. At the Create App Service dialog you can use the defaults or customize to your liking
13. Click Create
At this point Visual Studio has created a Web API application from the template and configured it to use our custom directory for managing user authentication credentials. It has also provisioned the application registration in Azure Active Directory. Let's look at the application setup that Visual Studio deployed into our Azure AD directory.
The application registration can be located in the following way:
- Open Azure Portal
- Select the Directory you are using
- Click on the Azure AD item button on the lower left of the Azure Portal
- Click on App registrations
- You will see our just created Web API application
- Click on the application registration to get to its configuration settings
We don't need to do any additional set-up at this point; this is just for reference. You would come back to this location if you needed to look up information like the Application ID or the App ID URI. The App ID URI can be located by looking at the manifest under identifierUris.
Now let's go back to our Web API and see how this information is configured in the application. The Azure AD information is configured into the application during startup. The properties for the set-up are kept in one file (appsettings.json) and bootstrapped into the application in another (Startup.cs). Let's look at appsettings.json.
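A minimal sketch of the relevant section is shown below. The key names match what Startup.cs reads later in this tutorial; the GUID, tenant, and audience values here are placeholders for illustration only, not real values:

```json
{
  "Authentication": {
    "AzureAd": {
      "AADInstance": "https://login.microsoftonline.com/",
      "ClientId": "00000000-0000-0000-0000-000000000000",
      "TenantId": "yourtenant.onmicrosoft.com",
      "Audience": "https://yourtenant.onmicrosoft.com/AlexaTutorialCloudService"
    }
  }
}
```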
These values should look familiar; they are what is configured in your application registration in Azure AD. The nice thing for us is that when we created the Web API project, all of this was generated for us. Now let's see how these properties are bootstrapped into the Web API by looking at Startup.cs.
In the Configure method of the Startup class the application is told to use the JwtBearerAuthentication middleware. The call to the middleware is passed the JwtBearerOptions, which pulls in the configured values from appsettings.json. The big thing to note is that the app.UseJwtBearerAuthentication() middleware is placed before app.UseMvc(). This is important because precedence in the processing pipeline follows the order in which the middleware is configured here. Since we need to authenticate before we can pass the credentials on to the MVC middleware, the authentication MUST be defined above the MVC middleware. This will become important because we will be creating a custom middleware that takes the Alexa access token from the HTTP POST body and maps it into the Authorization header. That custom middleware must be placed above the JwtBearerAuthentication middleware so that the header is available for processing by JwtBearerAuthentication.
Step 3 – Adding Alexa Skills functionality to the Web Api
Now we need to add the ability for our Web API to process Alexa requests and respond with actions for Alexa to take. To do this I have created an Alexa object model that is a blend of my interpretation of the Alexa API interfaces and some code and approaches found in the AlexaSkillsKit.NET project previously mentioned. This code, along with the full sample tutorial, will be available on GitHub for you to reference. Since the goal here was mostly to work through the details of Alexa, Azure AD, and Web API integration, the Alexa object model I provide is not 100% complete and needs further work to make it rock solid. The main goal is to make a simple "Hello World" type Alexa skill so we can test the component integrations without getting tied up in a complex Alexa Skills project. Let's add a controller to process our simple Alexa skill request.
- Add a new Web Api controller and call it HelloController
- Let’s keep things simple and strip out everything but the one Post method so our controller should look like:
- Now let’s reference the Alexa Object model classes provided and use them in the controller.
- Now we can update our Post method to process a very simple Alexa skill that basically says hello {FirstName}, where {FirstName} is one of the parameters passed to us from Alexa when the user asks our custom skill to act. More on this later when we get into the Alexa skill configuration portion. Note we are also securing this method with the [Authorize] attribute. For our purposes this simply means you either are or are not authenticated by an identity provider we trust. You can of course check additional claims to further decide whether the user can call this method, but once again we're keeping it simple for our main goal.
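A rough sketch of what that stripped-down Post method might look like; the RequestEnvelope and ResponseEnvelope types come from the provided Alexa object model, so the exact member and helper names used here are assumptions for illustration, not the definitive implementation:

```csharp
[Authorize]
[HttpPost]
public ResponseEnvelope Post([FromBody] RequestEnvelope request)
{
    // Pull the FirstName slot value out of the intent request
    // (property names assumed from the sample object model).
    var firstName = request.request.intent.slots["FirstName"].value;

    // Build a simple spoken response for Alexa to say back.
    return ResponseEnvelope.Say($"Hello {firstName}");
}
```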
Now we have an endpoint that can process Alexa skills, but there is one issue. The JwtBearerAuthentication middleware looks for the access token in the HTTP Authorization header. Alexa does not send an Authorization header; it sends the token as part of the HTTP POST body. So if we want to use the [Authorize] attribute to lock down our API, we need to do some processing of the request before it reaches the JwtBearerAuthentication middleware. To accomplish this we will create our own middleware.
- Under the Web Api project add a new item.
- Select Middleware Class and name it AlexaJWTMiddleware
- The code needs to be updated to the following:
using System;
using System.Collections.Generic;
using System.Linq;
using System.Threading.Tasks;
using Microsoft.AspNetCore.Builder;
using Microsoft.AspNetCore.Http;
using System.IO;
using System.Text;
using Alexa.Skills.Api;
using Newtonsoft.Json.Linq;

namespace AlexaSkillSample
{
    // You may need to install the Microsoft.AspNetCore.Http.Abstractions package into your project
    public class AlexaJWTMiddleware
    {
        private readonly RequestDelegate _next;

        public AlexaJWTMiddleware(RequestDelegate next)
        {
            _next = next;
        }

        public async Task Invoke(HttpContext context)
        {
            if (!context.Request.Headers.Keys.Contains("Authorization"))
            {
                // Keep the original stream in a separate
                // variable to restore it later if necessary.
                var stream = context.Request.Body;

                // Optimization: don't buffer the request if
                // there was no stream or if it is rewindable.
                if (stream == Stream.Null || stream.CanSeek)
                {
                    await _next(context);
                    return;
                }

                try
                {
                    using (var buffer = new MemoryStream())
                    {
                        // Copy the request stream to the memory stream.
                        await stream.CopyToAsync(buffer);
                        byte[] bodyBuffer = new byte[buffer.Length];
                        buffer.Position = 0L;
                        buffer.Read(bodyBuffer, 0, bodyBuffer.Length);
                        string body = Encoding.UTF8.GetString(bodyBuffer);

                        RequestEnvelope alexaRequest = RequestEnvelope.FromJObject(JObject.Parse(body));
                        if (alexaRequest?.session?.user?.accessToken != null)
                        {
                            context.Request.Headers["Authorization"] =
                                "Bearer " + alexaRequest.session.user.accessToken;
                        }

                        // Rewind the memory stream.
                        buffer.Position = 0L;

                        // Replace the request stream with the memory stream.
                        context.Request.Body = buffer;

                        // Invoke the rest of the pipeline.
                        await _next(context);
                    }
                }
                finally
                {
                    // Restore the original stream.
                    context.Request.Body = stream;
                }

                // The pipeline has already been invoked for the buffered
                // request; don't invoke it a second time below.
                return;
            }

            await _next(context);
        }
    }

    // Extension method used to add the middleware to the HTTP request pipeline.
    public static class AlexaJWTMiddlewareExtensions
    {
        public static IApplicationBuilder UseAlexaJWTMiddleware(this IApplicationBuilder builder)
        {
            return builder.UseMiddleware<AlexaJWTMiddleware>();
        }
    }
}
You can review this code for yourself, and if you want to better understand the ASP.NET Core middleware coding model, please reference the documentation. The main thing to understand here is that we are plucking the access token out of the HTTP POST body and adding it to the Authorization header as a bearer token. As you can see, doing this is a little trickier than it sounds. The minute you read the request's body payload off the buffer, you have essentially cleared the buffer and can't rewind it. So this approach makes a copy of the stream into a MemoryStream before processing the data, and gets at the content we need from there. A MemoryStream buffer can be rewound. Once we are done, we rewind the MemoryStream buffer, place it back on the request body stream, and send the context on its way in the pipeline. Now we can have this middleware do the processing for us by adding the following to the Startup class's Configure method.
public void Configure(IApplicationBuilder app, IHostingEnvironment env, ILoggerFactory loggerFactory)
{
    loggerFactory.AddConsole(Configuration.GetSection("Logging"));
    loggerFactory.AddDebug();

    app.UseAlexaJWTMiddleware();

    app.UseJwtBearerAuthentication(new JwtBearerOptions
    {
        Authority = Configuration["Authentication:AzureAd:AADInstance"] + Configuration["Authentication:AzureAd:TenantId"],
        Audience = Configuration["Authentication:AzureAd:Audience"]
    });

    app.UseMvc();
}
Notice we placed this above app.UseJwtBearerAuthentication(). This is very important, as I mentioned before: our middleware needs to move the access token from the body into the header before authentication happens, so the authentication middleware has an Authorization header to process. There might be a more elegant way to do this; you could probably write a full authentication middleware for Alexa, but this approach let me keep the middleware relatively simple and still leverage the rest of the built-in functionality. At this point our Web API is good to go, so we can move on to setting up an application registration in Azure Active Directory for the Alexa front end piece.
Step 4 – Creating an Application Registry in Azure AD for the Alexa Front End
Now we have to configure an application registry for Alexa and grant it permissions to create access tokens that can access our web api.
- Go back into the Azure Portal where we left off and select App registrations
- Click Add on the App registrations blade
- On the Create Blade
- Name: Alexa Tutorial Skill Frontend
- Application Type: Native
This creates the application registration that Alexa will account link to. Notice we selected Native for the Application Type. I'm not really sure why this is required, but without setting the type to Native, Alexa account linking will not complete successfully. Also notice the {lookuplater} in the sign-on URL. We'll need to come back later and update this after we create the Alexa front end; the Alexa configuration will provide the full URL for us after it creates the new skill. At that time we will also get the key from this application registration, but there's no need until we get into the Alexa configuration. The only step left while we're still here is to grant the Alexa Tutorial Skill Frontend application registration permissions to the AlexaTutorialCloudService.
- Click on the Alexa Tutorial Skill Frontend registered app and go into the Required permissions blade.
- Click the Add
- In the Add API Access blade
- Click on the Select an API
- In the Select an API blade you'll need to search for AlexaTutorialCloudService
- Select the AlexaTutorialCloudService and click on the Select button.
- In the Enable Access blade check the Access AlexaTutorialCloudService
- Click on the select button.
- Click Done in the Add API access blade
That's it for now. We'll come back to this location later to update our sign-on URL and get our client secret (AKA key) after we have created our new Alexa skill in the Alexa Developer Portal.
Step 5 – Creating an Alexa Skill Frontend
Now we will go to the Amazon Developer Portal to create the Alexa skill configuration for the portion of this Alexa skill that runs in the Amazon cloud. I won't go into the details of what each step means; just plug in what I provide. You can read the documentation to better understand the full set-up.
- Login to with your developer account.
- Click on the Alexa Section
- Click on the Alexa Skills Kit>Get Started
- Click on the Add a New Skill button in the upper right hand corner
- In the Skills Information section
- Skill Type: Custom Interaction Model
- Language: English
- Name: Alexa ASP.NET Core Web API Tutorial
- Invocation Name: Tutorial
- Global Fields > Audio Player: No
- Click save then click Next
- In the interaction Model Section
- Intent Schema:
{
  "intents": [
    {
      "intent": "Tutorial",
      "slots": [
        {
          "name": "FirstName",
          "type": "AMAZON.DE_FIRST_NAME"
        }
      ]
    },
    {
      "intent": "AMAZON.HelpIntent"
    },
    {
      "intent": "AMAZON.StopIntent"
    }
  ]
}
- Sample Utterances: Tutorial Can you greet {FirstName}
8. Click Save then click next
9. In the Configuration Section
- Endpoint
- Service EndPoint Type: HTTPS
- Geographical region: North America
- North America: https://{App Service Hostname}/api/Hello
- This is the URL Endpoint of our Web API as deployed to the Azure App Service
- Account Linking: Yes
- Authorization URL:{Azure AD Tenant}/oauth2/authorize?resource={App ID URI}
- You can get first part of this url by going to the Azure Portal and clicking on the Endpoints for the Azure AD Directory
- The resource parameter needs to be added because Azure AD appears to require it. This is the value mentioned when we created the Web API. If you need to look it up, it is the Audience value in the appsettings.json
- Client Id: {your native app registration Application id}
- Domain List: Leave default
- Scope: leave default
- Authorization Grant Type: Auth Code Grant
- Access Token URI:{Azure AD Tenant}/oauth2/token
- Client Secret: This is the key associated with your Azure AD application registration for the Alexa front end
- We’ll generate this now.
- Client Authentication Scheme: HTTP Basic (Recommended).
- Privacy Policy File URL: {Path to your policy file}
- In order to account link, Alexa requires you to provide a policy file.
- For a real app we would create a real policy file, but for this tutorial just add a Policy.html file under wwwroot in the Web API project and call app.UseStaticFiles() right above app.UseMvc() in Startup.cs. This makes it accessible via the cloud service as: https://{App Service Hostname}/Policy.html
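The resulting middleware ordering can be sketched as follows; this is the same Configure method shown earlier, with only the app.UseStaticFiles() line added to serve the policy page:

```csharp
public void Configure(IApplicationBuilder app, IHostingEnvironment env, ILoggerFactory loggerFactory)
{
    loggerFactory.AddConsole(Configuration.GetSection("Logging"));
    loggerFactory.AddDebug();

    // Map the Alexa access token from the POST body into the Authorization header.
    app.UseAlexaJWTMiddleware();

    // Validate the bearer token against Azure AD.
    app.UseJwtBearerAuthentication(new JwtBearerOptions
    {
        Authority = Configuration["Authentication:AzureAd:AADInstance"] + Configuration["Authentication:AzureAd:TenantId"],
        Audience = Configuration["Authentication:AzureAd:Audience"]
    });

    // Serve wwwroot content such as Policy.html.
    app.UseStaticFiles();

    app.UseMvc();
}
```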
10. Click Save
11. We now also need to update the redirect URI for our Alexa Tutorial Skill Front end Application registration in Azure Active Directory
- You can get the proper Redirect URI from the Alexa Configuration page
- Copy this URL and update the Redirect URI on the Azure AD application registration for our Alexa front end
12. Go back to the Alexa Configuration page and click next
13. In the SSL Certificate Section select: My development endpoint is a sub-domain of a domain that has a wildcard certificate from a certificate authority.
14. Click Save and then click next
That's the bare-bones configuration we need for the Alexa front end so we can account link and then test. If you actually build a real Alexa skill that you will publish, there are additional steps that need to occur.
Step 6 – Account Linking
Now we are ready to try linking our Amazon account for the Alexa skill to our Azure AD admin account. Normally you would use a regular user account in both systems, but once again we're keeping it simple.
- Login to
- Navigate to Skills>Your Skills
- You should see our newly created skill. Click on it.
- Click on the Link Account button
- You will be redirected to login to Azure Active Directory and then asked to accept the access being requested. Once you get through that, if all goes well the result should be:
Now once we deploy our Web Api application we are ready to test.
Step 7 – Publish Web Api to Azure app service
At this point we need to publish our Web API up to the cloud so we have something to test against.
- In Visual Studio right click on the AlexaTutorialCloudService project
- In the context menu select Publish
- This should be all set-up and ready to go but if not fill in the proper values in the Publish dialog
- Press the publish button.
Once this completes you should be ready to test your Alexa Integration.
Step 8 – Testing the Alexa Integration
We have multiple options for testing the Alexa integration: we can use an actual device, the Amazon Developer Portal test client, or the Alexa skill testing emulator. I'll go over the last two since they don't require you to purchase anything.
Amazon Developer Portal
- In the Amazon Developer portal where we configured the Alexa application
- Make sure you are under the Skill you are wanting to test
- Click on the test section
- In the Enter Utterance Section enter the following: Tutorial Can you greet {your name}
- {your name} = your name or any name
- This is defined in the interaction model section for the skill.
- Click the Ask Alexa ASP.NET Core Web API Tutorial button.
- If all goes well you should see something that looks like:
- If you click on the Listen Button you will hear the Alexa voice say “Hello {yourname}”
Alexa Skill Testing Tool (Emulator)
The Alexa emulator is a simulated version of using an Alexa device that is surfaced in a web page. You need to have a microphone and speaker to use this tool.
- Open browser and go to following URL:
- Click on the Login with Amazon button.
- Use your Amazon Developer account
- Once logged in, you're ready to test
- Use the mouse to press the Microphone button and at the prompt say the following: Ask Tutorial can you greet {your name}
- If all goes well you should hear Alexa say "Hello {your name}"
Wrap Up
Hopefully you will have success working through this tutorial. My main hope is that this information saves you all the discovery and digging I had to do to figure out how to make these pieces play nicely together. This post just scratches the surface of building a full Alexa skill that you would want to publish to Amazon. Having this information should let you focus on those aspects if you are going to create, or are in the process of creating, an Alexa skill and need to do account linking to secure the system.
Helpful Tools
The following are some very helpful tools you should look to get and use for troubleshooting web apis and OAUTH security in general.
- Postman plugin for Chrome –
- A tool to decode JWT access tokens and verify the signature
Sample Code
The companion source code for the web api portion of this tutorial can be located on GitHub at the following location.
Links
- Visual Studio and .NET Core API’s
- Azure SDKs and Tools.
https://devblogs.microsoft.com/premier-developer/amazon-alexa-skills-development-with-azure-active-directory-and-asp-net-core-1-0-web-api/
Hello all,
I am new to the forum and am seeking some help on why my program is not working as expected :confused:. For some reason, my program skips over letting the user enter their town. I would be very grateful for any help that one may provide me with:
Code Java:
import java.util.Scanner;

public class Okay {

    public static void main(String[] args) {
        String name, town;
        int age, siblings;
        Scanner details = new Scanner(System.in);
        System.out.println("Hey, what's your name?");
        name = details.nextLine();
        System.out.println("That's cool my friend is named " + name + " too!");
        System.out.println("How old are you?");
        age = details.nextInt();
        System.out.println("Wow, my friend, " + name + ", is " + age + " years old" + " too!");
        System.out.println("What town do you live in?");
        town = details.nextLine();
        System.out.println("This is getting kind of crazy... My friend, " + name + " lives in " + town + ", too!");
        System.out.println("How many siblings do you have?");
        siblings = details.nextInt();
        System.out.println("STOP RIGHT THERE. THIS IS INSANE. My friend, " + name + ", has " + siblings + " siblings, too!!!");
        System.out.println("Wait, " + name + ", is that YOU!?");
    }
}
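The skip happens because Scanner.nextInt() reads only the digits and leaves the trailing newline in the input buffer, so the next nextLine() call returns that leftover empty line instead of waiting for the town. A minimal self-contained demo of the behavior and the usual fix, feeding the Scanner from a string instead of System.in so it runs without typing (the class name and sample input are made up for illustration):

```java
import java.util.Scanner;

public class NextLineSkipDemo {
    public static void main(String[] args) {
        // Simulate the user typing "18" then "Springfield" on separate lines.
        Scanner details = new Scanner("18\nSpringfield\n");
        int age = details.nextInt();      // reads 18, leaves "\n" in the buffer
        String town = details.nextLine(); // consumes the leftover "\n" -> ""
        System.out.println("town=[" + town + "]");   // prints town=[]

        // Fix: discard the rest of the current line before reading the next one.
        details = new Scanner("18\nSpringfield\n");
        age = details.nextInt();
        details.nextLine();               // throw away the dangling newline
        town = details.nextLine();        // now reads "Springfield"
        System.out.println("town=[" + town + "]");   // prints town=[Springfield]
    }
}
```

The same extra nextLine() call after each nextInt() in the original program makes the town prompt wait for input as expected.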
http://www.javaprogrammingforums.com/%20whats-wrong-my-code/35969-help-my-simple-scanner-program-printingthethread.html
file uploads to my web site
file uploads to my web site How can I allow file uploads to my web site
Servlets and
Servlets and Sir...! I want to insert or delete records form oracle based on the value of confirm box can you please give me the idea....
SERVLETS
SERVLETS I have two Servlet Containers, I want to send a request from one Servlet from one container to one Servlet in the other container, How can I do
Submitting Web site to search engine
your
pages to the search engine index. Once your web site appears...
Registering Your Web Site
To Search Engines... Web Sites
Once your web site is running, the next job for you
servlets
servlets how can I run java servlet thread safety program using...'. I follow the same procedure what you send by the links.but i got the same errors
coding is:
import java.io.*;
import java.sql.
Servlets
(15,Howdidyouhear);
int i=pstm.executeUpdate();
if(i==1){
System.out.print("insert...\myproject\WEB-INF\classes>javac InsertServlet.java
InsertServlet.java:3
Web Site Goals - Goal of Web Designing
Web Site Goals - Goal of Web Designing
What are the prime features necessary... and the client.
What is Custom Web Design?
Custom web site is little bit... launching it?s website?
Every company want to boost it?s business through website
Servlets Books
to program dynamic Web content using Java Servlets, with a fine introduction.... And while there is a reference to the publisher's Web site, it wasn't easy... that any Java web developer who uses JavaServer Pages or servlets will use every day
How to Upload Site Online?
How to Upload Site Online? After designing and developing your... to be purchased from the website hosting provider.
I have hosting account but I... program is free for uploading a website?
How to Upload Site Online?
Thanks
web site building
web site building to make a website, what are the hardware and software requirements
servlets - Java Interview Questions
. now in my project i need to work with blob. so i want to upload image from html and processing in servlets and store in DB like ORACLE. And next i want...servlets Good Evening.
I want to work with BLOB datatype. I know
JavaScript array index of
JavaScript array index of
In this Tutorial we want to describe that makes you to easy to understand
JavaScript array index of. We are using JavaScript... are followed by break line. The for loop execute
the script till variable
want a project
want a project i want to make project in java on railway reservation using applets and servlets and ms access as database..please provide me code and how i can compile and run
servlets
servlets why do we need web-inf directory in web application why do we need web-inf directory in web application? means what's the benefits of doing so
Sessions in servlets
Sessions in servlets What is the use of sessions in servlets?
The servlet HttpSession interface is used to simulate the concept that a person's visit to a Web site is one continuous series of interactions
I want code below mention programe
I want code below mention programe Create a web application using any technology that accepts a keyword and displays 10 relevant tweets from Twitter in real-time for that keyword
Open Source web Templates
Open Source web Templates
Open
Source Web Templates
A web site... a great jump start towards a professional web site, and "JSB Web Templates" offers fully customizable Dreamweaver web site templates suited for all industries Hi
what is pre initialized servlets, how can we achives?
When servlet container is loaded, all the servlets defined... the request it loads the servlet. But in some cases if you want your servlet
servlets
what are sessions in servlets what are sessions in servlets
... and separate session variable is associated with that session. In case of web...:
java servlets with database interaction
java servlets with database interaction hai friends i am doing a web application in that i have a registration page, after successfully registered... i have done if you want i can send sample code.
mycode
import java.io. what are different authentication options available in servlets
There are four ways of authentication:-
HTTP basic... authentication
In FORM-based the web container invokes a login page. The invoked
web services in java - Java Beginners
web services in java hello there,
I want to develop a web site using java as platform.I have undergone the basic training in java...://
Thanks
servlets
allows the dynamic inclusion of web components either by including in the current component or by forwarding to another web component. A typical use is to include
web
web i want to create a discussion forum in internet pls give me guidelines to do so
the servlets
with the servlet container. There is only one ServletContext for the entire web application and the components of the web application can share it.
It gives information about the environment. It represents a Servlet's view of the in side WEB-INF (for security reasons, I want these documents visible after user
servlets
don't want their servlet to support the main HTTP methods (POST, GET), so it would
servlets
servlets hi i am doing one servlet program in which i strucked... the student details i have to forward that to another jsp page and there i have... in the resultset object i have to display in this jsp page
plz help me
Servlets
Servlets when i am compiling the following servlet program it compiles the successfully.but when i try to run the program it gives the following...);
int i = pstm.executeUpdate();
String sql = "select,Jsp,Javascript - JSP-Servlet
.
Thanks I just want to show a busy cursor whenever user clicks...Servlets,Jsp,Javascript Hi in my application i am creating a file... put in the file are quite large it takes about 1 minute to create the file i want
Servlets
st=con.createStatement();
int i=st.executeUpdate("insert
Help me to create a sharing text just like facebook using jsp servlet and oracle
thing is that i want to pass a value to servlet without refreshing the whole jsp...Help me to create a sharing text just like facebook using jsp servlet and oracle I tried to use lot of methods but i couldn't get the proper method
Adobe Flex Component Index
a rich look in the web site as well as with the help of various validators we can put validations on different field so easily and effectively.
Index of Flex
Servlet Tutorials Links
technology of choice for extending and enhancing Web servers. Servlets provide...
Servlet Communication:
Servlets are not alone in a Web Server. They have... servlets and related technologies. This page lists the mailing lists you may want
jsp and servlets
jsp and servlets i want code for remember password and forget password so please send as early as possible ............. thanks in advance
Please visit the following link:
i want to learn Jquery
i want to learn Jquery i want to learn jquery can u plz guide me
Yes, you can learn these technologies by yourself. Go through the following links:
Ajax Tutorials
JSON Tutorials
JQuery Tutorials
Accessing Database from servlets through JDBC!
processing.
With
Java servlets web developers can... run it on any Servlet enabled web
server. Servlets runs entirely inside... side, it does not depend on browser
compatibility. I just send the result
i want for statement codding
i want for statement codding what is the for condition following out put
1
2 2
3 3 3
4 4 4 4
5 5 5 5
Blocking a web site using java program
Blocking a web site using java program How to block a url typed in browser using java program
Creating methods in servlets - JSP-Servlet
Creating methods in servlets I created servlet and jsp file.I....
Document : index
Created on : Dec 15, 2008, 7:49:51 PM
Author : mihael...");
fullname = name();
out.println(" My Full Name from the web page
i want code for these programs
i want code for these programs Advances in operating system
Laboratory Work:
(The following programs can be executed on any available and suitable platform)
Design, develop and execute a program using any
just askin - Java Beginners
just askin were can i find a website,, who have a java bean program which include its code, algorithm and flowchart? plsss
web page designing - XML
web page designing what is the use of XML in web page designing? i want a sample code
i mean to say what is the real use of XML in jsp???
can any body suggest me
Spring MVC Say Hello Example
Say Hello application in Spring 2.5 MVC
....
In this tutorial we will create a web application that will present a
form... folder. In the
index.jsp file we will create a hyperlink "Say Hello".
servlets - JSP-Servlet
servlets hi,
can anybody help me as to what exactly needs to be done for compilation and execution of servlets. i also want to know the software required for this
Chapter 2. Design, build and test web components
Page Template for the Web Site if you want your
entire Web site to share....
Note: If you want to specify a target server for the Web project you are going...: If you want to add a Web project as a module
Professional Web Design Services For You Web Site
Professional Web Design Services For You Web Site
... design of the web site....
However there are also some guidelines to the use of graphics on the web
MYSQL and SERVLETS - JDBC
servlets .I do not know that how to combine these two programs into a single... .How I can do using servlets
Hi friend,
For developing a simple...://
Thanks
iPhone's Missing Features
iPhone is a great phone with amazing features... there are always pre-paid users.
Browser Plug-ins/Flash/Javascript: The iPhone's Safari web... the service provider offers you or you can say good bye to your iPhone dreams
Well, submitting what I did Wednesday. No RDF yet, but I'm almost getting to that point. Tomorrow I'll meet Peter Willems from TNO (Dutch practical research outfit) who also deals with RDF in the same field. Only he uses java and jena so he won't be able to help me with this.
But: google frustrated my attempt at low-key programming and only sending around emails once I got something working. I got an email from Dave Kleinschmidt asking whether he could help out on this rope project... Great to get such an email :-)

Writing the first code
======================
I started off with the list of standard imports listed in the ZDG, including them in RopeProduct:
from Acquisition import Implicit
from Globals import Persistent
from AccessControl.Role import RoleManager
from OFS.SimpleItem import Item
Then I added the following to my RopeProduct class:
""" Rope product class. Makes rdflib available to zope. """ meta_type = 'Rope' # Required by Item baseclass def __init__(self, id): self.id = id # id needed by Item baseclass
Yep, don't forget that documentation at the beginning. The
Item
baseclass needs
meta_type, which is the name under which your
product will be known to zope. A few pages later in the ZDG I decided
to add a
PropertyManager as well. That way I could add the handy
title attribute and allow people to add their own properties
afterwards.
The next step is to add security declarations. As I haven't added any
rdflib stuff there isn't yet a thing that can be
made secure or public. So I added an
index_html method including the
declarePublic statement:
from AccessControl import ClassSecurityInfo
...
security = ClassSecurityInfo()
security.declarePublic('index_html')

def index_html(self):
    """ Dummy index page. """
    return """
    Rope product instance
    <p>My id is %s</p>
    """ % self.id
Note to self: don't forget that initial code comment next time... Methods without that comment won't be executable in zope.
Then the ZDG tells us to initialise the class. Note that I added the following on the "top level" of the file, not inside the class itself (I made that mistake the first time...):
from Globals import InitializeClass
...
InitializeClass(RopeProduct)
Browsing a bit ahead in the ZDG reminded me that I needed to define a method that adds the RopeProduct to a zope folder, coupled with a form to fill in the needed parameters. I'll create the form with simple html for the moment, later on I'll put in a page template:
def addForm():
    """ Returns an html form used to instantiate a RopeProduct instance. """
    return """
    Add Rope
    id: <br>
    """

def addRope(dispatcher, id):
    """ Create a new Rope and add it to myself. """
    rope = RopeProduct(id)
    dispatcher.Destination()._setObject(id, rope)
Note that both are again on the toplevel. Last thing left to do before we
can do some real testing is to add an
__init__.py file to allow zope
to auto-load this file. Well, auto isn't really the correct term, as
almost everything in zope is explicit. Zope does some real magic, but
it is all plain for all to see. So automatically adding a product to
zope means writing a function in a place where zope will find and
execute it. Here is the '__init__.py':
# Import the actual Rope product and the initialisation form+adder.
from rope import RopeProduct, addForm, addRope

def initialize(registrar):
    registrar.registerClass(
        RopeProduct,
        constructors = (addForm, addRope),
        )
Well, it doesn't actually do anything, but it is time to test it in
zope for the first time. I made a symlink from my zope products
directory to the directory I put all my work in (windows users will
need to copy it). I started up zope and waited for some error
messages. They came.
addForm() needed a parameter... That's not in
the ZDG. Changing the definition to the following helped:
def addForm(unknown): # I couldn't really find what 'unknown' does...
After correcting some further small errors it seemed to work. The product seemed to exist. Headed over to the management interface for the product to get auto-refresh working. No, I'm not going to restart zope over and over again.
Ok, I now get the form. Only... I pressed submit and it didn't seem to do anything. Pressing submit again got me the error that the id I just entered already existed. Looking in the management interface the object did exist. Clicking on it got me the security interface. Hand-editing the url to the address of the object got me the desired dummy html page. It's working!
Now to work on that management interface. It needs a view tab for
index_html and another one for the management of the properties. I
didn't include the code for that property tab for nothing. Browsing a
bit further in the ZDG gives me the hint to add this to my
RopeProduct class:
manage_options = (
    {'label': 'Properties', 'action': 'manage_propertiesForm'},
    {'label': 'View', 'action': 'index_html'},
    {'label': 'Security', 'action': 'manage_access'}
    )
Viewing it gives me an error on the properties tab... "str object has no attribute 'copy'". Didn't figure it out before I went home. Let's see tomorrow.
Well, tomorrow didn't work out, spent the day arranging and giving a UML course for fourth year students. Had some problems with getting it all arranged, but the local sysadmins were great and helped out very willingly. I've never had any big problems with them, good bunch of people.
TWC9: New Windows 10, New Surface Book, New Visual Studio, New Web Documentation and…
Remember the days when we had the choice of one color, usually White or Green, for our code (cause we were using a DOS)? And that first time you saw syntax coloring and how your brain exploded?
Yeah, I don't either (um... yeah...)
Today color is all over our coding windows or IDE's. But, in many cases, it's not really "smart" coloring. It's simple, this is a data type, this is a variable, this is the start/end of an enclosure, etc.
What would be cool is a way to use the smarts in the .NET Compiler Platform (formerly known as Roslyn) to intelligently color your code...
I guess that makes George Aleksandria's project officially cool!
An open source extension that uses Roslyn to color and decorate our C# code.
A Visual Studio 2017 extension that uses the Roslyn APIs to analyze C# source code and colorize the appropriate syntax nodes in different colors. It makes the supported elements easily recognizable.
Extension supports following elements:
- Namespaces
- Alias for namespaces
- Fields in the local scope
- Parameters
- Instance methods and constructor
- Static and extension methods
- Events
- Properties
- Instance fields
- Enum fields
Use the Visual Studio Fonts and Colors options to change the colors for items. Look for items in Display Items that start with the CoCo format:
If you are looking for the same extension for Visual Studio 2015 you can get it from here.
Examples
In the Dark theme:
In the Light/Blue theme:
... Click through to download it ...
And has George said, it's open source,
i want to improve my logic
@alaa: It's open source, check out
Thread: iMovieHD DV import difficulties
iMovieHD DV import difficulties
- Member Since
- Jul 02, 2007
- Location
- Bath, England
- 1
Hi Folks
I'm a first time user, so apologies if I am naive as to etiquette!
I am able to successfully import DV from a Sony DCR-TRV22E into iMovie on my iMac (PowerMac2,1, PowerPC 750 (83.2), 640MB RAM, 400 MHz) with OSX 10.4.9.
However, when I use the same camcorder with a MacPro running the latest OSX version update, iMovieHD will no longer recognise the device. It worked perfectly a couple of months ago.
I have tried all the hard and soft solutions recommended by both Apple and Sony and have tried other solutions from other forums.
I am getting very frustrated now!
Have the latest updates removed/disabled the drivers? Do I need to reinstall? Is this a result of Sony and Apple having a hissy fit?
Turtle
Vue.js, one of the most popular front-end libraries, brings a new way of thinking to building front-end projects quickly. This article introduces how to quickly build a simple web application (a primary demo) with Go + vue.js.
Environmental preparation:
1. Install go language and configure go development environment;
2. Install node.js and NPM environment;
Use of gin:
In order to quickly build the back-end application, gin is used as the web framework. Gin is a web framework implemented in Go. Its API is very friendly, and it has excellent routing performance and detailed error messages. If you want to quickly develop a high-performance production application, gin is a good choice.
Download and install gin:
go get github.com/gin-gonic/gin
Used in Code:
import "github.com/gin-gonic/gin"
Here is a simple example of using gin:
package main

import "github.com/gin-gonic/gin"

func main() {
	r := gin.Default()
	r.GET("/ping", func(c *gin.Context) {
		c.JSON(200, gin.H{
			"message": "pong",
		})
	})
	r.Run() // listen and serve on 0.0.0.0:8080
}
Note: Gin easily supports the various HTTP request methods and can return various types of data. For details, see the gin documentation.
Start a project:
Create a new project in gogland (IDE): demo, and create a main.go file as the project entry:
package main

import (
	"demo/router"
)

func main() {
	router.Init() // init router
}
Note: package main in go must contain a main function.
As can be seen from the above code, we import the router package under demo and explicitly call the router package's Init() function. Now create a new router directory under the demo project and create router.go in that directory to hold the routing rules. The code is as follows:
package router

import (
	"demo/handlers"
	"github.com/gin-gonic/gin"
)

func Init() {
	// Creates a default gin router
	r := gin.Default()

	// Grouping routes
	// group: v1
	v1 := r.Group("/v1")
	{
		v1.GET("/hello", handlers.HelloPage)
	}

	r.Run(":8000") // listen and serve on 0.0.0.0:8000
}
Here, we create a default gin router, assign it a group v1, listen for /hello requests and route them to the view function HelloPage, and finally bind to 0.0.0.0:8000.
Now let’s create a view function, create a new handlers directory, and create a hello.go file in the directory. The code is as follows:
package handlers

import (
	"github.com/gin-gonic/gin"
	"net/http"
)

func HelloPage(c *gin.Context) {
	c.JSON(http.StatusOK, gin.H{
		"message": "welcome",
	})
}
c.JSON is gin's built-in method for returning JSON data; it takes two parameters, the status code and the content to return. http.StatusOK means the response status code is 200, and the body is
{"message": "welcome"}.
Note: Gin also provides more response methods, such as c.String, c.HTML, c.XML, etc.; please explore them yourself.
So far, we have implemented the most basic code for a gin web service. Running the code:
~/gofile/src/demo$ go run main.go
[GIN-debug] [WARNING] Running in "debug" mode. Switch to "release" mode in production.
 - using env:  export GIN_MODE=release
 - using code: gin.SetMode(gin.ReleaseMode)
[GIN-debug] GET /v1/hello --> demo/handlers.HelloPage (3 handlers)
[GIN-debug] Listening and serving HTTP on :8000
As you can see, we have successfully started the web server listening on local port 8000. Now we can access the /v1/hello address:
curl -XGET '' -i
HTTP/1.1 200 OK
Content-Type: application/json; charset=utf-8
Date: Mon, 18 Sep 2017 07:38:01 GMT
Content-Length: 21

{"message":"welcome"}
Here, the server has responded to the request correctly and returned
{"message":"welcome"}. At the same time, the response headers show that the status code is 200 and the returned data type is Content-Type: application/json.
Let’s look at the output information of the server’s console:
[GIN] 2017/09/18 - 15:37:46 | 200 | 81.546µs | 127.0.0.1 | GET /v1/hello
So far, we have successfully built a simple web server. But in the real world, clients will certainly need to exchange data with the server. Next, let's see how gin receives parameters.
Gin parameter usage
With REST widely popular, gin makes receiving URL parameters easy:
We defined a new route under the previous group V1 route:
v1 := r.Group("/v1")
{
	v1.GET("/hello", handlers.HelloPage)
	v1.GET("/hello/:name", func(c *gin.Context) {
		name := c.Param("name")
		c.String(http.StatusOK, "Hello %s", name)
	})
}
Next visit:
curl -XGET '' -i
HTTP/1.1 200 OK
Content-Type: text/plain; charset=utf-8
Date: Mon, 18 Sep 2017 08:03:02 GMT
Content-Length: 11

Hello lilei
As you can see, with the c.Param("key") method gin successfully captures the parameters in the URL request path. Gin can also receive ordinary query parameters, as follows:
v1.GET("/welcome", func(c *gin.Context) { firstname := c.DefaultQuery("firstname", "Guest") lastname := c.Query("lastname") c.String(http.StatusOK, "Hello %s %s", firstname, lastname) })
Similarly, we visit:
curl -XGET '' -i
HTTP/1.1 200 OK
Content-Type: text/plain; charset=utf-8
Date: Mon, 18 Sep 2017 08:11:37 GMT
Content-Length: 12

Hello li lei
Through
c.Query("key") the URL parameter is received successfully. c.DefaultQuery falls back to its default value ("Guest") when the parameter does not exist.
Note: Gin can also receive more parameters of different types. Please check gin’s documentation;
Gin returning static pages
Website development inevitably involves serving static resources. Here is a simple example of gin returning a static page and filling in data.
Create a new templates directory, and create index.html under the directory, as follows:
<html> <h1> {{ .title }} </h1> </html>
Create a new group V2, and create a / index route to return to the static HTML page:
r.LoadHTMLGlob("templates/*") v2 := r.Group("/v2") { v2.GET("/index", func(c *gin.Context) { c.HTML(http.StatusOK, "index.html", gin.H{ "title": "hello Gin.", }) }) }
Use LoadHTMLGlob to set the template file path and c.HTML to return the static page. Visit:
curl -XGET '' -i
HTTP/1.1 200 OK
Content-Type: text/html; charset=utf-8
Date: Mon, 18 Sep 2017 08:29:13 GMT
Content-Length: 55

<html lang="en"> hello Gin. </html>
Gin returns the static file index.html and fills the title data into the template placeholder {{ .title }}.
Note: learning the template language itself is left to the reader. Of course, static resources can also be served by nginx to reduce the load on the application server.
Gin default route
We can also define some default routes for gin:
// 404 NotFound
r.NoRoute(func(c *gin.Context) {
	c.JSON(http.StatusNotFound, gin.H{
		"status": 404,
		"error":  "404, page not exists!",
	})
})
At this time, we visit a page that does not exist:
curl -XGET '' -i
HTTP/1.1 404 Not Found
Content-Type: application/json; charset=utf-8
Date: Mon, 18 Sep 2017 09:22:38 GMT
Content-Length: 46

{"error":"404, page not exists!","status":404}
Gin Middleware
In Go's net/http we can easily design middleware, and gin also makes middleware very convenient to use. We can define global middleware, group middleware and single-route middleware, which lets us limit the scope of each middleware.
First, define a simple middleware and set it as a global middleware:
// PrintMiddleware is a function for test middleware
func PrintMiddleware(c *gin.Context) {
	fmt.Println("before request")
	c.Next()
}
Next, register as global middleware:
r := gin.Default()
r.Use(PrintMiddleware)
Then we initiate the client request and view the gin console output:
curl -XGET '' -i

[GIN-debug] Listening and serving HTTP on :8000
before request
[GIN] 2017/09/18 - 17:42:50 | 200 | 809.559µs | 127.0.0.1 | GET /v2/index
As you can see, gin executed the custom middleware before handling the request. c.Next() means that once the middleware finishes, the request is passed on to the next handler.
The above defines a global middleware. Now suppose we want to verify (simulating a login) every request to the v2 group, and that each request carries a token parameter holding the authentication information. We can implement this middleware function:
func ValidateToken() gin.HandlerFunc {
	return func(c *gin.Context) {
		token := c.Request.FormValue("token")
		if token == "" {
			c.JSON(401, gin.H{
				"message": "Token required",
			})
			c.Abort()
			return
		}
		if token != "accesstoken" {
			c.JSON(http.StatusOK, gin.H{
				"message": "Invalid Token",
			})
			c.Abort()
			return
		}
		c.Next()
	}
}
Then we register the middleware in group V2:
v2.Use(ValidateToken())
Next, we will visit as usual:
curl -XGET '' -i
HTTP/1.1 401 Unauthorized
Content-Type: application/json; charset=utf-8
Date: Mon, 18 Sep 2017 10:01:10 GMT
Content-Length: 32

{"message":"Token required"}
We are prompted that a token is required. When we add the token:
curl -XGET '' -i
HTTP/1.1 200 OK
Content-Type: text/html; charset=utf-8
Date: Mon, 18 Sep 2017 10:02:28 GMT
Content-Length: 29

<html> hello Gin. </html>
As you can see, gin passed the verification and responded to the request correctly. c.Abort() terminates the request.
By this point you should have a general understanding of gin and can write your own code. Real development brings requirements of all kinds; when they come up, consult the documentation and other sources to find the answers.
Vue.js use
Vue.js is a popular front-end framework. We can use vue.js and gin to build front-end and back-end separate applications.
Official document of Vue:
Build Vue development environment:
1. Install node.js and NPM.
2. Install the official Vue scaffold, vue-cli, via npm:

npm install vue-cli -g   # global install
vue-cli is the official vue.js project scaffold, which can be used to create Vue projects quickly. GitHub address:
3. Next, use Vue cli to create a project, using the official web pack template:
vue init webpack demo
Here the default settings can be used; press enter all the way. Once the installation completes, enter the project:
cd demo
Installation dependency (need to wait for a while):
npm install
4. Start the development server:
npm run dev
Visit http://localhost:8080 and you can see the initialization page that Vue sets up for us.
Here we have conveniently set up the initial Vue project template. So how do we implement data interaction with the front end and back end separated? A small example I used recently demonstrates it.
Draw a simple chart using echarts
1. Create a new views directory under the src directory to store views (directory structure):
src ├── App.vue ├── assets │ └── logo.png ├── components │ └── HelloWorld.vue ├── main.js ├── router │ └── index.js └── views ├── ChartLine.vue └── index.js
2. Install the components that will be used later:
npm install echarts --save-dev   # echarts
npm install axios --save-dev     # an asynchronous HTTP request library
3. Create a new ChartLine.vue file to display the line chart. The contents are as follows:
<template>
  <div>
    <button v-on:click="refreshCharts">refresh</button>
    <div id="line" class="line"></div>
  </div>
</template>

<script>
import echarts from 'echarts'
import axios from 'axios'

export default {
  name: 'ChartLine',
  computed: {
    opt () { // the option object; see the official echarts examples
      return {
        title: {
          text: 'stacked area chart'
        },
        tooltip: {
          // omitted, see the official echarts examples for the parameters
        },
        legend: {
          data: ['email marketing']
        },
        grid: {
          // omitted
        },
        xAxis: [
          {
            // omitted
            data: []
          }
        ],
        yAxis: [
          // omitted
        ],
        series: [
          {
            name: 'email marketing',
            type: 'line',
            data: []
          }
        ]
      }
    }
  },
  methods: {
    async refreshCharts () {
      const res = await axios.get('')
      this.myChart.setOption({ // update the chart data
        xAxis: {
          data: res.data.legend_data
        },
        series: {
          data: res.data.xAxis_data
        }
      })
    }
  },
  mounted () {
    this.myChart = echarts.init(document.getElementById('line'))
    this.myChart.setOption(this.opt) // initialize echarts
    window.addEventListener('resize', this.myChart.resize) // keep the chart responsive
  }
}
</script>

<style>
.line {
  width: 400px;
  height: 200px;
  margin: 20px auto;
}
</style>
The above code implements the initialization and data filling process of echarts chart, as well as the function of clicking the button to refresh the chart;
4. Register the route by editing router/index.js:
import Vue from 'vue'
import Router from 'vue-router'
import ChartLine from '../views/ChartLine.vue'

Vue.use(Router)

export default new Router({
	mode: 'history',
	routes: [
		{
			path: '/line',
			name: 'Line',
			component: ChartLine
		}
	]
})
5. Implement the gin back-end API interface:
v1.GET("/line", func(c *gin.Context) { Legenddata: = [] string {"Monday", "Tuesday", "Wednesday", "Thursday", "Friday", "Saturday", "Sunday"} xAxisData := []int{120, 240, rand.Intn(500), rand.Intn(500), 150, 230, 180} c.JSON(200, gin.H{ "legend_data": legendData, "xAxis_data": xAxisData, }) })
6. Now we can see the chart correctly. Click the refresh button to see that the chart has refreshed the data correctly.
summary
The above is how to quickly build a simple web application (a primary demo) with Go + vue.js. I hope it helps you. If you have any questions, please leave a message and I will reply in time. Thank you very much for your support!
TimerService startup issue in JBoss AS 6 - Abdur Rahman, Apr 12, 2012 7:12 AM
I am using JBoss AS 6. I have created a timer service listed bellow:
package com.mycompany.infrastructure.notification.service;
import javax.ejb.Schedule;
import javax.ejb.Stateless;
import javax.ejb.Timer;
@Stateless
public class NotificationTimerService {
@Schedule(second = "*/30", minute = "*", hour = "*", dayOfWeek = "*", timezone = "GMT")
public void executeSomeMethod(Timer timer) {
System.out.println("Invoking daily event notification ...");
// call some session bean here
}
}
Issues:
1. When I restart the JBoss server the timer service starts executing before the container/application is loaded completely. I want it to start executing once all the session beans (and other classes) are loaded.
2. It executes all the previous invocations that were missed while the JBoss server was offline. I want it to ignore invocations missed while offline.
Thanks.
1. Re: TimerService startup issue in JBoss AS 6 - Wolf-Dieter Fink, Apr 12, 2012 9:07 AM (in response to Abdur Rahman)
You should inject the reference to other session beans via @EJB instead of looking it up; this should set a dependency.
The @Schedule timer catches up on all missed events if it is persistent (the default). If you don't want that behaviour you should set @Schedule(... persistent=false); this will avoid any catch-up while the bean is down.
2. Re: TimerService startup issue in JBoss AS 6 - Abdur Rahman, Apr 12, 2012 10:44 AM (in response to Wolf-Dieter Fink)
Thanks, This resolved the startup issue.
1. About setting persistent=false: will turning persistence off guarantee a single invocation per interval in a clustered environment (multiple JBoss instances)?
2. I have also noticed receiving multiple invocations on single JVM at a specific interval. Can you guide about this too? Sorry for asking in the same thread.
3. Re: TimerService startup issue in JBoss AS 6 - Wolf-Dieter Fink, Apr 12, 2012 10:51 AM (in response to Abdur Rahman)
1.
Regardless of how persistence is set, the schedule is per instance. That means if you deploy the application in a cluster with X nodes, the bean is called X times.
This is still the same as in former versions and still an issue (there are JIRA enhancements).
A naive expectation is that it is a unique application event in the cluster, but it isn't; the EJB spec does not specify the behaviour in a cluster.
2.
Can you explain a bit more, I don't understand what you meant ...
4. Re: TimerService startup issue in JBoss AS 6 - Abdur Rahman, Apr 12, 2012 12:11 PM (in response to Wolf-Dieter Fink)
I mean the executeSomeMethod(Timer timer) is invoked multiple times after every 30 seconds. Thanks.
5. Re: TimerService startup issue in JBoss AS 6 - Wolf-Dieter Fink, Apr 12, 2012 12:20 PM (in response to Abdur Rahman)
Oh, ok
that smells like a solved bug (in AS7) with persistence=true.
You should stop the server and remove all entries from the timers database or even drop the TIMERS tables.
In this post we will consume WCF SOAP Service in C#/XAML based Metro Application. This is level 100 post showing basic steps to create simple WCF Service and consume that in Metro Application.
Very first let us create a WCF Service using VS2012. Form File menu create a new project by choosing WCF Service Application project template from WCF tab.
Remove all the default code from IService1.cs and replace it with the following code. The service contract below returns a simple greeting message: the client passes a name as a string, and the service returns a greeting message as a string.
IService1.cs
using System.ServiceModel;

namespace ServiceToConsumeinMetro
{
    [ServiceContract]
    public interface IService1
    {
        [OperationContract]
        string GreetingMessage(string name);
    }
}
The service is implemented as follows. It concatenates two strings and returns the result.
Service1.svc.cs
namespace ServiceToConsumeinMetro
{
    public class Service1 : IService1
    {
        public string GreetingMessage(string name)
        {
            return "Welcome to Metro Word " + name;
        }
    }
}
We will go with the default binding and will not configure anything in Web.config. Leave Web.config as it is and press F5 to run and host the created WCF service on the local server. If it runs successfully you should see the following output in your browser.
Now let us create a Metro Application to consume this service. From File menu in VS 2012 choose Blank App project template from Windows Metro Style tab.
Next let us design the page. On the page we will put one TextBox, one Button and one TextBlock. The user will enter a name in the textbox, and on the click event of the button the service will be called. The output returned from the service will be displayed in the textblock. The XAML is as follows:
<Page
    x:Class="App1.MainPage"
    xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation"
    xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml">
    <Grid Background="{StaticResource ApplicationPageBackgroundThemeBrush}">
        <Grid.RowDefinitions>
            <RowDefinition Height="200" />
            <RowDefinition Height="200" />
            <RowDefinition Height="200" />
        </Grid.RowDefinitions>
        <TextBox x:Name="txtName" Grid.Row="0" />
        <Button x:Name="btnCallService" Grid.Row="1" Content="Call Service" Click="btnCallService_Click" />
        <TextBlock x:Name="txtResult" Grid.Row="2" />
    </Grid>
</Page>
Next we need to add Service Reference in the Metro project. For that right click on Metro Application project Reference tab and select Add Service Reference
Add the reference as follows. My Metro application name is App1. If your Metro application name is XYZ then you will be adding XYZ.ServiceReference1, provided you have not changed the reference name while adding it.
In the Metro world everything is asynchronous, so we need to make an async call to the service. When you add the reference, Visual Studio automatically creates the async functions for you.
We can call this function from service as following
Notice that we put the await keyword before the call to the service. Since the service call is awaited, the function in which we call the service must be marked async. On the click event of the button the service can be called as follows:
Combining the discussion above, the code-behind will look like the following:
I am sorry for the poor UI experience, but making the UI immersive was not the purpose of this post. I hope you now know the basic steps to consume a WCF SOAP service in a Metro application.
I expected it to be possible to apply alignas/_Alignas to an entire struct declaration, like this:
#include <stddef.h>
#include <stdalign.h>
struct alignas(max_align_t) S {
int field;
};
struct S s = { 0 };
test.c:4:8: error: expected ‘{’ before ‘_Alignas’
struct alignas(max_align_t) S {
^
test.c:4:1: error: declaration of anonymous struct must be a definition
struct alignas(max_align_t) S {
^
The equivalent declaration using GCC's attribute syntax is accepted:

struct __attribute__((aligned(__alignof__(max_align_t)))) S {
    int field;
};

I also tried the other plausible placements of
alignas, without success:

alignas(max_align_t) struct S { ... };
struct S alignas(max_align_t) { ... };
struct S { ... } alignas(max_align_t);
C11 is not very clear on these things, but a consensus has emerged about how this is to be interpreted, and C17 will have some of this clarified. The idea behind not allowing types to be aligned is that there should never be different alignment requirements for compatible types between compilation units. If you want to force the alignment of a
struct type, you'd have to impose an alignment on the first member. By that you'd create an incompatible type.
The start of the "Constraint" section as voted by the committee reads:.
QuickObserver 2.0
A quick way to enable observable behavior on any object.
Why Should I Use This?
If you are looking for a way to decouple the logic of your app from the front end, this is a good way to help. It allows classes to be loosely coupled and lets information pass quickly in both directions, either from the View Controller up to the Logic Controller, or from the Logic Controller back down to the View Controller. This also easily allows multiple related view controllers to use the same logic controller.
Usage
Using the observer is easy; the following is an example observable object.
import QuickObserver

class Controller: QuickObservable {
    var observer = QuickObserver<Actions, Errors>()

    enum Actions {
        case action
    }

    enum Errors: Error {
        case error
    }
}
The above class Controller can now be observed, and issue the actions or errors described in the class.
Reporting A Change
Any time you need to alert observing objects that something has changed, you can simply call report(action: Actions) on the observer, as in the following example.
extension Controller {
    func performAnAction() {
        // Some Logic
        observer.report(.action)
    }
}
Once observer.report(.action) is called, it'll alert every observer that it needs to act on the change.
Adding An Observer
There are two types of observer. A repeat observer gets updates until either the observable object or the observer itself no longer exists. The second type is a one-off observer that gets a single update and is then removed from future updates. Below are examples of each, using the above Controller class.
Repeat Observer
Below is a view controller that can continue to receive updates from the Controller object. In the closure passed to the observable object, you can see that it returns a reference to the passed-in observer. In this case that's the View Controller itself. The this variable allows you to access the ViewController without having to worry about retaining the reference.
import UIKit

class ViewController: UIViewController {
    var controller = Controller()

    override func viewDidLoad() {
        super.viewDidLoad()
        controller.add(self) { (this, result) in
            switch result {
            case .success(let action):
                this.handle(action)
            case .failure(let error):
                this.handle(error)
            }
        }
    }

    func handle(_ action: Controller.Actions) {
        switch action {
        case .action:
            break // Do Some Work Here
        }
    }

    func handle(_ error: Controller.Errors) {
        switch error {
        case .error:
            break // Handle Error Here
        }
    }
}
Single Observer
Below is a view controller that receives a single update from the Controller object. Once the closure is called, it is released and never called again.
class ViewController: UIViewController {
    var controller = Controller()

    override func viewDidLoad() {
        super.viewDidLoad()
        controller.add { [weak self] (result) in
            switch result {
            case .success(let action):
                self?.handle(action)
            case .failure(let error):
                self?.handle(error)
            }
        }
    }

    func handle(_ action: Controller.Actions) {
        switch action {
        case .action:
            break // Do Some Work Here
        }
    }

    func handle(_ error: Controller.Errors) {
        switch error {
        case .error:
            break // Handle Error Here
        }
    }
}
Installation
Cocoapods
If you already have a podfile, simply add pod 'QuickObserver', '~> 2.0.0' to it and run pod install.
If you haven't set up cocoapods in your project and need help, refer to Using Pods. Make sure to add pod 'QuickObserver', '~> 2.0.0' to your newly created pod file.
Manual
To manually install the files, simply copy everything from the QuickObserver directory into your project.
Latest podspec
{
  "name": "QuickObserver",
  "version": "2.1.1",
  "summary": "A quick way to enable observable behavior on any object.",
  "description": "This library enable you to quickly add observers to your project.\nWith a little adoption you can make it so any object can report on changes of state, or issue instructions to follower objects. The objects do not hold strong refrences to observing objects, and do not require the use of tokens.",
  "homepage": "",
  "documentation_url": "",
  "license": { "type": "MIT", "file": "LICENSE" },
  "authors": { "Timothy Rascher": "[email protected]" },
  "platforms": { "ios": "10.0" },
  "swift_version": "5.0",
  "source": { "git": "", "branch": "Cocoapods/2.1.1", "tag": "Cocoapods/2.1.1" },
  "source_files": "QuickObserver/**/*.{swift}"
}
Wed, 10 Apr 2019 10:13:30 +0000
https://tryexcept.com/articles/cocoapod/quickobserver
|
Problem Statement
In the Count Possible Triangles problem, we are given an array of n positive integers. Find the number of triangles that can be formed using three different elements of the array as the sides of a triangle.
Note: the triangle condition is that the sum of any two sides of a triangle must be greater than the third side.
Example
Input
arr[] = {2, 3, 4, 5, 6, 7}
Output
13
Explanation
All the possible triplets are:
{2, 3, 4}, {2, 4, 5}, {2, 5, 6},{2, 6, 7}, {3, 4, 5}, {3, 4, 6},{3, 5, 6}, {3, 5, 7}, {3, 6, 7}, {4, 5, 6}, {4, 5, 7}, {4, 6, 7}, {5, 6, 7}
Algorithm for Count Possible Triangles
Now that we understand the problem statement clearly, let's not spend too much time on theory and move directly to the algorithm used for finding the number of triangles in the array.
a. Sort the given array in increasing order.
b. Take three positions i, j, and k
- i for the first element. i runs from 0 to n - 3.
- j for the second element. j runs from i + 1 to n - 2.
- k for the third element. k runs from i + 2 to n - 1.
c. For the current i and j, move k to find the rightmost element that is smaller than the sum of arr[i] and arr[j].
d. k - j - 1 is the count of triangles for the current i and j.
e. Add it to the count and advance j (and eventually i).
Implementation
C++ Program for Count Possible Triangles
#include <bits/stdc++.h>
using namespace std;

int TrianglesCount(int array[], int n)
{
    // std::sort replaces the original qsort call, whose comparator returned
    // a boolean rather than the negative/zero/positive value qsort requires.
    sort(array, array + n);
    int count = 0;
    for (int i = 0; i < n - 2; ++i)
    {
        int k = i + 2;
        for (int j = i + 1; j < n; ++j)
        {
            while (k < n && array[i] + array[j] > array[k])
                k = k + 1;
            if (k > j)
                count = count + k - j - 1;
        }
    }
    return count;
}

// Main function
int main()
{
    int n;
    cin >> n;
    vector<int> a(n);
    for (int i = 0; i < n; i++)
    {
        cin >> a[i];
    }
    cout << "Total number of Triangles can be formed is = " << TrianglesCount(a.data(), n);
    return 0;
}
Java Program for Count Possible Triangles
import java.util.Arrays;
import java.util.Scanner;

class sum {
    public static int TrianglesCount(int arr[], int n) {
        Arrays.sort(arr);
        int count = 0;
        for (int i = 0; i < n - 2; ++i) {
            int k = i + 2;
            for (int j = i + 1; j < n; ++j) {
                while (k < n && arr[i] + arr[j] > arr[k])
                    ++k;
                if (k > j)
                    count += k - j - 1;
            }
        }
        return count;
    }

    public static void main(String[] args) {
        Scanner sr = new Scanner(System.in);
        int n = sr.nextInt();
        int arr[] = new int[n];
        for (int i = 0; i < n; i++) {
            arr[i] = sr.nextInt();
        }
        int ans = TrianglesCount(arr, n);
        System.out.println("Total number of Triangles can be formed is = " + ans);
    }
}
5 2 3 4 5 6
Total number of Triangles can be formed is = 7
Complexity Analysis
Time Complexity
O(n^2), where n is the number of elements present in the given array. For each i, the pointers j and k together sweep the array at most once, so the inner work is linear per i.
Space Complexity
O(1), because we only create a few variables, which leads us to constant space complexity.
https://www.tutorialcup.com/interview/array/count-possible-triangles.htm
|
How do I remove the first word from a string?
Let's say I have
string sentence{"Hello how are you."};
And I want sentence to contain "how are you." without "Hello". How would I do it?
I tried to do something like:
stringstream ss(sentence);
string junkWord;
ss >> junkWord; // to get rid of first word
But when I did this:
cout<<sentence;//still prints out "Hello how are you"
Pretty obvious that stringstream doesn't change the actual string. I also tried to use strtok, but it doesn't work with string.
str=str.substr(str.find_first_of(" \t")+1);
Tested:
string sentence="Hello how are you.";
cout<<"Before:"<<sentence<<endl;
sentence=sentence.substr(sentence.find_first_of(" \t")+1);
cout<<"After:"<<sentence<<endl;
Execution:
> ./a.out Before:Hello how are you. After:how are you.
It is assumed that the line does not start with whitespace; if it does, this won't work.
find_first_of("<list of characters>"): the list of characters in our case is a space and a tab. This searches for the first occurrence of any of those characters and returns its position. Adding +1 then moves one character past it, so the position points to the second word of the line.
substr(pos) extracts the substring starting from that position up to the last character of the string.
Try to run
int main()
{
    std::string sentence{"Hello how are you."};
    std::string::size_type n = 0;
    n = sentence.find_first_not_of( " \t", n );
    n = sentence.find_first_of( " \t", n );
    sentence.erase( 0, sentence.find_first_not_of( " \t", n ) );
    std::cout << '\"' << sentence << "\"\n";
    return 0;
}
Output
"how are you."
There are many ways to do this. I guess I would go with this:
int main()
{
    std::string sentence{"Hello how are you."};

    // First, find the index for the first space:
    auto first_space = sentence.find(' ');

    // The part of the string we want to keep
    // starts at the index after the space:
    auto second_word = first_space + 1;

    // If you want to write it out directly, write the part of the string
    // that starts at the second word and lasts until the end of the string:
    std::cout.write(sentence.data() + second_word,
                    sentence.length() - second_word);
    std::cout << std::endl;

    // Or, if you want a string object, make a copy from the start of the
    // second word. substr copies until the end of the string when you give
    // it only one argument, like here:
    std::string rest{sentence.substr(second_word)};
    std::cout << rest << std::endl;
}
Of course, unless you have a really good reason not to, you should check that first_space != std::string::npos; a value of npos would mean the space was not found. For clarity, my sample code does not include that validation :)
You can, for example, take the remaining substring
string sentence{"Hello how are you."};
stringstream ss{sentence};
string junkWord;
ss >> junkWord;
cout << sentence.substr(junkWord.length() + 1); // string::substr
However, it also depends on what you want to do next.
One liner:
std::string subStr = sentence.substr(sentence.find_first_not_of(" \t\r\n", sentence.find_first_of(" \t\r\n", sentence.find_first_not_of(" \t\r\n"))));
working example:
int main()
{
    std::string sentence{ "Hello how are you." };
    char whiteSpaces[] = " \t\r\n";
    std::string subStr = sentence.substr(sentence.find_first_not_of(whiteSpaces,
        sentence.find_first_of(whiteSpaces, sentence.find_first_not_of(whiteSpaces))));
    std::cout << subStr;
    std::cin.ignore();
}
Here's how to use stringstream to extract the junk word, ignoring any space before or after (using std::ws), then get the rest of the sentence with robust error handling:
std::string sentence{"Hello how are you."};
std::stringstream ss{sentence};
std::string junkWord;
if (ss >> junkWord >> std::ws && std::getline(ss, sentence, '\0'))
    std::cout << sentence << '\n';
else
    std::cerr << "the sentence didn't contain ANY words at all\n";
See how it works on ideone here ....
#include <iostream>
#include <sstream>
#include <string>
using namespace std;

int main()
{
    string testString = "Hello how are you.";
    istringstream iss(testString); // note istringstream NOT sstringstream
    // note: operator>> skips whitespace, so c actually receives the first
    // character of the second word; it is unused below.
    char c;
    string firstWord;
    iss >> firstWord >> c; // read the first word
    cout << "The first word in \"" << testString << "\" is \"" << firstWord << "\"" << endl;
    cout << "The rest of the words is \"" << testString.substr(firstWord.length() + 1) << "\"" << endl;
    return 0;
}
output
The first word in "Hello how are you." is "Hello" The rest of the words is "how are you."
https://daily-blog.netlify.app/questions/2167791/index.html
|
Revision history for Perl extension XML::LibXML 2.0128 2016-07-24 - Hopefully add the .pod files again as they were missing from 2.0127. - - Thanks to Paul Howarth for the report. - This was caused by ExtUtils::Manifest just warning that the files referenced in the "MANIFEST" file were not present and still continuing to prepare the archive as usual. A "do-what-I-don't-want-to" thing. 2.0127 2016-07-22 - Make sure t/release-kwalitee.t and other tests do not run by default. - Only with AUTHOR_TESTING or RELEASE_TESTING specified. - Thanks to Lance Wicks for the pull request. - - 2.0126 2016-06-24 - Workaround RT#114638: - 2.9.4 broke XSD Schema support. - - - - - Thanks to Paul for the report and to RURBAN for a pull-req. - Add t/release-kwalitee.t for testing CPANTS Kwalitee. 2.0125 2016-05-30 - Moved the repository from Mercurial and BitBucket to Git and GitHub: - - This was done to better encourage contributions to XML::LiBXML and to be able to use the better Continuous Integration options that are available for GitHub projects. 2.0124 2016-02-27 - Fix XML::LibXML::Text->attributes() to return an empty list in list context. - - Thanks to Rob Dixon for the report. 2.0123 2015-12-06 - Get rid of an undef-warning in XML::LibXML::Reader . - - Thanks to Rich for the report and testcase. - Apply patch from Debian for rewording the documentation. - - Some extra rewording has been done by SHLOMIF. - Thanks to Gregor Herrman and the Debian Team 2.0122 2015-09-01 - Enable the memory test on cygwin as well as Linux. - - Thanks to for the report. - Fix a typo in createElementNS - - Thanks to Rich for the report. 2.0121 2015-05-03 - Mention CVE-2015-3451 and related links in the Changes (= this file) entry for 2.0119. - Thanks to Tilmann Haak for pointing it out. 2.0120 2015-05-01 - Replace the test for the previous change with a more meaningful one. - Change was to preserve unset options after a _clone() call. 
- - Thanks to Salvatore Bonaccorso from Debian for the report and for a proposed fix (which was further refined by Shlomi Fish). 2.0119 2015-04-23 - SECURITY: Preserve unset options after a _clone() call (e.g: in load_xml()). - This caused expand_entities(0) to not be preserved/etc. - This is a security problem which was assigned the CVE number of CVE-2015-3451 . - - - Thanks to Tilmann Haak from xing.com for the report. 2.0118 2015-02-05 - Add $Config{incpath} to the include paths on Win32. - Fixes - Thanks to Marek for the report and propsed fix. 2.0117 2014-10-26 - Support libxml2 builds with disabled xmlReader - Makefile.PL : don't require a recentish ExtUtils::MakeMaker. - - Thanks to Slaven Rezic for the report. - Fix broken t/02parse.t with non-English locale with recent perls. - - Thanks to Slaven Rezic for the report. 2.0116 2014-04-12 - t/cpan-changes.t : minimum version of Test::CPAN::Changes. - This is to avoid test failures such as: - 2.0115 2014-04-03 - Fix double free when calling $node->addSibling with text nodes. - - Thanks to Jeff Trout for the report. 2.0114 2014-04-03 - Fix memory leaks and segfaults related to removal and insertion of DTD nodes. - - Fix memory leak in $node->removeChildNodes 2.0113 2014-03-14 - Fix test failures with older libxml2 versions. - - Thanks to Nick Wellnhofer for the patch. - Thanks to the CPAN Testers for reporting this issue. 2.0112 2014-03-13 - Fix segfaults when accessing attributes of DTD nodes - - Thanks to Ralph Merridew for the report. - Make $schema->validate work with elements. This uses xmlSchemaValidateOneElement under the hood. - - Thanks to Jeremy Marshall for the report. - Fix . - Thanks to Nick Wellnhofer for the report and test. - Apply patch to build with MSVC on Windows. - - Thanks to Nick Wellnhofer for the investigation and the patch. 2.0111 2014-03-05 - Skip t/40reader_mem_error.t with libxml2 < 2.7.4 The failure is probably due to a known double-free bug. 
- - - Thanks to Nick Wellnhofer for the pull request. - Die if a file handle with an encoding layer returns more bytes than requested in parse_fh. - - Make insertData, deleteData, replaceData work correctly with UTF-8 strings. - Fix substringData - - Fix "Threads still failing?" Bug report. - - Thanks to Daniel for the bug report and a test case, and to YOREEK for the patch. 2.0110 2014-02-01 - Add "use strict;" and "use warnings;" to all modules (CPANTS). - MIN_PERL_VERSION (CPANTS). - Add a LICENSE section to the POD (CPANTS). 2.0109 2014-01-31 - Fix for requiring XML::LibXML inside two loops in perl-5.19.6 and up. - - Thanks to Father Chrysostomos for the investigation, the test case, and the fix. - There are other ways to reproduce the bug, but the tests tests for a require inside two loops. 2.0108 2013-12-17 - Replace local $^W with << no warnings 'portable'; >> in t/15nodelist.t - Should fix - Thanks to "pagenyon" for the report. - Fix hash key typo in SAX/Builder.pm - "LocalName" was mis-capitalised. - - Thanks to Thomas Berger for the report and for a reproducing testcase. - Convert from "use base" to the more modern "use parent". 2.0107 2013-10-31 - Add a unique_key method for namespace objects. - - Thanks to garfieldnate for the pull request. - Grammar fixes in the documentation. - - Thanks to Gregor Herrman and the Debian Team 2.0106 2013-09-17 - Import croak from "use Carp;" to fix a missing croak definition. - - Update Devel::CheckLib under "./inc" to 1.01 : - Should fix 2.0105 2013-09-07 - Pull some commits from Jason Mash (JRMASH) to add convenience methods to the XML::LibXML::NodeList module. - New method 'to_literal_delimited($separator)' - New method 'to_literal_list()' - Fix t/35huge_mode.t on libxml2 versions less than 2.7.0. - Fixes - Thanks to Yuriy / YOREEK for the patch. - Add toStringC14N_v1_1() to XML::LibXML::Node. - Fixes - Thanks to Ulrich for the report and for a patch of sorts. 
2.0104 2013-08-30 - Fix - Use quoted version number in the SYNOPSIS. - Thanks to Philipp Gortan for the report. - Apply a patch from Yuriy / YOREEK for test failures with a directory component that contains whitespace. - 2.0103 2013-08-22 - Apply patch from Yuriy / YOREEK for test failures in t/40reader.t: - - Changed the variable name to start with an underscore for internal use. 2.0102 2013-08-19 - for the report and to YOREEK for the patch. - Apply fix for - "building on RHEL-5-64 fails" - Thanks to mathias@koerber.org for the report, SREZIC@cpan.org and d.thomas@its.uq.edu.au for taking part and Yuriy for the patch. 2.0101 2013-08-15 - Fixed . - "HTML doctype differs for string/scalar input" - Thanks to NGLENN for the report and to Yuriy for the tests and fix. 2.0100 2013-08-14 - Added the unique_key() method to XML::LibXML::Node. - t/40reader.t: assigning from $@ to a lexical so it won't be over-ridden. - - Thanks to Douglas Christopher Wilson for the report. 2.0019 2013-07-01 - Correct typos reported in RT #86599. - - Thanks to dsteinbrunner. 2.0018 2013-05-13 - Revert previous change of minimal version of libxml2. - This change proved to be unpopular and didn't prevent the CPAN test failures. - By SHLOMIF 2.0017 2013-05-09 - Made the minimal version of libxml2 2.9.0 as previous versions were too buggy due to spuriourous CPAN test failures. - Please upgrade. - By SHLOMIF 2.0016 2013-04-13 - Don't enable XML_PARSE_HUGE by default. - Fix the previous version due to a mercurial SNAFU. 2.0015 2013-04-13 - Don't enable XML_PARSE_HUGE by default. - - Thanks to Grant McLean ( ) for the bug report and patch. 2.0014 2012-12-05 - Got 40reader_mem_error.t to not fetch the external DTDs. - - Thanks to Alexandr Ciornii (CHORNY) for the report and Slaven Rezic (SREZIC) for the analysis and a proposed fix. 2.0013 2012-12-04 - Fix a memory error (double-free) in XML::LibXML::Reader if we reached EOF and then called destroy. - discovered by Shlomi Fish. 
- Fixed by Shlomi Fish. - see t/40reader_mem_error.t 2.0012 2012-11-09 - Fix support for references to scalars with overloaded stringification magic. - - Thanks to Christian Hansen (CHANSEN) for a report, a testcase, and a patch. 2.0011 2012-11-08 - Fix crash in removeChild() when not expanding entities - - "removeChild() segfaults when not expanding entities" - Thanks to GUIDO@cpan.org for the report, for a test case (that was adapted into t/48_removeChild_crashes_rt_80395.t ) and for a patch to fix it. 2.0010 2012-11-01 - Passing debug (an undocumented option) to check_lib in Makefile.PL. - This way we get more meaningful traces on perl Makefile.PL DEBUG=1. - Thanks to MSTROUT for the report and a proposed fix. 2.0009 2012-11-01 - Fix libxml2 detection in Strawberry Perl. - Another Devel::CheckOS fallout. - Thanks to KMX for the report and for a proposed fix. The actual fix was made to be more generic considering the use-cases. - 2.0008 2012-10-22 - Fix build error when using non-standard libxml2 installation - - Thanks to L RW for the report. 2.0007 2012-10-17 - Fix for build failures on Windows with Microsoft Visual C++. - - Thanks to Desmond Daignault for the report and an initial patch. - Patch modified by Shlomi Fish 2.0006 2012-10-13 - When xml2-config returns several paths, the configuration failed. Fixed that. - - Thanks to VOVKASM for the report and fix. 2.0005 2012-10-13 - Added t/style-trailing-space.t and removed trailing space. - Add a check for the existence of included C headers (*.h) files in Makefile.PL to avoid failed compilations. - Using Devel::CheckLib. - Thanks to its maintainers! 2.0004 2012-08-07 - Add a way to specify a different compiler to be used in the "Makefile" by calling Makefile.PL with the CC environment variable set to the path to the alternate compiler. - This way we can use «CC=/usr/bin/clang perl Makefile.PL» in order to compile faster. - LibXML.pm (_clone): Fix typo in line_numbers handling. 
- Thanks to Bernhard Reutner-Fischer for the report and fix. 2.0003 2012-07-27 - Patch to a potential NULL dereference in xpath.c. - Thanks to Ville Skyttä <ville.skytta@iki.fi> and cppcheck. - Fix NodeList::item() calling a 1-indxed array reference. - See: - - Thanks to Tim Brody - Add the scripts/tag-release.pl script to tag a release using Mercurial. 2.0002 2012-07-08 - Applied spelling fixes correction patch by Ville Skyttä <ville.skytta@iki.fi>. - Thanks, Ville! 2.0001 2012-06-20 - Remove the leftover perl-libxml-libxml.h from the distribution. - - Thanks to Martin Mann for the report. 2.0000 2012-06-19 - Fix warnings that appear when compiling using the clang C compiler by default. - - Thanks to duvny for the report, and to seldon, doy and Zefram for their assistance in fixing the warnings. - Fix tests and run-time errors when Hash::FieldHash is installed by no longer using Hash::FieldHash. - - Thanks to hsk@fli-leibniz.de for reporting it, and to Father Chrysostomos ( ) and Mons Anderson for some diagnosis. 1.99 2012-05-31 - Apply a patch from Mons Anderson ( mons@cpan.org ) for fixing the overloading. - t/62overload.t - Thanks to Mons. - Fix test failures (and general functionality) on 64-bit big endian platforms - - Thanks to Gregor Herrmann and Niko Tyni from the Debian Perl group. 1.98 2012-05-13 - Make sure parse_string() and load_xml() also accept references to strings (to avoid unnecessary copying). - See: 1.97 2012-04-30 - Apply a test and a fix to correct keep_blanks having no effect on parse_balanced_chunk. - fixes - Add t/30keep_blanks.t . - Thanks to SREZIC for the report, the test and the fix. 1.96 2012-03-16 - 2012-03-06 - Got rid of a broken test (at least with recent libxml2s) in t/03doc.t : - - The problem was that the test tested for an undefined XML namespace, a behaviour which was changed in a recent libxml2 release. - Thanks to vcizek for the report. 1.94 2012-03-03 - Fix XML::LibXML::Element tests for ineqaulity with == and eq. 
- Fixes . - Thanks to Mark Overmeer for the report and for a preliminary patch to t/71overload.t . 1.93 2012-02-27 - Fix XML::LibXML::Element comparison with == and eq. - Fixes , , . - Thanks to Toby Inkster for a preliminary patch (that was modified by me) and to the various people who reported the problem. 1.92 2012-02-21 - Fix for test failure on perls < 5.10. - Fixes - Thanks to Paul for the report, and for a patch that was not accepted. 1.91 2012-02. 1.90 2012-01-08 - Pull a commit from Aaron Crange to fix compilation bugs in Devel.xs: - local variable declarations must be in the PREINIT section, not `CODE`, at least for some compiler/OS combinations. - Thanks, Aaron! 1.89 2011-12-24 -.88 2011-09-21 - 2011-08-27 - Fix t/49callbacks_returning_undef.t to not read /etc/passed which may not be valid XML. Instead, we're reading a local file while using URI::file (assuming it exists - else - we skip_all.) 1.86 2011-08-25 - Changed SvPVx_nolen() to SvPV_nolen() in LibXML.xs for better compatibility. - SvPVx_nolen() appears to be undocumented API. - Resolves - Thanks to Paul for the report. 1.85 2011-08-24 - 2011-07-23 - Fix for perl 5.8.x before 5.8.8: - "You can now use the x operator to repeat a qw// list. This used to raise a syntax error." - - fixes . - thanks to paul@city-fan.org for the report. 1.83 2011-07-23 - 2011-07-20 - 2011-07-16 - 2011-07-12 - 2011-07-08 - 2011-07-06 - 2011-07-01 - 2011-06-30 - 2011-06-24 - 2011-06-23 - 2011-06-18 - 2011-06-16 - Removed a stray file from the MANIFEST - - Warned on "kit not complete". 
- Thanks to obrien.jk 1.71 2011-06-14 - Unknown - Unknown - provide context and more accurate column number in structured errors - clarify license and copyright - support for Win32+mingw+ActiveState 1.69_1 Unknown - Unknown - fix incorrect output of getAttributeNS and possibly other methods on UTF-8 - added $node_or_xpc->exists($xpath) method - remove accidental debug output from XML::LibXML::SAX::Builder 1.68 Unknown - compilation problem fixes 1.67 Unknown - Unknown - Unknown - Unknown - fix reconciliation Unknown - Unknown - Unknown - Unknown - Unknown - Unknown - Unknown - Unknown - Unknown - 1.54 Unknown - - *NOTE:* - Version 1.54 fixes potentional buffer overflows were possible with - earlier versions of the package. 1.53 Unknown Unknown - fixed some typos (thanks to Randy Kobes and Hildo Biersma) - fixed namespace node handling - fixed empty Text Node bug - corrected the parser default values. - added some documentation 1.51 Unknown - Unknown - Unknown - Unknown - Unknown - Removed C-layer parser implementation. - Added support for prefixes in find* functions - More memory leak fixes (in custom DOMs) - Allow global callbacks 1.30 Unknown - Full PI access - New parser implementation (safer) - Callbacks API changed to be on the object, not the class - SAX uses XML::SAX now (required) - Memory leak fixes - applied a bunch of patches provided by T.J. 
Mather 1.00 Unknown - Added SAX serialisation - Added a SAX builder module - Fixed findnodes in scalar context to return a NodeList object - Added findvalue($xpath) - Added find(), which returns different things depending on the XPath - Added Boolean, Number and Literal data types 0.99 Unknown - Added support for $doc->URI getter/setter 0.98 Unknown - New have_library implementation 0.97 Unknown - Unknown - Addition of HTML parser - getOwner method added - Element->getAttributes() added - Element->getAttributesNS(URI) added - Documentation updates - Memory leak fixes - Bug Fixes 0.94 Unknown - Some DOM Level 2 cleanups - getParentNode returns XML::LibXML::Document if we get the document node 0.93 Unknown - Addition of DOM Level 2 APIs - some more segfault fixes - Document is now a Node (which makes lots of things easier) 0.92 Unknown - Many segfault and other bug fixes - More DOM API methods added 0.91 Unknown - Removed from XML::LibXSLT distribution - Added DOM API (phish) 0.01 2001-03-03 - original version; created by h2xs 1.19
https://metacpan.org/changes/distribution/XML-LibXML
|
- 22 Dec, 2014 2 commits
- Facundo Domínguez authored
Reviewed By: austin Differential Revision:
- Herbert Valerio Riedel authored
This updates those two packages to their most recent respective proper releases.
- 19 Dec, 2014 2 commits
- 10 Dec, 2014 1 commit
- Fac
- 24 Nov, 2014 1 commit
- 21 Nov, 2014 7 commits.
Test case: th/T1476b.
This commit also refactors a bunch of lexeme-oriented code into a new module Lexeme, and includes a submodule update for haddock.
- 20 Nov, 2014 3 commits
- Luite Stegeman authored
This implements #9703 from GHC Trac. Test Plan: still needs tests Reviewers: cmsaperstein, ekmett, goldfire, austin Reviewed By: goldfire, austin Subscribers: goldfire, thomie, carter, simonmar Differential Revision: GHC Trac Issues: #9703
- 12 Nov, 2014 6 commits
When splicing in a fixity declaration, look for both term-level things and type-level things. This requires some changes elsewhere in the code to allow for more flexibility when looking up Exact names, which can be assigned the wrong namespace during fixity declaration conversion. See the ticket for more info.
- 04 Nov, 2014 2 commits
- 02 Nov, 2014 7 commits
The patch includes errors for a whole host of pragmas. But, these are generated one at a time, and it doesn't seem like a good idea to add gobs of test-cases here.
This should fix #8953.
- 22 Oct, 2014 2 commits
(sorry)
- 21 Oct, 2014 1 commit
- 19 Oct, 2014 1 commit
- 07 Oct, 2014 1 commit:
- 03 Oct, 2014 2 commits
-
- 01 Oct, 2014 1 commit
Summary: Most of the changes is adaptation of old Python 2 only code. My priority was not breaking Python 2, and so I avoided bigger changes to the driver. In particular, under Python 3 the output is a str and buffering cannot be disabled. To test, define PYTHON=python3 in testsuite/mk/boilerplate.mk. Thanks to aspidites <emarshall85@gmail.com> who provided the initial patch. Test Plan: validate under 2 and 3 Reviewers: hvr, simonmar, thomie, austin Reviewed By: thomie, austin Subscribers: aspidites, thomie, simonmar, ezyang, carter Differential Revision: GHC Trac Issues: #9184
- 26 Sep, 2014 1 commit
https://gitlab.haskell.org/trac-jberryman/ghc/-/commits/c72f61c6d4dd779d61bd0ebc0b1211a84c5b9038/testsuite/tests/th
|
Ok, addition and multiplication are in the can. Those are the easy ones because the set of natural numbers is “closed” over those operations. That is, if you have any two naturals then both their sum and their product is also a natural. Not so subtraction! To see why, first we have to state what subtraction really is.
Subtraction relates three numbers: the minuend and the subtrahend, which are given, and the difference, which is the result. When we say m - s = d, what we are really saying is that d is the solution to the equation m = s + d. But for, say, m as 2 and s as 3 there is no solution d which is still a natural. The integers are closed over subtraction, but the naturals are not. We're going to need to add an error case.
As should be expected by now we start by enforcing non-null arguments at the entrypoint:
public static Natural operator -(Natural x, Natural y)
{
    if (ReferenceEquals(x, null))
        throw new ArgumentNullException("x");
    else if (ReferenceEquals(y, null))
        throw new ArgumentNullException("y");
    else
        return Subtract(x, y);
}
OK, now we can make our base cases and recursive cases for this recursive algorithm. The base cases are as follows:
- We can start with a cheap early out; if the operands are reference equal to each other then their difference is zero.
- If the subtrahend is zero then the result is the minuend.
- If the minuend is zero and the subtrahend is non zero then throw an exception.
Now we can break down the recursive cases. Again, there are numerous ways to do this. The one I chose is as follows:
First, if the least significant bit of both x and y is the same, call it head, then:

x - y = { xtail : head } - { ytail : head }
      = 2 * xtail + head - (2 * ytail + head)
      = 2 * xtail + head - 2 * ytail - head
      = 2 * (xtail - ytail)
      = { xtail - ytail : 0 }
Second, if the head of x is 1 and the head of y is 0:

x - y = { xtail : 1 } - { ytail : 0 }
      = 2 * xtail + 1 - (2 * ytail + 0)
      = 2 * xtail + 1 - 2 * ytail
      = 2 * (xtail - ytail) + 1
      = { xtail - ytail : 1 }
Third, if the head of x is 0 and the head of y is 1:

x - y = { xtail : 0 } - { ytail : 1 }
      = 2 * xtail + 0 - (2 * ytail + 1)
      = 2 * xtail - 2 - 2 * ytail - 1 + 2
      = 2 * (xtail - 1 - ytail) + 1
      = { xtail - 1 - ytail : 1 }
This is usually characterized as “borrowing”, but as I mentioned in the episode on addition, I prefer to simply reason about it algebraically rather than thinking about the conventions of pencil-and-paper arithmetic. Let’s write the code:
private static Natural Subtract(Natural x, Natural y)
{
    if (ReferenceEquals(x, y))
        return Zero;
    else if (ReferenceEquals(y, Zero))
        return x;
    else if (ReferenceEquals(x, Zero))
        throw new InvalidOperationException("Cannot subtract greater natural from lesser natural");
    else if (x.head == y.head)
        return Create(Subtract(x.tail, y.tail), ZeroBit);
    else if (x.head == OneBit)
        return Create(Subtract(x.tail, y.tail), OneBit);
    else
        return Create(Subtract(Subtract(x.tail, One), y.tail), OneBit);
}
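For readers who want to experiment with the algorithm outside the Natural class, here is a sketch of the same three recursive cases over ordinary Python ints, where n >> 1 plays the role of the tail and n & 1 the head. This is my own transliteration, not the post's code:

```python
def subtract(x, y):
    """Natural subtraction via the three bitwise cases described above."""
    if x == y:                      # the reference-equality shortcut becomes value equality
        return 0
    if y == 0:                      # subtrahend is zero: result is the minuend
        return x
    if x == 0:                      # minuend is zero, subtrahend non-zero: error
        raise ValueError("Cannot subtract greater natural from lesser natural")
    xhead, yhead = x & 1, y & 1
    xtail, ytail = x >> 1, y >> 1
    if xhead == yhead:              # { xtail - ytail : 0 }
        return subtract(xtail, ytail) << 1
    if xhead == 1:                  # { xtail - ytail : 1 }
        return (subtract(xtail, ytail) << 1) | 1
    # the "borrow" case: { xtail - 1 - ytail : 1 }
    return (subtract(subtract(xtail, 1), ytail) << 1) | 1
```

The error case surfaces exactly where the derivation predicts: subtracting 3 from 2 eventually asks for 0 - 1 on the tails and raises.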
And of course we can now add a decrement operator; see my earlier post about increment operators to understand why this is so straightforward:
public static Natural operator --(Natural x)
{
    if (ReferenceEquals(x, null))
        throw new ArgumentNullException("x");
    else if (ReferenceEquals(x, Zero))
        throw new InvalidOperationException();
    else
        return Subtract(x, One);
}
Next time on FAIC: we’ll change gears for a bit and talk about equality and inequality, and then move on to division and remainder operators.
And for those that are interested in a matching F# implementation, I updated my F# sample accordingly.
If it were me, I would have used the introduction of subtraction as a cue to extend the number class from natural to integer. No need for exceptions for well-formed calls.
Equally, when division is introduced would be time to write a rational class, although there would still need to be an exception for that pesky division-by-zero case.
I guess that my approach would not lend itself to writing a full-featured natural class. In your approach, on the other hand, I’m interested to see how your integer class will look – if it just extends natural, it would presumably still have error checking for negatives hidden away somewhere, which seems messy.
On the grip hand, a full-featured natural class is all you need for Gödel’s Theorem, which I was rabbitting on about a few posts back.
> I’m interested to see how your integer class will look – if it just extends natural, it would presumably still have error checking for negatives hidden away somewhere, which seems messy.
Well, inheritance is really a bad choice for modelling this in the first place – it doesn’t make sense for the Naturals to be a superclass of the Integers because they’re not. There are no Naturals that are not Integers, while there are Integers that are not Naturals. The only way an inheritance hierarchy makes sense is if you start with, say, the Reals at the top, and then move down to the Rationals, then the Integers, and then finally the Naturals at the bottom of the inheritance tree. But while that obeys substitutability, it’s not useful in terms of allowing the more complex implementations to build on the simpler ones, which is what we’d really be looking for when designing an abstraction. And who knows what you’re going to do when you want to start modelling groups that don’t have such a linear relationship!
That’s why the term “subclassing”/”subtyping” always bugged me: the subtype is NOT a subset (of the supertype)! In fact, it’s almost always bigger.
Like, record type { a: Float, b: String, c: Integer } is normally considered a subtype of { a: Float; b: String }, but it’s clear that there are 2^32 times more tuples of the first kind than of the second.
And the Liskov Substitution Principle, oh my. It looks good, but actually it’s pretty much useless: “Let q(x) be a property provable about objects x of type T. Then q(y) should be provable for objects y of type S where S is a subtype of T” is bloody undecidable! See Oleg Kiselyov’s “Subtyping vs. Subclassing” ()
One difficulty is that whether X is substitutable for Y depends upon how Y will be used, and I’m unaware of any framework whose type system really expresses that. Instead, both Java and .NET have the rule that a reference of type `foo` is substitutable for a class reference of type `bar` if and only if `foo` inherits or implements `bar`; that is a good rule for reference types that are used to encapsulate identity, but for structs or classes which encapsulate value other forms of substitutability may be more appropriate.
Perhaps what’s needed is a means by which an interface could specify that a particular type should be deemed to implement it, using static methods within the interface itself. For example, an `IFraction` interface with members `Numerator` and `Denominator` could specify that any type which implements `INumber` should also be deemed to implement `IFraction` as either :
Number IFraction.Numerator {
    get { return ClassHelpers_Numerator(this); } }
Number IFraction.Denominator {
    get { return ClassHelpers_Denominator(this); } }

or

Number IFraction.Numerator {
    get { return StructHelpers_Numerator(ref this); } }
Number IFraction.Denominator {
    get { return StructHelpers_Denominator(ref this); } }
where ClassHelpers_* and StructHelpers_* were static methods nested within the interface.
Such an approach would allow a Number to be substitutable for an IFraction, even though nothing in the definition for Number itself knew about that type.
I completely agree; an integer is not a special kind of natural.
Well, one can implement integers as pairs of naturals modulo the equivalence relation (a, b) ~ (c, d) iff a + d = c + b.
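That construction is easy to play with. A minimal sketch (my own, with hypothetical names) representing the integer a - b as the pair (a, b), so no subtraction on naturals is ever needed:

```python
def equivalent(p, q):
    # (a, b) ~ (c, d) iff a + d = c + b, i.e. a - b = c - d, without subtracting
    (a, b), (c, d) = p, q
    return a + d == c + b

def add(p, q):
    # (a - b) + (c - d) = (a + c) - (b + d)
    (a, b), (c, d) = p, q
    return (a + c, b + d)

def negate(p):
    # -(a - b) = b - a
    a, b = p
    return (b, a)
```

For example, 2 + (-3) computed as add((2, 0), negate((3, 0))) yields (2, 3), which is equivalent to (0, 1), i.e. -1.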
Another choice is that an integer is a natural with a sign bit. That works quite well for Church’s numerals:
…
2 = λm. λs. λz. s (s z)
1 = λm. λs. λz. s z
0 = λm. λs. λz. z
-1 = λm. λs. λz. m (s z)
-2 = λm. λs. λz. m (s (s z))
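The sign-marker encoding above can be mimicked in Python: a nonnegative numeral applies s to z some number of times, and a negative one additionally wraps the result in m. This is my own sketch of the commenter's idea, not their code:

```python
def church(n):
    # n >= 0: \m s z -> s^n z ;  n < 0: \m s z -> m (s^|n| z)
    def numeral(m, s, z):
        acc = z
        for _ in range(abs(n)):
            acc = s(acc)
        return m(acc) if n < 0 else acc
    return numeral

def to_int(numeral):
    # Interpret s as +1, z as 0, and the marker m as negation.
    return numeral(lambda v: -v, lambda v: v + 1, 0)
```

Round-tripping through to_int recovers the original integer for both signs.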
The implementation I think I’d like best for arbitrary-length integers stored as bits would be to have one special instance which represents an infinite string of zeroes, and another which represents an infinite string of ones. Adding one to the infinite string of ones yields an infinite string of zeroes; adding two infinite strings of ones together yields a zero followed by an infinite string of ones. I find it cute that the formula for computing the value of a power series 1+2+4+8+16… yields -1, a result which is also consistent with the recurrence relation that concatenating a "1" to the LSB end of a bit string should arithmetically double it and add one. Since concatenating a "1" to the LSB end of an infinite string of "1"s yields that same infinite string of "1"s, that implies that such a string must satisfy n=2n+1, and the only n for which that works is n=-1.
Wait, doesn’t that actually give you 2-adic numbers? I’ve never touched those numbers seriously, so I may be very, very wrong, but it rings the same bell.
My solution to roughly the same problem in Haskell for anyone who’s interested;
import Prelude hiding (pred,and,or,not)
data PR = Z
| S
| P Int
| C PR [PR]
| PR PR PR
deriving Show
eval :: PR -> [Integer] -> Integer
eval Z _ = 0
eval S [x] = x+1
eval (P n) xs = nth n xs
eval (C f gs) xs = eval f (map (\g -> eval g xs) gs)
eval (PR g h) (0:xs) = eval g xs
eval (PR g h) (x:xs) = eval h ((x-1) : eval (PR g h) ((x-1):xs) : xs)
nth _ [] = error "nth nil"
nth 0 _ = error "nth index"
nth 1 (x:_) = x
nth n (_:xs) = nth (n-1) xs
one = C S [Z]
plus = PR (P 1) (C S [P 2])
pred = PR Z (P 1)
modus = C modus' [P 2, P 1]
modus' = PR (P 1) (C pred [P 2])
Actually this is quite a bit different because it doesn’t define its own types and isn’t working with binary representations, but the recursive logic is similar.
I’ll admit, even though I’ve written a very similar Haskell program, it took me a while to figure out what the heck this program does. Cliff Notes:
*PR* is a datatype for Primitive Recursive functions (). Primitive recursive functions are n-ary functions from *n* natural numbers to a single natural number. The *eval* function evaluates the primitive recursive function with its list of *n* arguments.
There are five constructors for the Primitive Recursive functions.
* Z constructs the zero function, which returns zero regardless of input
* S constructs the successor function, which increments its single input argument
* P constructs a projection function, which selects the nth argument
* C constructs a function by (n-ary) composition
* PR constructs a function with “primitive recursion”
Using these 5 constructors, you can create any computationally tractable computable function (and quite a few computationally intractable functions too).
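For comparison, here is a rough Python transliteration of the evaluator above (my own sketch), using tagged tuples instead of Haskell constructors; projections are 1-based, matching the nth helper:

```python
def ev(f, xs):
    tag = f[0]
    if tag == 'Z':                      # zero function: ignores its arguments
        return 0
    if tag == 'S':                      # successor of the single argument
        return xs[0] + 1
    if tag == 'P':                      # 1-based projection
        return xs[f[1] - 1]
    if tag == 'C':                      # composition: g applied to (h1 xs) ... (hk xs)
        _, g, hs = f
        return ev(g, [ev(h, xs) for h in hs])
    if tag == 'PR':                     # primitive recursion on the first argument
        _, g, h = f
        x, rest = xs[0], xs[1:]
        if x == 0:
            return ev(g, rest)
        return ev(h, [x - 1, ev(f, [x - 1] + rest)] + rest)

# plus = PR (P 1) (C S [P 2]) from the comment above
plus = ('PR', ('P', 1), ('C', ('S',), [('P', 2)]))
```

Evaluating plus on [3, 4] unwinds the recursion three times, applying the successor at each step.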
I’ll just take you back to the past. Something called vbsomething (if you can remember less esoteric things than what you do now).
I add scripting to everything I do. It’s the easiest thing to do (thanks). I didn’t realise everyone thought it was hard to run a script in an exe.
Details
Description
This currently doesn't work because Player depends on Position, but it should:
$ cat position.avsc
{"type": "enum",
 "name": "Position",
 "namespace": "avro.examples.baseball",
 "symbols": ["P", "C", "B1", "B2", "B3", "SS", "LF", "CF", "RF", "DH"]
}
$ cat player.avsc
{"type": "record",
 "name": "Player",
 "namespace": "avro.examples.baseball",
 "fields": [
   {"name": "number", "type": "int"},
   {"name": "first_name", "type": "string"},
   {"name": "last_name", "type": "string"},
   {"name": "position", "type": {"type": "array", "items": "avro.examples.baseball.Position"}}
 ]
}
$ cat baseball.avdl
@namespace("avro.examples.baseball")
protocol Baseball {
  import schema "position.avsc";
  import schema "player.avsc";
}
$ java -jar avro-tools-1.5.1.jar idl baseball.avdl baseball.avpr
Thanks Doug. I've verified that the idl tool now generates a protocol file. I'm unable to parse this using the Schema.parse(File file) method though. Is that supposed to work, or am I doing it wrong?
I've also verified that this now works:
Schema.Parser parser = new Schema.Parser();
parser.parse(new File("position.avsc"));
Schema playerSchema = parser.parse(new File("player.avsc"));
On the email list we've been discussing alternate APIs (). Something like this:
public Schema Schema.parse(File[] files);
public Schema Schema.parse(File[] files, Map<Name, Schema> context);
I propose morphing these two approaches to something that could be used like this:
Schema.Parser parser = new Schema.Parser();
parser.parse(new File("position.avsc"));
parser.parse(new File("player.avsc"));
Schema schema = parser.getSchemaByName("Player");
// or alternatively you can pass multiple files to the parse method at once
parser.parse(mySchemaFiles);
Thoughts?
I believe the former will work with the current patch: sequential calls using the same parser will accumulate names in the parser. The latter would be an easy addition to the patch, if desired.
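The accumulate-names behavior being discussed can be illustrated with a toy parser (hypothetical, not Avro's actual API): each call to parse adds any named type to a shared map, so later schemas can refer to earlier ones by fully-qualified name:

```python
import json

class ToyParser:
    """Toy sketch of the accumulate-names idea; not Avro's real Schema.Parser."""

    def __init__(self):
        self.types = {}  # fully-qualified name -> parsed schema dict

    def parse(self, text):
        schema = json.loads(text)
        if isinstance(schema, dict) and 'name' in schema:
            ns = schema.get('namespace')
            fq = ns + '.' + schema['name'] if ns else schema['name']
            self.types[fq] = schema
        return schema

    def get_type(self, fq_name):
        # Lookup requires the fully-qualified name, as discussed above.
        return self.types[fq_name]
```

As in the real API, files must still be parsed in reverse-dependency order; the map only lets later parses resolve names seen earlier.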
To parse a protocol file use Protocol.parse(File). We should perhaps convert that to a Schema.Parser-style API too.
Note that the 'getSchemaByName' method you use in the first case above does not exist. Instead you can either use the value of parser.parse(). If we did want to add a 'getSchemaByName' method it would need to accept a fully-qualified name, e.g., parser.getSchemaByName("org.apache.avro.examples.Player"). Or we could change parser.getTypes() to instead return Map<String,Schema> so you could call parser.getTypes().get("org.apache.avro.examples.Player").
Yes, the former parsing approach will work with the current patch. We'd just need to add the convenience method to do
Schema schema = parser.getSchemaByName("Player");
(Or maybe getType aligns more with the existing API?) This would allow the caller to load a number of files and then fetch the schema(s) they're interested in, without necessarily knowing which file contained it. They'd still need to be sure to parse in reverse-dependent order.
Thanks for the pointer re Protocol parsing, that worked.
My last comment was sent before seeing yours, FYI. Sure, we can either expose the Map or a method to get a value from the map. Either way works. Exposing the Map might be better since it provides more flexibility to the caller.
Here's a version where getTypes() returns Map<String,Schema>. This implementation is not very efficient since it creates new Map entries each time it's called. This could be optimized, either by defining and returning a Map implementation wrapping the Names instance, or by converting Names to be a Map<String,Schema> and returning it directly. But I'm not sure that's worth the effort now, since this doesn't seem a likely performance bottleneck.
I'm beginning to question the motivation for this a bit. The example player.avsc file is not a well-formed JSON schema, since it's not standalone, but rather depends on another .avsc file. To date, we've only consumed or produced standalone JSON, not fragmentary. JSON is meant to be the low-level schema language, with IDL as the higher level language, better supporting manually maintained schemas.
So, before we commit this, I'd like to understand the use case a bit more. Is there a reason one couldn't define the two schemas in a single .avsc, .avpr file, or multiple .avdl files?
We could I suppose change the Schema parser to, when it encounters an undefined name, look for a file defining it, much like the Java compiler looks for a .java file on the CLASSPATH, but I feel such features should be confined to IDL and that JSON should be primarily used for self-contained schemas. Standalone JSON schemas are what we save with files and exchange in RPC handshakes. Currently the API will not permit one to write a non-standalone Schema so I'm a bit reluctant to permit reading them.
Do others have thoughts on this?
The last patch works well, thanks. Attached is patch #4 which also has a parse(File[]) method, if we choose to go this route.
The value of being able to parse multiple JSON files into a single schema is that it allows for a more modular approach when creating and managing schema definitions. Without support for this at the JSON level, users will resort to copy and pasting common schemas into much larger and less manageable schema definitions.
It seems like a de facto best practice is emerging to concat multiple schemas together into a union as a way to partially get around repeatedly in-lining JSON child schemas. This approach gets the job done, but has manageability problems.
This problem can be solved at the IDL level, but that provides yet another level of abstraction, a new language syntax and a compilation step to complicate what would otherwise be a very simple use case.
Regarding consuming/producing fragmentary JSON, with the proposed approach producing JSON fragments will still not occur, since the in-memory schema is always complete, due to the reverse-dependency ordering that is required at parse time (not unlike parsing a union). Also, parsing a JSON fragment will still fail without parsing its dependencies first, so it's not loosening the contract of how parsing is handled in any way.
I'd also like to hear others thoughts on this though.
Two things block me from using AvroIDL:
- I started maintaining and using Avro schemas before AvroIDL existed, so it is natural for developers on my project to use the JSON form, not AvroIDL.
- AvroIDL only supports protocols. I only use schemas.
The former can be overcome, AvroIDL should be easier to use anyway.
The latter needs work, and I haven't looked at what it would take to extend AvroIDL to work with schemas.
There's not much overhead to using IDL for schemas: just use an idl file without any messages:
@namespace("foo.bar")
protocol MyProtocol {
  import idl "Bar.avdl";
  record Foo {
    ...
  }
}
If you're generating specific code, then Foo will have the same name as if you defined it in a .avsc file, so no changes to clients should be required.
The only unnecessary part is the protocol name and the fact that an interface is generated with no methods that you'll never use. We could change that, e.g., by changing the idl parser to accept files of the form:
@namespace("foo.bar")
record Foo {
  import idl "Bar.avdl";
  ...
}
The return type for the IDL parser could then be either a schema or a protocol, and clients would need to do different things depending.
It is possible to generate the Protocol at run time directly from an IDL file. That way the Protocol can be generated and consumed without the extra step to pre-generate an avpr file. Something like this works:

Idl parser = new Idl(idlFile);
Protocol p = parser.CompilationUnit();
The downside is that now the avro-compiler jar would be needed at run time. I agree though that it would be nice to not have to create and use Protocols if you only need Schemas.
@Doug if all your schema objects are defined in your avsc files, then defining a dummy "record Foo" would be needed, right? Could you instead do something like this to load a collection of Schemas instead?:
@namespace("foo.bar")
schema MySchema {
  import schema "Bar.avsc";
  ...
}
Granted, now there's a dummy MySchema thing, but it's not a record.
Bill, I don't understand the need for the dummy. Why would all your schemas be defined in avsc files? Why wouldn't you use avdl files to define them?
We'd like to continue to use avsc files because they're easier to read and author and our developers are already familiar with them. They're also not experimental and changing like the IDL language. So we'd just use IDL as a mechanism to combine fragmented avsc files, like the initial problem statement:
@namespace("avro.examples.baseball")
protocol Baseball {
  import schema "position.avsc";
  import schema "player.avsc";
}
We should perhaps remove the 'experimental' declaration from the IDL documentation. I don't think we should make incompatible changes to the syntax of IDL, so we might as well declare it stable. But that's another issue...
Bill, will you use the parse(File[]) method, or would you instead use an IDL file? It's not yet clear to me that method is so common a pattern that it warrants adding here. If we think it's a common pattern then we should at add some javadoc, otherwise we should remove it. Other than that, I'm willing to commit this.
If I could I'd prefer to just use the parse(File[]) method approach without having to deal with the IDL files.
Here's a version that includes javadoc for the parse(File[]) method and improves the javadoc for Parser in general.
Any objections to committing this?
Doug if you're hesitant about the parse(File[]) method, we can always leave that one out. Calling parse(File) repeatedly is not a big deal for the client.
The most recent patch has:
+  /** Parse schemas from the provided files, in the specified order.
+   * If named, each schema is added to the names known to this parser. */
+  public Schema[] parse(File[] files) throws IOException {
+    Schema[] schemas = new Schema[files.length];
+    for (int i = 0; i < files.length; i++)
+      schemas[i] = parse(files[i]);
+    return schemas;
+  }
in the Parser class.
My slight preference would be to leave parse(File[]) out, since I can imagine many use cases that are slightly different, e.g., 'List<Schema> parse(List<File>)' or 'Schema parse(File[])' that returns the last schema, etc. All of these are just a few lines of code that I think is reasonable to leave to applications. On the other hand, if lots of applications are using the same few lines of code, then it makes sense to capture it in a utility, but I don't yet know what the common idioms are here. Meh.
+1 on leaving parse(File[]) out.
Another reason to omit it is to keep other similar APIs (i.e. Protocol) consistent and concise. No need to re-implement Array-based support everywhere else. Also I'm already thinking about contributing a parse(URL) method. Not having to support each type as both single and array signatures will keep the code from bloating.
Attaching the patch that reflects what was committed, for those playing along at home.
Here's a patch that fixes this.
It replaces the Schema.parse() methods with a more flexible Schema.Parser API. If folks think this new API is reasonable, then we should perhaps switch all of the calls to Schema.parse() to the new API.
This includes the test case you provided.
In this tutorial we will learn how to obtain video from a webcam and convert it to black and white, using OpenCV and Python. If you want to learn how to convert a single image to black and white with this approach, please check here.
The code shown below was tested on Windows 8.1, with version 4.1.2 of OpenCV. The Python version used was 3.7.2.
The code
As usual, we start our code by importing the cv2 module.
import cv2
Then we will instantiate an object of class VideoCapture. As input of the constructor of this class we need to pass a number with the identifier of the camera we want to use. In my case, since I only have one camera attached to the computer, I should use the value 0.
capture = cv2.VideoCapture(0)
After this we will start getting frames from the camera, so we can then convert them to black and white and display them in a window. We will do this inside an infinite loop that will only break if the ESC key is pressed.
while True:
    # Grab frames and convert to Black and White
    if cv2.waitKey(1) == 27:
        break
To obtain a frame from the camera, we simply need to call the read method on our VideoCapture object.
This method receives no arguments and returns a tuple with the following values:
- A Boolean value indicating if the frame was successfully obtained;
- The frame, as a numpy ndarray.
(ret, frame) = capture.read()
To be able to apply the thresholding operation that will convert the image to black and white, we first need to convert it to gray scale.
We can do this with a call to the cvtColor function, passing as first input the image and as second the color space conversion code. In our case, we want the code COLOR_BGR2GRAY since we are converting the image from BGR to gray.
As output, this function will return a new ndarray representing the image in gray scale.
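For reference, OpenCV's COLOR_BGR2GRAY conversion uses the standard luma weights Y = 0.299 R + 0.587 G + 0.114 B. A tiny pure-Python version for a single BGR pixel (my own sketch, not OpenCV's implementation, which operates on whole arrays and handles rounding internally):

```python
def bgr_to_gray(b, g, r):
    # Standard luma weights used by OpenCV's COLOR_BGR2GRAY.
    return round(0.299 * r + 0.587 * g + 0.114 * b)
```

Note the argument order: OpenCV stores pixels as BGR, which is why the conversion code is named COLOR_BGR2GRAY rather than COLOR_RGB2GRAY.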
grayFrame = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
After this we can apply the thresholding operation with a call to the threshold function. We are going to use the simplest thresholding algorithm: binary thresholding.
As already explained on this tutorial, in the binary thresholding operation, we define a threshold value. Then, for each pixel of the gray scale image, if the value of the pixel is less than or equal to the given threshold, it is set to zero. Otherwise, it is set to a user defined value (in our case, this user defined value should be 255, which corresponds to white).
So, as first input of the threshold function, we pass the gray scale frame. As second input we pass the mentioned threshold. Since we are passing a gray scale image, its pixel values vary between 0 and 255. So we pass a threshold in the middle of the scale: 127.
As third parameter, we pass the user defined value to which a pixel should be converted in case its value is greater than the threshold. As already mentioned, we will pass the value 255, which corresponds to white.
The fourth and last parameter is a constant that specifies the type of thresholding to be applied. We will pass the value THRESH_BINARY, which corresponds to the already mentioned binary thresholding.
This function returns as output a tuple. The first value of the tuple can be ignored for our use case. The second value corresponds to the resulting image, in black and white, after the binary thresholding is applied.
(thresh, blackAndWhiteFrame) = cv2.threshold(grayFrame, 127, 255, cv2.THRESH_BINARY)
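Since trying this out requires a camera, the thresholding rule itself can be checked on plain numbers. Here is a small pure-Python emulation of THRESH_BINARY (a sketch for illustration, not OpenCV's implementation):

```python
def binary_threshold(pixels, thresh=127, maxval=255):
    # THRESH_BINARY: pixels strictly above the threshold become maxval,
    # everything else (including pixels equal to the threshold) becomes 0.
    return [[maxval if p > thresh else 0 for p in row] for row in pixels]
```

Running it on a row of values straddling 127 shows that 127 itself maps to black and 128 to white.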
To finalize, we will show both frames in two distinct windows.
cv2.imshow('video bw', blackAndWhiteFrame)
cv2.imshow('video original', frame)
The full frame capture loop can be seen below.

while True:
    (ret, frame) = capture.read()
    grayFrame = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    (thresh, blackAndWhiteFrame) = cv2.threshold(grayFrame, 127, 255, cv2.THRESH_BINARY)
    cv2.imshow('video bw', blackAndWhiteFrame)
    cv2.imshow('video original', frame)
    if cv2.waitKey(1) == 27:
        break
After the loop breaks we will no longer need the camera. Thus, we should release it with a call to the release method on the VideoCapture object. We will also destroy all the windows previously opened with a call to the destroyAllWindows function.
capture.release() cv2.destroyAllWindows()
The complete code can be seen below.
import cv2

capture = cv2.VideoCapture(0)

while True:
    (ret, frame) = capture.read()
    grayFrame = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    (thresh, blackAndWhiteFrame) = cv2.threshold(grayFrame, 127, 255, cv2.THRESH_BINARY)
    cv2.imshow('video bw', blackAndWhiteFrame)
    cv2.imshow('video original', frame)
    if cv2.waitKey(1) == 27:
        break

capture.release()
cv2.destroyAllWindows()
Testing the code
To test the code, simply run it in a tool of your choice, with a web camera attached to your computer. In my case I’m using PyCharm, a Python IDE.
You should get a result like the one shown in the figure 1 below. As expected, we see both windows: one showing the original video and the other showing the video converted to black and white.
One Reply to “Python OpenCV: Converting camera video to black and white”
Nice! Thanks for sharing.
On 15/03/16 19:51, Andrew Cooper wrote:
> On 15/03/16 19:34, Konrad Rzeszutek Wilk wrote:
>> On Tue, Mar 15, 2016 at 07:24:30PM +0000, Andrew Cooper wrote:
>>> On 15/03/16 17:56, Konrad Rzeszutek Wilk wrote:
>>>> diff --git a/xen/arch/arm/traps.c b/xen/arch/arm/traps.c
>>>> index 31d2115..b62c91f 100644
>>>> --- a/xen/arch/arm/traps.c
>>>> +++ b/xen/arch/arm/traps.c
>>>> @@ -16,6 +16,7 @@
>>>> * GNU General Public License for more details.
>>>> */
>>>>
>>>> +#include <xen/bug_ex_symbols.h>
>>> how about just <xen/virtual_region.h> ? It contains more than just
>>> bugframes.
>> /me nods.
>>>> diff --git a/xen/common/bug_ex_symbols.c b/xen/common/bug_ex_symbols.c
>>>> new file mode 100644
>>>> index 0000000..77bb72b
>>>> --- /dev/null
>>>> +++ b/xen/common/bug_ex_symbols.c
>>>> @@ -0,0 +1,119 @@
>>>> +/*
>>>> + * Copyright (c) 2016 Oracle and/or its affiliates. All rights reserved.
>>>> + *
>>>> + */
>>>> +
>>>> +#include <xen/bug_ex_symbols.h>
>>>> +#include <xen/config.h>
>>>> +#include <xen/kernel.h>
>>>> +#include <xen/init.h>
>>>> +#include <xen/spinlock.h>
>>>> +
>>>> +extern char __stext[];
>>> There is no such symbol. _stext comes in via kernel.h
>> Argh.
>>
>>>> +
>>>> +struct virtual_region kernel_text = {
>>> How about just "compiled" ? This is more than just .text.
>>>
>>>> + .list = LIST_HEAD_INIT(kernel_text.list),
>>>> + .start = (unsigned long)_stext,
>>>> + .end = (unsigned long)_etext,
>>>> +#ifdef CONFIG_X86
>>>> + .ex = (struct exception_table_entry *)__start___ex_table,
>>>> + .ex_end = (struct exception_table_entry *)__stop___ex_table,
>>>> +#endif
>>>> +};
>>>> +
>>>> +/*
>>>> + * The kernel_inittext should only be used when system_state
>>>> + * is booting. Otherwise all accesses should be ignored.
>>>> + */
>>>> +static bool_t ignore_if_active(unsigned int flag, unsigned long priv)
>>>> +{
>>>> + return (system_state >= SYS_STATE_active);
>>>> +}
>>>> +
>>>> +/*
>>>> + * Becomes irrelevant when __init sections are cleared.
>>>> + */
>>>> +struct virtual_region kernel_inittext = {
>>>> + .list = LIST_HEAD_INIT(kernel_inittext.list),
>>>> + .skip = ignore_if_active,
>>>> + .start = (unsigned long)_sinittext,
>>>> + .end = (unsigned long)_einittext,
>>>> +#ifdef CONFIG_X86
>>>> + /* Even if they are __init their exception entry still gets stuck
>>>> here. */
>>>> + .ex = (struct exception_table_entry *)__start___ex_table,
>>>> + .ex_end = (struct exception_table_entry *)__stop___ex_table,
>>>> +#endif
>>>> +};
>>> This can live in .init.data and be taken off the linked list in
>>> init_done(), which performs other bits of cleanup relating to .init
>> Unfortunatly at that point of time it is SMP - so if we clean it up
>> we need to use a spin_lock.
>>
>>>> +
>>>> +/*
>>>> + * No locking. Additions are done either at startup (when there is only
>>>> + * one CPU) or when all CPUs are running without IRQs.
>>>> + *
>>>> + * Deletions are big tricky. We MUST make sure all but one CPU
>>>> + * are running cpu_relax().
>>> It should still be possible to lock this properly. We expect no
>>> contention, at which point acquiring and releasing the locks will always
>>> hit fastpaths, but it will avoid accidental corruption if something goes
>>> wrong.
>>>
>>> In each of register or deregister, take the lock, then confirm whether
>>> the current region is in a list or not, by looking at r->list. With the
>>> single virtual_region_lock held, that can safely avoid repeatedly adding
>>> the region to the region list.
>> Yeah. I don't know why I was thinking we can't. Ah, I was thinking about
>> traversing the list - and we don't want the spin_lock as this is in
>> the do_traps or other code that really really should not take any spinlocks.
>>
>> But if the adding/removing is done under a spinlock then that is OK.
>>
>> Let me do that.
> Actually, that isn't sufficient. Sorry for misleading you.
>
> You have to exclude modifications to the list against other cpus waking
> it in an exception handler, which might include NMI and MCE context.
>
> Now I think about it, going lockless here is probably a bonus, as we
> don't want to be messing around with locks in fatal contexts. In which
> case, it would be better to use a single linked list and cmpxchg to
> insert/remove elements. It generally wants to be walked forwards, and
> will only have a handful of elements, so searching forwards to delete
> will be ok.
Actually, knowing that the list is only ever walked forwards by the
exception handlers, and with some regular spinlocks around mutation,
judicious use of list_add_tail_rcu() and list_del_rcu() should suffice
(I think), and will definitely be better than handrolling a singly
linked list.
Submitted by Bridge Fibre on Thu, 08/03/2017 - 06:51
Our server running Webmin/Virtualmin has WordPress and a MySQL db running, but twice this week the site has crashed, and it turned out MySQL was stopped. On starting it again all was fine, and in the logs for both days it began '0 [Note] Giving 0 client threads a chance to die gracefully'. I've attached the full logs of those two days but am unsure as to what is causing MySQL to shut down, as it doesn't appear to be crashing.
Files:
Status:
Active
Submitted by andreychek on Thu, 08/03/2017 - 09:03 Comment #1
Howdy -- it's possible that you're running into a resource issue there.
What is the output of these commands:
free -m
dmesg | tail -30
ps auxw | grep mysql
Submitted by Bridge Fibre on Fri, 08/04/2017 - 07:05 Comment #2
HI,
Thanks for the response, please find results below:
master@ghost35:~$ free -m
              total        used        free      shared  buff/cache   available
Mem:           7983        1233        5124         112        1625        6315
Swap:          8191           0        8191
master@ghost35:~$ dmesg | tail -30
[    10.172126] Floppy drive(s): fd0 is 1.44M
[    10.686711] ppdev: user-space parallel port driver
[    11.343510] Adding 8388604k swap on /dev/mapper/ghost35--vg-swap_1. Priority:-1 extents:1 across:8388604k FS
[    11.803235] EXT4-fs (sda1): mounting ext2 file system using the ext4 subsystem
[    11.805699] EXT4-fs (sda1): mounted filesystem without journal. Opts: (null)
[    12.518792] audit: type=1400 audit(1501226164.188:2): apparmor="STATUS" operation="profile_load" profile="unconfined" name="/usr/bin/lxc-start" pid=990 comm="apparmor_parser"
[    12.529949] audit: type=1400 audit(1501226164.200:3): apparmor="STATUS" operation="profile_load" profile="unconfined" name="lxc-container-default" pid=987 comm="apparmor_parser"
[    12.529963] audit: type=1400 audit(1501226164.200:4): apparmor="STATUS" operation="profile_load" profile="unconfined" name="lxc-container-default-cgns" pid=987 comm="apparmor_parser"
[    12.529971] audit: type=1400 audit(1501226164.200:5): apparmor="STATUS" operation="profile_load" profile="unconfined" name="lxc-container-default-with-mounting" pid=987 comm="apparmor_parser"
[    12.529978] audit: type=1400 audit(1501226164.200:6): apparmor="STATUS" operation="profile_load" profile="unconfined" name="lxc-container-default-with-nesting" pid=987 comm="apparmor_parser"
[    12.552694] audit: type=1400 audit(1501226164.224:7): apparmor="STATUS" operation="profile_load" profile="unconfined" name="/usr/lib/snapd/snap-confine" pid=992 comm="apparmor_parser"
[    12.552714] audit: type=1400 audit(1501226164.224:8): apparmor="STATUS" operation="profile_load" profile="unconfined" name="/usr/lib/snapd/snap-confine//mount-namespace-capture-helper" pid=992 comm="apparmor_parser"
[    12.577201] audit: type=1400 audit(1501226164.248:9): apparmor="STATUS" operation="profile_load" profile="unconfined" name="/usr/lib/lxd/lxd-bridge-proxy" pid=991 comm="apparmor_parser"
[    12.582015] audit: type=1400 audit(1501226164.252:10): apparmor="STATUS" operation="profile_load" profile="unconfined" name="/usr/bin/freshclam" pid=989 comm="apparmor_parser"
[    12.583560] audit: type=1400 audit(1501226164.252:11): apparmor="STATUS" operation="profile_load" profile="unconfined" name="/sbin/dhclient" pid=988 comm="apparmor_parser"
[    12.940499] vmxnet3 0000:03:00.0 ens160: intr type 3, mode 0, 5 vectors allocated
[    12.943505] vmxnet3 0000:03:00.0 ens160: NIC Link is Up 10000 Mbps
[    13.087176] cgroup: new mount options do not match the existing superblock, will be ignored
[    13.192237] floppy0: no floppy controllers found
[    14.358546] NET: Registered protocol family 40
[    18.576457] ip_tables: (C) 2000-2006 Netfilter Core Team
[295796.965889] audit_printk_skb: 30 callbacks suppressed
[295796.965895] audit: type=1400 audit(1501521948.012:22): apparmor="DENIED" operation="open" profile="/usr/bin/freshclam" name="/proc/20473/status" pid=20473 comm="freshclam" requested_mask="r" denied_mask="r" fsuid=113 ouid=0
[369441.579955] audit: type=1400 audit(1501595592.458:23): apparmor="STATUS" operation="profile_replace" profile="unconfined" name="/usr/sbin/mysqld" pid=7346 comm="apparmor_parser"
[369441.676362] audit: type=1400 audit(1501595592.554:24): apparmor="DENIED" operation="open" profile="/usr/sbin/mysqld" name="/proc/7364/status" pid=7364 comm="mysqld" requested_mask="r" denied_mask="r" fsuid=0 ouid=0
[369441.676418] audit: type=1400 audit(1501595592.554:25): apparmor="DENIED" operation="open" profile="/usr/sbin/mysqld" name="/sys/devices/system/node/" pid=7364 comm="mysqld" requested_mask="r" denied_mask="r" fsuid=0 ouid=0
[369441.676500] audit: type=1400 audit(1501595592.554:26): apparmor="DENIED" operation="open" profile="/usr/sbin/mysqld" name="/proc/7364/status" pid=7364 comm="mysqld" requested_mask="r" denied_mask="r" fsuid=0 ouid=0
[370910.457467] audit: type=1400 audit(1501597061.334:27): apparmor="DENIED" operation="open" profile="/usr/sbin/mysqld" name="/proc/9240/status" pid=9240 comm="mysqld" requested_mask="r" denied_mask="r" fsuid=111 ouid=111
[370910.457531] audit: type=1400 audit(1501597061.334:28): apparmor="DENIED" operation="open" profile="/usr/sbin/mysqld" name="/sys/devices/system/node/" pid=9240 comm="mysqld" requested_mask="r" denied_mask="r" fsuid=111 ouid=0
[370910.457598] audit: type=1400 audit(1501597061.334:29): apparmor="DENIED" operation="open" profile="/usr/sbin/mysqld" name="/proc/9240/status" pid=9240 comm="mysqld" requested_mask="r" denied_mask="r" fsuid=111 ouid=111
master@ghost35:~$ ps auxw | grep mysql
mysql     9240  0.0  2.0 1316816 166480 ?      Ssl  Aug01   2:35 /usr/sbin/mysqld
master   27571  0.0  0.0   14228   1020 pts/0  S+   13:04   0:00 grep --color=auto mysql
Submitted by andreychek on Fri, 08/04/2017 - 09:16 Comment #3
Hmm, I didn't see what I was thinking I might see there (references to running out of memory).
Which may just mean the issue is elsewhere... but did you by chance reboot since the last time that problem happened?
Also, you said you included the full logs, but I just wanted to be certain -- there aren't additional log messages in addition to what you shared above (ie, logs just prior to the incident).
The only other thing, if none of the above seems to be leading anywhere, is that you did have a few unusual apparmor messages.
I've never seen that cause an issue with MySQL before, but it might be worth looking into whether it's somehow interfering. In fact we could always temporarily disable it just to see if that helps. Let us know if you'd like to go that route.
Oh, and what MySQL version are you using there, is that the standard one that comes with Ubuntu 16.04?
Submitted by metal696heart on Mon, 08/07/2017 - 00:37 Comment #4
My best bet is Apache with its default MaxClients value of 100, which leads to a memory issue. Lower MaxClients to a fraction of that, based on how much RAM you have. Monitor your RAM / swap.
Submitted by Bridge Fibre on Mon, 08/07/2017 - 08:04 Comment #5
Hi metal696heart,
Thanks for that. I can't find a MaxClients option in apache2.conf, only MaxKeepAliveRequests — am I looking in the right place? The total RAM is 8GB; what would you recommend lowering it to?
Submitted by Bridge Fibre on Mon, 08/07/2017 - 08:08 Comment #6
Hi andreychek,
My MySQL version is 14.14 Distrib 5.7.19. The error logs were empty except for the days MySQL stopped. We have another server running Virtualmin which is set up the same, and this morning we had the same issue with it, showing the same messages in the logs. What is it you mentioned disabling to test?
Submitted by metal696heart on Mon, 08/07/2017 - 10:12 Comment #7
MaxRequestWorkers was called MaxClients before version 2.3.13; the old name is still supported, per the documentation. You should read and understand that documentation, as scaling this setting depends on several factors. For example, if you use mod_php instead of FCGI, you should take PHP's RAM use at execution time into account and set the memory limit accordingly. Use a traffic and server monitoring service — this might come in handy in other situations too.
Also, you will find the setting at /etc/apache2/mods-available/mpm_prefork.conf or whatever mpm module you use.
Submitted by andreychek on Mon, 08/07/2017 - 11:33 Comment #8
Usually when we see resource issues, such as running out of memory, there are warning signs that the Linux kernel had to kill processes due to low RAM. You'll see "OOM" errors when running "dmesg".
Now, it's definitely worth making sure that the MaxClients setting is right for your server. You'd want to check how much RAM an Apache process uses, how much RAM a PHP process uses -- and then use that information to determine how many can safely run at one time on your server.
Apache sometimes defaults to allowing too many to run at once.
That said, 8GB is definitely a generous amount of RAM.
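The sizing advice above — measure per-process RAM use, then work out how many processes fit — can be sketched as quick arithmetic. The numbers below are illustrative assumptions, not measurements from this server:

```javascript
// Back-of-the-envelope sizing for Apache's MaxRequestWorkers/MaxClients.
// The per-process and reserved figures are assumptions for illustration --
// measure your own server's processes with `ps` or `top` before relying on this.
function maxRequestWorkers(totalRamMb, reservedMb, perProcessMb) {
  const available = totalRamMb - reservedMb;   // RAM left over for Apache
  return Math.floor(available / perProcessMb); // whole processes only
}

// An 8 GB box, ~2 GB assumed reserved for MySQL/OS, ~50 MB per Apache+PHP process
console.log(maxRequestWorkers(8192, 2048, 50)); // 122
```

If the calculated value is well below Apache's configured limit, the default is too generous for the machine and should be lowered.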
The other thing was some unusual apparmor errors, that are indicating some actions being denied.
Apparmor typically works very well, without interfering with day to day things like this. But since MySQL is dying off and we don't quite know why -- looking into that in addition to the RAM wouldn't be a bad idea.
Try running "service apparmor stop" to do that.
Submitted by Bridge Fibre on Thu, 08/02/2018 - 03:21 Comment #9
Pichai said that the company would be able to service 99 percent of all queries.
A leaked transcript of a Google executives meeting shows a different picture from Google's official statements.
The new advertising scandal has brought renewed attention to Google's possible return to China.
It’s always exciting when a new version of RediSearch comes out — we just released version 1.4 (yes, we skipped 1.3 to align with a new versioning methodology). This new version has two key features which add quite a bit of smarts to querying:
Let’s first take a look at spell check. Everyone knows what spell check is from a broad perspective, but let’s examine how it works in a search engine context. It’s best to think of it as a primitive that would power a "did-you-mean" feature.
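As a toy illustration of that "did-you-mean" primitive, here is a naive edit-distance suggester. This is not how RediSearch's spell check is implemented internally — it only shows the concept of suggesting the closest known term:

```javascript
// Classic Levenshtein edit distance: the minimum number of single-character
// insertions, deletions, and substitutions to turn string a into string b.
function editDistance(a, b) {
  const d = Array.from({ length: a.length + 1 }, (_, i) =>
    Array.from({ length: b.length + 1 }, (_, j) => (i === 0 ? j : j === 0 ? i : 0)));
  for (let i = 1; i <= a.length; i++) {
    for (let j = 1; j <= b.length; j++) {
      d[i][j] = Math.min(
        d[i - 1][j] + 1,                                    // deletion
        d[i][j - 1] + 1,                                    // insertion
        d[i - 1][j - 1] + (a[i - 1] === b[j - 1] ? 0 : 1)   // substitution
      );
    }
  }
  return d[a.length][b.length];
}

// Suggest the dictionary term closest to the (possibly misspelled) query term.
function didYouMean(term, dictionary) {
  return dictionary.reduce((best, word) =>
    editDistance(term, word) < editDistance(term, best) ? word : best);
}

console.log(didYouMean('redsi', ['redis', 'index', 'search'])); // redis
```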
Google workers already proved their power after objecting to a project with the US military.
Bleve is a Go search engine library, and that means that it hits a few good points with me. It is interesting, it is familiar ground, and it is in a language that I’m not too familiar with, so that is a great chance to learn some more.
I reviewed revision: 298302a511a184dbab2c401e2005c1ce9589a001
I like to start by reading from the bottom up, and in this case, the very first thing that I looked at was the storage level. Bleve uses a pluggable storage engine and currently has support for:
This is interesting, if only because I put BoltDB and Moss on my queue of projects to read.
The actual persistent format for Bleve is very well documented here. This makes it much easier to understand what is going on. The way Bleve uses the storage, it has a flat key/value store view of the world as well as needing prefix range queries. Nothing else is required. Navigating the code is a bit hard for me as someone who isn’t too familiar with Go, but the interesting things start here, in scorch.go (no idea why this is called scorch, though).
We get a batch of changes and run over them, adding an _id field to the document. So far, pretty simple to figure out. The next part is interesting:
You can see that we are running in parallel here, starting the analysis work and queuing it all up. Bleve then waits for the analysis to run. I'll dig a bit deeper into how that works in a bit. First, I want to understand how the whole batch concept works.
So, that tells us some interesting things. First, even though there is the concept of a store, there is also this idea of a segment. I’m familiar with this from Lucene, but there, it is tied very closely to the on-disk format. Before looking at the analysis, let’s look at this concept of segments.
The "zap" package, in this context, seems to refer to the encoding used to store the analysis results. It looks like it runs over all the results of the batch and writes them into a single binary value. This is very similar to the way Lucene works so far, although I'm still confused about the key/value store. What happens is that after the segment is created, it is sent to prepareSegment. This eventually sends it to a Go channel that is used in the Scorch.mainLoop function (which runs as a separate goroutine).
Here is the relevant code:
The last bit is the one handling the segment introduction, whatever that is. Note that this seems to be strongly related to the store, so hopefully we'll see why it shows up here. What seems to be going on is that there is a lot of concurrency in the process: the code spawns multiple goroutines to do work. The mainLoop is just one of them; the persisterLoop and the mergerLoop are others. All of which sounds very much like how Lucene works.
I’m still not sure how this is all tied together. So I’m going to follow just this path for now and see what is going on with these segments. A lot of the work seems to be around managing this structure:
The segment itself is an interface with the following definition:
There are in-memory and mmap versions of this interface, it seems. So far, I'm not following the relation between the storage interface and this segments idea. I think I'm lost here, so I'm going to take a slightly different route. Instead of seeing how Bleve writes stuff, let's focus on how it reads. I'll try to follow the path of a query. This line of inquiry leads me to this guy:
Again, very similar to Lucene. And the TermFieldReader is where we are probably going to get the matches for this particular term (field, value). Let’s dig into that. Indeed, following the code for this method leads to the inverted index, called upside_down in this code. I managed to find how the terms are being read, and it makes perfect sense. Exactly as expected, it does a range query and parses both key and values for the relevant values. Still not seeing why there is the need for segments.
Here is where things start to come together. Bleve uses the key/value interface to store some data that it searches on, but document values are stored in segments and are loaded directly from there on demand. At a glance, it looks like the zap encoding is used to store values in chunks. It looks like I didn’t pay attention before, but the zap format is actually documented and it is very helpful. Basically, all the per document (vs. per term/field) data is located there as well as a few other things.
I think that this is where I’ll stop. The codebase is interesting, but I now know enough to have a feeling of how things work. Some closing thoughts:
The problem with work is that you have to do it each and every time. I’m used to Lucene (read it once from disk and keep a cached version in memory that is very fast) or Voron, in which the data is held in memory and can be accessed with zero work.
I didn't get to any of the core parts of the library (analysis, full-text search). This is because they aren't likely to be that different, and they are full of the storage interaction details that I just went over.
The first step to use ES is to install it in Docker. You can install both manually and through Docker. The easiest way is with Docker following the steps below:
docker run -p 9200:9200 -p 9300:9300 -e "discovery.type=single-node" docker.elastic.co/elasticsearch/elasticsearch:6.2.3
While Google follows 200+ signals to rank search engine results, problems with fake news and problematic content showing up on top results are causing issues with user engagement. Google has been aware of the problem with search quality since November 2016, and as such, they have initiated Project Owl to address the issues through three ways.
First, there will be a feedback form where users can answer questions about featured snippets. Second, there will be a renewed emphasis on sorting and displaying authoritative content in the top results. Third, their policies on suggestions based results could be revised to include a feedback form for receiving search suggestions.
"Problematic searches" is the term Google uses for queries built on perceptions and concepts with no basis in fact. Content that feeds such perceptions — urban myths, rumors, or derogatory material that shapes collective opinion — is the cause for concern.
In the past, Google has dealt with search spam, poor-quality content, and piracy, but none of these come under the category of problematic searches. Rather, they are about generating and propagating fake news that creates a biased perception.
Previously, Google was aware of the problem, but it did not merit any priority until this happened.
As mentioned, Project Owl proposes three solutions. A brief description of each is given below.
Featured snippets is a functional tool that gives the user an informational gist before the search results display. They are visible with Google Assistant and in Google Home to generate a quicker answer to the user’s query. In the last few months, Google noticed that the answers in the featured snippet section were increasingly becoming problematic. Google is combating this issue with a feedback form for the user to indicate if the featured section content is offensive, vulgar, or helpful. An example form is shown below.
The feedback will be used to improve the algorithm. Users with Google Home can use the device to send feedback directly. Each submission will undergo careful consideration and possible implementation.
Autocomplete saves time: it was originally designed to speed up user searches. But when searching for problematic topics, users are bombarded with unsavory beliefs and perceptions, and these autocompleted suggestions can shockingly divert a user from the original search intent. A case study by The Guardian made Google realize the complexity of the problem, prompting the company to find solutions.
Like the featured snippet, there is a form for reporting inappropriate predictions, prompting users to flag problematic autocomplete suggestions such as hateful, racist, sexist, or provocative content. Google also changed its publication policies to spell out the non-legal reasons for removing suggestions, such as pirated content, personal information, and court-ordered removals.
It is too early to tell whether this step will work; however, the Google team assures that each piece of feedback will undergo thoughtful consideration.
There will be an increased focus on identifying authoritative content and giving it higher placement in search results. A few changes have been rolled out since December 2016, and lately the team has begun flagging content that is offensive or upsetting.
It is wishful thinking to expect changes to happen and take effect overnight. The giant search engine covers trillions of pages, and correcting even half of those results will take months or years. Hence, as users, we should come forward and help Google with Project Owl through our feedback and suggestions.
Recently, we launched a new product INVESTimate from HomeUnion. INVESTimate is using machine learning and AI to help determine the investment potential of a residential property. INVESTimate is powered by big data on 110 million homes, institutional quality research, and on-the-ground experts with deep insight on local real estate market conditions.
Behind the scenes, there is a lot of data crunching, with data coming from more than 50 sources, mapping key property data to custom-modeled PRICE AVM, RENT AVM, and Neighborhood Investment Rating (NIR) values for more than 100M residential homes and 30,000+ neighborhoods. All this data is stitched together and indexed in the Apache Solr search engine for display in the front-end search portal.
The main purpose of this article is to discuss why we chose Solr Search Engine vs. MySQL or any relational database for storing, indexing, and retrieving data.
First, let's understand the five key differences between search engines and relational databases. In our case, Apache Solr is our chosen search engine and MySQL is our RDBMS:
Now, let’s look at how and where we can efficiently use RDBMS vs. a search engine by taking a simple use case. Let’s say, an investor is looking for an investment property located in Dallas, with an investment of $150,000 located next to the best schools in a simple drill-down wizard type of experience. This would be a perfect use case for an RDBMS-based solution, as the desired results could be presented to the users as a series of fixed, structured queries on the database. First, the top-level query can select all properties within Dallas; then, it can filter properties that fit within or equal to 150K and sort properties based on school ranking within that neighborhood. The investor can finally pick a property of his choice or his liking.
Let’s take a simple use case in which search engines are very useful. Let’s say an investor is looking for “textual” type of search experience. The investor simply types, “Find an investment property in the range of 150,000 with 8% yield.” As you know, Solr stores data as documents and each document represents multiple fields with values. The documents represent a unit of search and index. The above textual content submitted by users is tokenized and matched with all the documents and based on relevancy, respective results are displayed to users. This allows for better user experience to find what users want in a fast and efficient way.
All of our 110M properties with key attributes are processed and enriched in a big data environment. The processed data gets stored and indexed in the Solr Engine on a regular basis using DIH (Data Import Handler) within 15 minutes.
We use Solr as read-only for better performance; all queries coming from our INVESTimate website hit the Solr engine for most of the data to be served.
We created a Java interface using SolrJ for the front-end to interact with the Solr engine. This completely encapsulates and decouples it from the front-end code and allows the services to scale independently.
Our HomeUnion Asset Recommendation Engine (HARE) is built on top of the Solr engine for recommending properties and searching portfolios. We have used facets and boosting very extensively to rank search results for our investors.
We reduced the load on our MySQL database: 90% of our searches are served by the Solr engine hosted within AWS, thereby reducing cost and improving query performance by more than 80%.
Hopefully, this article was useful to learn some practical use cases of using search engines.
Today, we're going to dive quite a bit deeper and make something useful with Node.js, RediSearch, and the client library we started in Part 2. We have a dataset that we're not going to query yet, so we'll need to build an ingestion script.
To follow along, it would be best to get the chapter-3 branch of the GitHub repo.
module.exports = {
  movies : function(search) {
    return [
      search.fieldDefinition.numeric('budget',true),
      //homepage is just stored, not indexed
      search.fieldDefinition.text('original_language',true,{ noStem : true }),
      search.fieldDefinition.text('original_title',true,{ weight : 4.0 }),
      search.fieldDefinition.text('overview',false),
      search.fieldDefinition.numeric('popularity'),
      search.fieldDefinition.numeric('release_date',true),
      search.fieldDefinition.numeric('revenue',true),
      search.fieldDefinition.numeric('runtime',true),
      search.fieldDefinition.text('status',true,{ noStem : true }),
      search.fieldDefinition.text('title',true,{ weight : 5.0 }),
      search.fieldDefinition.numeric('vote_average',true),
      search.fieldDefinition.numeric('vote_count',true)
    ];
  },
  castCrew : function(search) {
    return [
      search.fieldDefinition.numeric('movie_id',false),
      search.fieldDefinition.text('title',true, { noStem : true }),
      search.fieldDefinition.numeric('cast',true),
      search.fieldDefinition.numeric('crew',true),
      search.fieldDefinition.text('name', true, { noStem : true }),
      //cast only
      search.fieldDefinition.text('character', true, { noStem : true }),
      search.fieldDefinition.numeric('cast_id',false),
      //crew only
      search.fieldDefinition.text('department',true),
      search.fieldDefinition.text('job',true)
    ];
  }
};
From the async documentation
The other feature of queue that we'll be using is the drain function. This function executes when there are no items left in the queue. The query is straightforward except for the search string — why do you have to repeat 26748 twice? In this case, it's because the movie_id field in the database is numeric, and numerics can only be limited by a range.
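Since numeric fields can only be limited by a range, an exact match repeats the same value on both ends. A tiny helper (hypothetical — not part of the client library built in this series) makes the repeated-value range explicit:

```javascript
// Build a RediSearch numeric "exact match" clause. Numeric fields can
// only be filtered by a range ([min max]), so equality is just a range
// whose min and max are the same value. This helper is an illustrative
// assumption, not part of the series' actual client library.
function numericEquals(field, value) {
  return `@${field}:[${value} ${value}]`;
}

console.log(numericEquals('movie_id', 26748)); // @movie_id:[26748 26748]
```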
Have you heard about the popular open-source tool for searching and indexing that is used by giants like Wikipedia and LinkedIn? No? I'm pretty sure you've heard of it in passing.
I’m talking about Elasticsearch. In this blog, you’ll get to know the basics of Elasticsearch, its advantages, how to install it, and how to index documents using Elasticsearch.
Elasticsearch is an open-source, enterprise-grade search engine that can power extremely fast searches and support all data discovery applications. With Elasticsearch, we can store, search, and analyze big volumes of data quickly and in near real-time. It is generally used as the underlying search engine that powers applications that have simple/complex search features and requirements.
Built on top of Lucene: Being built on top of Lucene, it offers the most powerful full-text search capabilities.
Document-oriented: It stores complex entities as structured JSON documents and indexes all fields by default, providing higher performance.
Schema-free: It stores a large quantity of semi-structured (JSON) data in a distributed fashion. It also attempts to detect the data structure and index the present data, making it search-friendly.
Full-text search: Elasticsearch performs linguistic searches against documents and returns the documents that match the search condition. Result relevancy for the given query is calculated using the TF/IDF algorithm.
Restful API: Elasticsearch supports REST APIs, a lightweight way to interact with it. We can query Elasticsearch using the REST API with the Chrome plug-in Sense. Sense provides a simple user interface and features like autocompletion of Elasticsearch query syntax and copying the query as a cURL command.
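The TF/IDF scoring mentioned above can be sketched in a few lines. Note this is a toy calculation to show the idea; Elasticsearch's real scoring (BM25 in current versions) adds normalization and smoothing, so treat it purely as a conceptual sketch:

```javascript
// Toy TF-IDF: term frequency within a document, weighted by how rare
// the term is across the corpus. Documents here are arrays of words.
function tfIdf(term, doc, corpus) {
  const tf = doc.filter(w => w === term).length / doc.length;       // term frequency
  const docsWithTerm = corpus.filter(d => d.includes(term)).length;
  const idf = Math.log(corpus.length / (1 + docsWithTerm));         // rarity of the term
  return tf * idf;
}

const corpus = [
  ['redis', 'is', 'fast'],
  ['search', 'is', 'useful'],
  ['lucene', 'powers', 'elasticsearch'],
  ['java', 'runs', 'lucene'],
];

// 'redis' appears in only one of four docs, so it carries a positive score
// in the doc that contains it; a term absent from the doc scores zero.
console.log(tfIdf('redis', corpus[0], corpus) > 0); // true
```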
Cluster: A collection of nodes that share data.
Node: A single server that is part of the cluster, stores the data, and participates in the cluster’s indexing and search capabilities.
Index: A collection of documents with similar characteristics. An index is more equivalent to a schema in RDBMS.
Type: There can be multiple types within an index. For example, an e-commerce application can have used products in one type and new products in another type of the same index. One index can have multiple types as multiple tables in one database.
Document: A basic unit of information that can be indexed. It is like a row in a table.
Shards and replicas: Elasticsearch indexes are divided into multiple pieces called shards, which allows the index to scale horizontally. Elasticsearch also allows us to make copies of index shards, which are called replicas.
E-commerce websites use Elasticsearch to index their entire product catalog and inventory with all the product attributes with which the end user can search against.
Whenever a user searches for a product on the website, the corresponding query will hit an index with millions of products and retrieve the product in near real-time.
Start the Elasticsearch server:
bin/elasticsearch
You can access it at http://localhost:9200 in your web browser. Here, localhost denotes the host (server) and 9200 is Elasticsearch's default port.
To confirm everything is working fine, type http://localhost:9200 into your browser; you should get back a JSON document describing the node.
Elasticsearch uses Lucene indexes under the hood to store and retrieve data.
We can query Elasticsearch using the methods mentioned below:
cURL command
Using an HTTP client
Querying with the JSON DSL
Elasticsearch provides a REST API that we can interact with in a variety of ways through common HTTP methods like GET, PUT, and DELETE — which map onto the familiar CRUD operations.
Now, let’s try indexing some data in our Elasticsearch instance.
curl -XPUT http://localhost:9200/patient/outpatient/1 -d '{ "name" : "John", "City" : "California" }'

This command will insert the JSON document into an index named patient with the type named outpatient. 1 is the ID here.

Systems working with big data may encounter I/O bottlenecks due to data analysis and search operations. For systems like these, Elasticsearch would be the ideal choice.
In our last installment, we started looking at RediSearch, the Redis search engine built as a module. We explored the curious nature of the keys and indexed a single document. In this segment, we’ll lay the groundwork necessary to make working with RediSearch more productive and useful in Node.js.
Now, we could certainly bring in all this data using the RediSearch commands directly or with the bindings, but with a large amount of data, using direct syntax becomes difficult to manage. Let’s take some time to develop a small Node.js module that will make our lives easier.
I’m a big fan of the so-called “fluent” JavaScript syntax, wherein you chain methods together so that functions are separated by dots when operating over a single object. If you’ve used jQuery, then you’ve seen this style.
$('.some-class')
  .css('color','red')
  .addClass('another-class')
  .on('click',function() { ... });
This approach will present some challenges. Firstly, we need to make sure that we can interoperate with "normal" Redis commands and still be able to use pipelining/batching (we'll address the use of MULTI in a later installment). Also, RediSearch commands have a highly variadic syntax (for example, commands can have a small or large number of arguments). Translating this directly into JavaScript wouldn't gain us much over the simple bindings. We can, however, leverage a handful of arguments and then supply optional arguments in the guise of function-level options objects. What I'm aiming to design looks a little like this:
const myRediSearch = rediSearch(redisClient,'index-key');
myRediSearch.createIndex([ ...fields... ],cbFn);
myRediSearch
  .add(itemUniqueId,itemDataAsObject,cbFn)
  .add(anotherItemUniqueId,anotherItemDataAsObject,addOptions, cbFn);
Overall, this is a much more idiomatic way of doing things in JavaScript and that’s important when trying to get a team up to speed, or even just to improve the development experience.
Another goal of this module is to make the results more usable. In Redis, results are returned in what is known as a “nested multi-bulk” reply. Unfortunately, this can get quite complex with RediSearch. Let’s take a look at some results returned from redis-cli:
1) (integer) 564
2) "52fe47729251416c75099985"
3)  1) "movie_id"
    2) "18292"
    3) "title"
    4) "George Washington"
    5) "department"
    6) "Editing"
    7) "job"
    8) "Editor"
    9) "name"
   10) "Zene Baker"
   11) "crew"
   12) "1"
4) "52fe48cbc3a36847f8179cc7"
5)  1) "movie_id"
    2) "55420"
    3) "title"
    4) "Another Earth"
    5) "character"
    6) "Kim Williams"
    7) "cast_id"
    8) "149550"
    9) "name"
   10) "Jordan Baker"
   11) "cast"
   12) "1"
So, when using node_redis, you would get nested arrays at two levels — but positions are associative (except for the first one, which is the number of results). Without an abstraction, it'll be a mess to use. We can abstract the results into more meaningful nested objects, with an array representing the actual results — an object carrying the total count plus the result documents, rather than raw nested arrays.
This is a bit confusing due to the terminology and duplication, but each layer has its own job.
node_redis-redisearch just provides the commands to node_redis, without any parsing or abstraction. node_redis just opens up the world of Redis to JavaScript. Got it? Good.
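To make the nested multi-bulk reply shown above concrete, here is a minimal sketch of how such a reply parser might flatten the raw array into objects. The real library built in this series may differ in details; this only illustrates the shape of the transformation:

```javascript
// Parse a raw RediSearch FT.SEARCH reply of the form
// [total, docId, [field, value, field, value, ...], docId, [...], ...]
// into { total, results: [{ docId, doc }] }.
function parseSearchReply(raw) {
  const results = [];
  for (let i = 1; i < raw.length; i += 2) {
    const fields = raw[i + 1];
    const doc = {};
    for (let j = 0; j < fields.length; j += 2) {
      doc[fields[j]] = fields[j + 1];   // associative pairs -> object properties
    }
    results.push({ docId: raw[i], doc });
  }
  return { total: raw[0], results };
}

const reply = [564, '52fe47729251416c75099985',
  ['movie_id', '18292', 'title', 'George Washington']];
console.log(parseSearchReply(reply).results[0].doc.title); // George Washington
```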
Since RediSearch isn’t a default part of Redis, we need to check that it is installed. We’re going to make the assumption that RediSearch is installed on the underlying Redis server. If it isn’t installed, then you’ll simply get a Redis error similar to this:
ERR unknown command 'ft.search'
Not having the bindings is a more subtle error (complaining about an undefined function), so we'll build in a simple check for the ft_create command on the instance of the Redis client.
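A minimal sketch of that check might look like this (the function name and error message are illustrative, not the library's actual API):

```javascript
// Fail fast with a clear error if the RediSearch bindings were never
// applied to the client -- i.e. the client has no ft_create function.
function assertRediSearchBindings(client) {
  if (typeof client.ft_create !== 'function') {
    throw new Error('RediSearch bindings missing: call rediSearchBindings(redis) first');
  }
}

// With a stub object standing in for a real node_redis client:
try {
  assertRediSearchBindings({});
} catch (e) {
  console.log(e.message); // a clear error instead of "undefined is not a function"
}
```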
To be able to manage multiple different indexes and potentially different clients in a way that isn't syntactically ugly and inefficient, we'll use a factory pattern to pass in both the client and the index key. You won't need to pass these again. The last two arguments are optional: an options object and/or a callback.
It looks like this:
...
rediSearchBindings(redis);
let mySearch = rediSearch(client,'my-index');
//with optional options object
let mySearch = rediSearch(client,'my-index', { ... });
//with optional options object and callback.
let mySearch = rediSearch(client,'my-index', { ... }, function() { ... });
...
The callback here doesn't actually provide an error in its arguments; it is just issued when the node_redis client is ready. It is entirely optional and provided primarily for benchmarking, so you don't start counting down the time until the connection is fully established.
Another useful feature of this function is that the first argument can optionally be the node_redis module. We'll also automatically add in the RediSearch bindings in this case. You can designate this library to manage the creation of your client and specify other connection preferences in the options object located at clientOptions. Many scripts have specialized connection management routines, so it is completely optional to pass either a client or the node_redis module.

We'll be using similar signatures for most functions, and the final two arguments are optional: an options object and a callback. Consistency is good.
Creating an index in RediSearch is a one-time affair. You set up your schema prior to indexing data and then you can’t alter the schema without re-indexing the data.
As previously discussed, there are three basic types of indexes in RediSearch:
Numeric
Text
Geo
(Note: There is a fourth type of index, the tag index, but we’ll cover that in a later installment.)
Each field can have a number of options — this can be a lot to manage! So, let's abstract this by returning a fieldDefinition object that has three functions: numeric, text, and geo. Seems familiar, eh?
All three methods take two required arguments — the field name and whether the field is sortable — and text fields accept an optional options object with two possible properties: noStem (do not stem words) and weight (sorting weight).
These methods return arrays of strings that can be used to build a RediSearch index. Let’s take a look at a few examples:
mySearch.fieldDefinition.text('companyName',true,{ noStem : true });
// -> [ 'companyName', 'TEXT', 'NOSTEM', 'SORTABLE' ]
mySearch.fieldDefinition.numeric('revenue',false);
// -> [ 'revenue', 'NUMERIC' ]
mySearch.fieldDefinition.geo('location',true);
// -> [ 'location', 'GEO', 'SORTABLE' ]
So, what do we do with these little functions? Of course, we use them to specify a schema.
mySearch.createIndex(
  [
    mySearch.fieldDefinition.text('companyName', true, { noStem : true }),
    mySearch.fieldDefinition.numeric('revenue', false),
    mySearch.fieldDefinition.geo('location', true)
  ],
  function(err) { /* ... do stuff after the creation of the index ... */ }
);
This makes a clear and expressive statement on the fields in the schema. One note here: While we use an array to contain the fields, RediSearch has no concept of order in fields, so it doesn’t really matter in which order you specify fields in the array.
Adding the item to a RediSearch index is pretty simple. To add an item, we supply two required arguments and consider two optional arguments. The required arguments are (in order):
A unique ID
The data as an object
The two optional arguments follow our common signature:
options and a callback. As per common Node.js patterns, the first argument of the callback is an error object (unset if no errors) and the second argument of the callback is the actual data.
myRediSearch.add(
  'kyle',
  { dbofchoice : 'redis', languageofchoice : 'javascript' },
  { score : 5 },
  function(err) {
    if (err) { throw err; }
    console.log('added!');
  }
);
Batch, or “pipeline” as it’s called in the non-Node.js Redis world, is a useful structure in Redis: it allows multiple commands to be sent at once without waiting for a reply to each command.
The batch function works pretty similarly to any batch you’d find in
node_redis — you can chain them together with an
exec() at the end. This does cause a conflict, though. Since “normal”
node_redis allows you to batch together commands, you need to distinguish between RediSearch and non-RediSearch commands. First, you need to start a RediSearch batch using one of two methods:
Start a new batch:
let searchBatch = mySearch.batch() // a new, RediSearch enhanced batch
Or, with an existing batch:
let myBatch = client.batch();
let searchBatch = mySearch.batch(myBatch); // a batch command, perhaps already in progress
After you have created the batch, you can add normal
node_redis commands to it or you can use RediSearch commands.
searchBatch
  .rediSearch.add(...)
  .hgetall(...)
  .rediSearch.add(...)
Take note of the
HGETALL stuck in the middle of this chain; this is to illustrate that you can intermix abstracted RediSearch commands with ‘normal’ Redis commands. Cool, right?
As mentioned earlier, the output of RediSearch (and many Redis commands) is likely in a form that you wouldn’t use directly.
FT.GET and
FT.SEARCH produce interleaved field/value results that get represented as an array, for example. The idiomatic way of dealing with data like this in JavaScript is through plain objects. So, we need to do some simple parsing of the interleaved data. There are many ways to accomplish this, but the simplest is to use a lodash chain to first chunk the array into two-element [field, value] pairs and then convert those pairs into an object.
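As a sketch of that parsing step, here is a dependency-free version in plain JavaScript (the library itself uses lodash; `deinterleave` is a hypothetical helper name, not one from the module):

```javascript
// Turn RediSearch's interleaved reply, e.g.
// ['play', 'macbeth', 'speech', '15'], into a plain object.
function deinterleave(reply) {
  const doc = {};
  for (let i = 0; i < reply.length; i += 2) {
    doc[reply[i]] = reply[i + 1];
  }
  return doc;
}

console.log(deinterleave(['play', 'macbeth', 'speech', '15']));
// -> { play: 'macbeth', speech: '15' }
```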
If we didn’t need to contend with pipelines, adding these parsing functions would be a somewhat simple process of monkey patching the client. But with batches in
node_redis, the results are provided both in a function-level callback and at the end of the batch, with many scripts omitting function-level callbacks and just dealing with all the results at the end. Given this, we need to make sure that the commands are only parsing these values when needed, but always at the end.
Additionally, this opens up a can of worms when writing our abstraction. Normal client objects and pipeline objects both need RediSearch-specific commands injected. To prevent writing two near-identical functions, we need one function that can be dynamically injected. To accomplish this, the factory pattern is employed: the outer function is passed a client or pipeline object (let’s call it
cObj) and then it returns a function with the normal arguments.
cObj can represent either a pipeline or just a
node_redis client.
Thankfully,
node_redis is consistent in how it handles pipelined and non-pipelined commands, so the only thing that changes is the object being chained. There are only two exceptions:
These two exceptions only need to be applied when pipelined, thus we need to be able to detect pipelining. To do this, we have to look at the name of the constructor. It’s been abstracted into the function
chainer.
In the RediSearch module, search is executed with the
FT.SEARCH command, which has a ton of options. We’ll abstract this into our
search method. At this point we’re going to provide only the bare minimum of searching abilities: we’ll pass in a search string (where you can use RediSearch’s extensive query language), then an optional options argument and, finally, a callback. Technically the callback is optional, but it would be silly not to include it.
In our initial implementation, we’ll just make a couple of options available:
offset: Where to begin the result set
numberOfResults: The number of results to be returned
These options map directly to the RediSearch LIMIT argument (very similar to the LIMIT argument found throughout SQL implementations).
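A sketch of that option-to-argument mapping (`limitArgs` is a hypothetical helper, not a function from the module; the default of 10 mirrors FT.SEARCH's own default of LIMIT 0 10):

```javascript
// Map the abstraction's options to RediSearch's LIMIT argument pair:
// LIMIT <offset> <num>, much like SQL-style LIMIT/OFFSET.
function limitArgs(options) {
  options = options || {};
  const offset = options.offset || 0;
  // assumed default page size; FT.SEARCH itself defaults to 10
  const num = (options.numberOfResults !== undefined) ? options.numberOfResults : 10;
  return ['LIMIT', String(offset), String(num)];
}

console.log(limitArgs({ offset: 5, numberOfResults: 20 }));
// -> [ 'LIMIT', '5', '20' ]
```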
The search also implements a result parser to make the results a little easier to work with.
The property
results is an ordered array of the results (with the most relevant results at the top). Notice that each result has both the ID of the document (
docId) and the fields in the document (
doc).
totalResults is the number of items in the index that match the query (irrespective of any limiting).
requestedResultSize is the maximum number of results to be returned.
resultSize is the number of results returned.
In the previous section, you may have noticed the
docId property. RediSearch stores each document by a unique ID that you need to specify at the time of indexing. Documents can be retrieved by searching or by directly fetching the
docId using the RediSearch command
FT.GET. In our abstraction, we’ll call this method
getDoc (
get has a specific meaning in JavaScript, so it should be avoided as a method name).
getDoc, like most other commands in our module, has a familiar argument signature:
docId is the first and only required argument. You pass in the ID of the previously indexed item.
options is the second argument and is optional. We aren’t actually using it yet, but we’ll keep it here for future expansion.
cb is the third argument and is technically optional; this is where you provide your callback function to get your results.
Like the
search method,
getDoc does some parsing to turn the document from an interleaved array into a plain JavaScript object.
One more important thing to cover before we have a minimal set of functionalities: the
dropIndex, which is just a simple wrapper for the command
FT.DROP, is a little different as all it takes is a callback for when the index is dropped.
Neither
dropIndex nor
createIndex allows for chaining, as the nature of these commands prevents them from having further chained functions.
In this piece, we’ve discussed the creation of a limited abstraction library for RediSearch in Node.js, as well as its syntax. Reaching back to our previous piece, let’s look at the same small example to see the complete index lifecycle.
/* a sketch of the lifecycle: create the index, add a document,
   search it and then drop the index */
mySearch.createIndex(
  [ mySearch.fieldDefinition.text('languageofchoice', true) ],
  function(err) {
    if (err) { throw err; }
    mySearch.add('kyle', { languageofchoice : 'javascript' }, function(err) {
      if (err) { throw err; }
      mySearch.search('javascript', function(err, results) {
        if (err) { throw err; }
        console.log(results);
        mySearch.dropIndex(function(err) {
          if (err) { throw err; }
        });
      });
    });
  }
);
As you can see, this example covers all the bases, though it probably isn’t very useful in a real-world scenario. In our next installment, we’ll dig into the TMDB dataset and start playing with real data and further expanding our client library for RediSearch.
In the meantime, I suggest you take a look at the GitHub repo to see how it’s all structured.
Let’s take a look at the basic concepts of Elasticsearch: clusters, near real-time search, indexes, nodes, shards, mapping types, and more.
A cluster is a collection of one or more servers that together hold the entire dataset and provide federated indexing and search capabilities across all servers. In relational-database terms, a node is comparable to a database instance. There can be N nodes with the same cluster name.
Elasticsearch is a near-real-time search platform: there is a slight delay from the time you index a document until the time it becomes searchable.
A shard is a subset of the documents of an index. An index can be divided into many shards.
I’ve been working with the RediSearch module quite a bit lately. It’s one of the more fascinating developments in the Redis ecosystem and it deserves its own write-up: search has long been a gap in out-of-the-box Redis, and the RediSearch module fills in these blanks with few trade-offs. In this first installment, we’re going to be exploring the very basics of the module as a gentle introduction.
Modules are add-ons for your Redis server. At their most basic level, they implement new commands, but they can also implement new data types. Modules are written in systems programming languages; C/C++, Rust, and Golang have been used, but other languages are also possible. Since they’re written in compiled languages, extremely high performance is possible.
Modules are distinct from Redis scripting (Lua) in that they are first-class commands in the system and can interface with storage directly, enabling the creation of their own datatypes. The only thing that sets them apart from inbuilt commands is that module commands are namespaced by a prefix, often two letters, and a dot (FT. in the case of RediSearch).
I’ve asked myself what RediSearch isn’t, but I’ll attempt to answer without inverting. RediSearch is a module that provides three main features: full-text indexing, numeric indexing, and geo indexing.
RediSearch utilizes both its own datatypes and the inbuilt Redis datatypes (hashes and sorted sets, as we’ll see below).
As a hands-on exercise, let’s install the module:
$ git clone
$ cd RediSearch/src
$ make all
$ redis-cli
> MODULE LOAD ./redisearch.so
(Or install it in your
redis.conf file and restart
redis-server.)
After your module is loaded, go ahead and run this command in
redis-cli to verify that the module is running:
> module list 1) 1) "name" 2) "ft" 3) "ver" 4) (integer) 2000
In the results, you can see the module’s name (ft) and its version. Now, let’s create an index:
> FT.CREATE shakespeare SCHEMA line TEXT SORTABLE play TEXT NOSTEM speech NUMERIC SORTABLE speaker TEXT NOSTEM entry TEXT location GEO
This might look a tad complicated, especially if you’re used to commands with one or two arguments, so let’s break it down:
FT.CREATE shakespeare: create an index stored at the key
shakespeare
SCHEMA: the arguments that follow define the index’s fields
line TEXT SORTABLE: the
line field holds sortable text
play TEXT NOSTEM: the
play field holds text that is not stemmed
speech NUMERIC SORTABLE: the
speech field holds a sortable number
speaker TEXT NOSTEM: the
speaker field holds text that is not stemmed
entry TEXT: the
entry field holds text, stemmed so that searches find close matches
location GEO: The
location field holds a geographic coordinate.
See? It’s just a lot in one line, but not really complicated.
Now, let’s add a document to our index:
> FT.ADD shakespeare 57956 1 FIELDS text_entry "Out, damned spot! out, I say!--One: two: why," line "5.1.31" play macbeth speech 15 speaker "LADY MACBETH" location -3.9264,57.5243
Comparing the two commands, you might notice that the
FT.CREATE and
FT.ADD commands share a similar shape: the command, the key of the index (shakespeare), and then a series of field arguments (FIELDS in FT.ADD playing the role that SCHEMA plays in FT.CREATE). FT.ADD additionally specifies the document ID (57956) and a score (1).
Recall that we created an index with the key
shakespeare (via the
FT.CREATE command). Let’s do a quick experiment:
> TYPE shakespeare none
Strange, right? This is where we start departing from normal Redis behavior and you’ll start seeing where RediSearch is a solution that is both using and integrated with Redis.
If you’re running this on a non-production database, let’s do
KEYS * for debugging purposes:
> KEYS * 1) "ft:shakespeare/1" 2) "ft:shakespeare/31" 3) "idx:shakespeare" 4) "ft:shakespeare/5" 5) "ft:shakespeare/macbeth" 6) "ft:shakespeare/lady" 7) "nm:shakespeare/speech" 8) "geo:shakespeare/location" 9) "57956"
Running two commands has yielded nine keys. I want to highlight a few of these keys just to fill out the understanding of what is actually going on here:
> TYPE idx:shakespeare ft_index0
Here, we can see that RediSearch has created a key with its own datatype (
ft_index0). We can’t really do much with this key directly, but it’s important to know that it exists and how it was created.
Now, let’s look at key
57956:
> TYPE 57956 hash
A hash! We can work with this — let’s look at this key directly:
> HGETALL 57956 1) "text_entry" 2) "Out, damned spot! out, I say!--One: two: why," 3) "line" 4) "5.1.31" 5) "play" 6) "macbeth" 7) "speech" 8) "15" 9) "speaker" 10) "LADY MACBETH" 11) "location" 12) "-3.9264,57.5243"
This should look familiar as it’s your data from the
FT.ADD command and the key is just your document ID. While it’s important to know how this is being stored, don’t manipulate this key directly with
HASH commands.
> TYPE nm:shakespeare/speech numericdx
Interesting — the field
speech in our dataset is a numeric index and the type is a
numericdx. Again, since this is a RediSearch native datatype, we can’t manipulate this with any “normal” Redis commands.
> TYPE geo:shakespeare/location
zset
It’s a plain sorted set, so the standard GEO commands work on it:
> GEOHASH geo:shakespeare/location 1 1) "gfjpnxuzk40" > GEOPOS geo:shakespeare/location 1 1) 1) "-3.92640262842178345" 2) "57.52429905544970268"
Brilliant! RediSearch has stored the coordinates in a bog-standard GEO set. But, like the hash above, don’t modify these values directly with
ZSET or
GEO commands.
Finally, let’s take a look at one more key:
> TYPE ft:shakespeare/lady
ft_invidx
This is RediSearch’s inverted index datatype, the structure that powers the full-text search itself. Part 2 of Mastering RediSearch coming soon.
In this section of the TensorFlow tutorial series we will first develop a TensorFlow 2.0 Hello World program. This is the first program in TensorFlow, and it will give you an idea of how to run a program in TensorFlow.
In this tutorial we will learn to write a TensorFlow 2.0 Hello World, which is the first program you should learn to develop in TensorFlow 2.0. There are a lot of changes in TensorFlow 2.0 as compared to the previous TensorFlow 1.x releases. This version of TensorFlow emphasizes ease of development and requires less code. TensorFlow 2.0 was developed to remove the issues and complexity of previous versions.
The removal of tf.Session() is one of the important changes, and if you are moving your code from TensorFlow 1.x then new methods need to be used. In TensorFlow 2.0, eager execution is enabled by default. Eager execution evaluates the program immediately, without building a graph: operations return concrete values instead of constructing a computational graph.
Here is sample code of TensorFlow 1.x Hello World application:
import tensorflow as tf

msg = tf.constant('Say Hello to TensorFlow!')
sess = tf.Session()
print(sess.run(msg))
Here we have to use tf.Session() to run the computational graph. Now tf.Session() is removed from TensorFlow 2.0 and we can't use it. If you run this program in TensorFlow 2.0 it will throw the following error:
>>> sess = tf.Session()
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
AttributeError: module 'tensorflow' has no attribute 'Session'
>>> print(sess.run(msg))
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
NameError: name 'sess' is not defined
Here is screen shot:
To write the program for TensorFlow 2.0 we have to remove tf.Session() and then use tf.print() to print the value of the constant. Since TensorFlow 2.0 executes in eager mode, no computational graph is created and the code runs line by line.
Here is example of TensorFlow 2.0 Hello World program:
import tensorflow as tf

msg = tf.constant('TensorFlow 2.0 Hello World')
tf.print(msg)
Here is the output of program execution:
(tf12alpha) [email protected]:~$ python
Python 3.7.3 (default, Mar 27 2019, 22:11:17)
[GCC 7.3.0] :: Anaconda, Inc. on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> import tensorflow as tf
>>> msg = tf.constant('TensorFlow 2.0 Hello World')
2019-06-05 18:13:09.850002: I tensorflow/core/platform/profile_utils/cpu_utils.cc:94] CPU Frequency: 3392310000 Hz
2019-06-05 18:13:09.850258: I tensorflow/compiler/xla/service/service.cc:162] XLA service 0x557cc1455bc0 executing computations on platform Host. Devices:
2019-06-05 18:13:09.850296: I tensorflow/compiler/xla/service/service.cc:169] StreamExecutor device (0):
>>> tf.print(msg)
TensorFlow 2.0 Hello World
Screen shot of TensorFlow 2.0 Hello World program execution:
The above example shows the execution of the TensorFlow 2.0 Hello World program.
In TensorFlow 2.0 the Session API has been removed and eager execution is on by default. TensorFlow 2.0 simplifies the coding of machine learning programs.
Check more tutorials at:
31 May 2010 11:26 [Source: ICIS news]
SHANGHAI (ICIS news)--China will cut gasoline and diesel prices for the first time this year, effective 1 June, in line with recent falls in international crude values, the National Development and Reform Commission (NDRC) said on Monday.
Gasoline prices would be yuan (CNY) 230/tonne ($33.7/tonne) lower at CNY7,190/tonne, while diesel prices would go down by CNY220/tonne to CNY6,460/tonne, according to China's economic planning body.
Under the scheme, the Chinese government may adjust fuel prices when crude oil costs change more than 4% over 22 working days.
In a separate announcement,
Crude prices fell to as low as $68/bbl this month before recovering to around mid-$70/bbl levels.
Android Dev peep on iOS Unit Test
As an Android Developer, if you are curious about iOS unit test and it’s setup, this would be for you. In this blog, I will show you how to import the
Quick and
Nimble test frameworks (a popular behavior-driven testing stack for iOS) as well as the Xcode-provided
XCTestCase.
Pre-requisite
I won’t touch on opening Xcode and setting up a basic project. You can refer to my previous blog for that.
First view of iOS tutorial in Swift (an Android Developer view)
It’s 2018, why not try something new for myself? So I get myself following official iOS Swift tutorial. Faced some…
medium.com
But if you just want to know the terms without actual hands-on work, just read on; don’t worry about starting it in Xcode.
Cocoapods — the Gradle of iOS
In Android Studio, we are blessed to have Gradle embedded, so importing external libraries is made easy.
However, in Xcode life is harder: Cocoapods doesn’t come with it.
One has to go to the terminal and install Cocoapods in order to import libraries (which are called
Pods in iOS).
To get Cocoapods installed, type the following command in the terminal (obviously on a Mac machine):
sudo gem install cocoapods
Import Pods (Library) to your project
To import a library into your project, you’ll first need to:
- Run
pod init in the project folder to generate all the pod files.
- Add the needed pods (in our case
Quick and
Nimble pods) to
Podfile
target 'yourprojectTest' do
inherit! :search_paths
# Pods for testing
pod "Quick"
pod "Nimble"
end
Note: by default Xcode provides us a test framework named
XCTestCase, but I’ll also be showing you another way of testing (behavior-driven testing) that is commonly used by iOS developers.
- Run
pod install to download the
Quick and
Nimble pods to your project
- Open the
yourproject.xcworkspace, you’ll see the
Quick and
Nimble have been added to the project
XCTestCase — JUnit like test in iOS
When you create a project in iOS, this is given to you by default.
The test class given is as below.
import XCTest
@testable import yourproject

class yourprojectTests: XCTestCase {
override func setUp() {
super.setUp()
}
override func tearDown() {
super.tearDown()
}
func testExample() {
// This is an example of a functional test case.
// Use XCTAssert and related functions to verify your tests produce the correct results.
}

func testPerformanceExample() {
// This is an example of a performance test case.
self.measure {
// Put the code you want to measure the time of here.
}
}
}
This looks very similar to JUnit, as it has
setup() and
teardown().
For each test, you just need to prefix the function name with the
test keyword for it to be a test function. If it is not prefixed with
test, it is a normal function. No annotation is needed.
There’s one very interesting item there, which is
self.measure: any code placed inside this block will have its execution performance measured.
In Kotlin, you could use the following function to measure time
measureNanoTime { /**Code to measure time**/ }
To execute the test, just like Android Studio, click on the green check box on the left (on top of the line number), or use Command+u
P/S: I tried to run an individual test function, but it always executes the entire class instead.
What is Quick & Nimble?
It’s a Swift and Objective-C testing framework that is popular in the iOS community, partly due to its more powerful matchers (from
Nimble) and its support for behavior-driven testing (from
Quick).
Quick
What’s Behavior Driven Test
There are a lot of sites out there that give a good description of it, but I’m going to simplify it by stating that it basically breaks your test into 3 categories, i.e.
- Given (what’s the condition)
- When (what’s the action)
- Then (what’s the expected result)
In Android JUnit Test, you could do it as below…
@Test
fun `given some condition when trigger action then expect result`() {
// Given
whenever(testMock()).thenReturn(fixedMockedValue)
// When
testClass.testTrigger()
// Then
verify(testObject).testExpectCall()
}
The iOS Way
With
Quick and
Nimble, iOS has a framework that allows one to describe the test better, using the
describe,
context and
it functions provided by
Quick, as can be seen below.
import Foundation
import Quick
import Nimble
@testable import yourproject

class myTestSpec: QuickSpec {
override func spec() {
    beforeEach { /** Code executed for every describe **/ }

    describe("Given some condition") {
        beforeEach { /** Code executed for every context **/ }

        context("When some action") {
            beforeEach { /** Code executed for every it **/ }

            it("Then some expected result") {
                expect(true).to(beTrue())
            }
        }
    }
}
}
Several notes:
- You’ll need to have the
@testable import yourProject to enable access to your project classes for testing.
- You’ll need to inherit from the
QuickSpec class so that we can write the
spec function detail.
- There could be more than one
describe,
context and
it, to allow multiple test scenarios, test actions or test results to be triggered, yet each is evaluated individually as a unit.
- The
beforeEach is to encapsulate the code that is needed in its designated scope (i.e.
describe,
context or
it). This is similar to
setup but more powerful, as it can apply at different levels of scope.
There are several good tutorials out there on it. Below is one.
With this, I hope this provides some insight into how iOS performs unit testing. It’s always interesting to know how the other part of the world functions.
I hope this post is helpful to you. You could check out my other interesting topics here.
Follow me on medium, Twitter or Facebook for little tips and learning on Android, Kotlin etc related topics. ~Elye~
Modernize the List Edit form widget for Django 1.11 and CSS Components.
Review Request #10823 — Created Jan. 13, 2020 and submitted — Latest diff uploaded
This redoes much of the List Edit form widget, used for dynamically
adding/removing lists of text fields, to be compatible with Django 1.11
(by removing references to old, removed Django static media files). This
allows the Djblets test suite to fully pass on Django 1.11.
I've also moved the widget to use CSS Components, which will help us to
maintain consistent styling and reduce namespace clashes.
While updating the JavaScript for this widget, some unused code was
found and removed, and an HTML formatting bug was also fixed.
Python and JavaScript unit tests pass.
On Fri, 1 Aug 1997, Dean Gaudet wrote:
>.
Hmmm....I'm not sure how libdb functions would override dbm functionality
in libc - but that was my point exactly. Why would the AuthDBMUserFile
command cause an open to a db file? Is it possible that there is some
sort of namespace collision within the mod_auth code that inserts the
wrong callback? It doesn't look like there is any problem with libc and
libdb - all of the dbm routines have names like "dbm_*", whereas the db
routines start with "db".
I have a pretty generic configuration (Solaris 2.5.1, db 1.85), but
perhaps nobody else tries to do this. In any case, I have been unable to
get these working together since 1.1.1, when I first built httpd, and the problem can be replicated at will.
Pointers on how to ferret this out would be appreciated.
David.
I have a string encoded in Base64:
eJx9xEERACAIBMBKJyKDcTzR_hEsgOxjAcBQFVVNvi3qEsrRnWXwbhHOmzWnctPHPVkPu-4vBQ==
How can I decode it in Scala language?
I tried to use:
val bytes1 = new sun.misc.BASE64Decoder().decodeBuffer(compressed_code_string)
But when I compare the byte array with the correct one that I generated in Python language, there is an error. Here is the command I used in python:
import base64
base64.urlsafe_b64decode(compressed_code_string)
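A likely explanation: the string uses the URL-safe Base64 alphabet (`-` and `_` instead of `+` and `/`), which the legacy `sun.misc.BASE64Decoder` silently skips rather than decodes. A minimal Python sketch showing that translating the URL-safe characters to the standard alphabet first yields exactly what `urlsafe_b64decode` produces (in Scala/Java, `java.util.Base64.getUrlDecoder().decode(s)` on Java 8+ handles this alphabet directly):

```python
import base64

s = 'eJx9xEERACAIBMBKJyKDcTzR_hEsgOxjAcBQFVVNvi3qEsrRnWXwbhHOmzWnctPHPVkPu-4vBQ=='

# urlsafe_b64decode understands '-' and '_' directly ...
urlsafe = base64.urlsafe_b64decode(s)

# ... which is equivalent to translating to the standard alphabet first.
standard = base64.b64decode(s.replace('-', '+').replace('_', '/'))

print(urlsafe == standard)  # -> True
```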
The byte array in Scala ends in (..., -2, 47, 5)
and the one generated in Python ends in (..., -18, 47, 5).
Note that there is a single difference at the end of the array.
I am consuming a third-party service for downloading images, but the body of its response includes HTML plus Base64 (not sure) image content on top. The response has content type
image/jpeg; charset=utf-8
Example response:
����JFIF``��C $.' ",#(7),01444'9=82<.342��C 2!!22222222222222222222222222222222222222222222222222��"�� ���} ���w!1AQaq"2�B���� #3R�br�<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" ""><html xmlns=""><head></head><body onload="initslide('method1,method2,method3', '');"> // More html goes here</body></html>
And service call:
var params = {
    url: serviceUrl,
    form: form,
    headers: headers
};
request.post(params, function(error, response, body) {
    if (error) {
        console.error("Error:", error);
        return;
    }
    callback(body); // In body i am getting above response
});
Now, I am only interested in downloading image portion of it and save it on cloud as png/jpeg format. Any idea how to achieve this in node.js.
I have a string:
RP581147238IN which gets encoded as
A3294Fc0Mb0V1Tb4aBK8rw==
and another string:
RP581147239IN which gets encoded as
A3294Fc0Mb1BPqxRDrRXjQ==
But after spending a day, I still cannot figure out what is the encoding process.
The encoded string looks like its base64 encoded.
But when I decode it:
base64.decodestring("A3294Fc0Mb0V1Tb4aBK8rw==")
it looks like this:
\x03}\xbd\xe0W41\xbdA>\xacQ\x0e\xb4W\x8d
The base 64 decoded string now is looking like a zlib compressed string
I've tried to further use zlib decompression methods but none of them worked.
import zlib, base64
rt = 'A3294Fc0Mb1BPqxRDrRXjQ=='
for i in range(-50, 50):
    try:
        print(zlib.decompress(base64.decodestring(rt), i))
        print("{} worked".format(i))
        break
    except:
        pass
But that did not produce any results either.
Can anybody figure out what is the encoding process used here. @Nirlzr, I am looking at you for the heroic answer you provided in Reverse Engineer HTTP request.
People on stackoverflow helped me with a base64 decoding in Perl but I would like to have the script in reverse :
use strict;
use warnings;
use MIME::Base64;

my $str = 'CS20UumGFaSm0QXZ54HADg';
my @chars = split //, decode_base64 $str;
my @codes = map ord, @chars;
print "@codes\n";
Output = 9 45 180 82 233 134 21 164 166 209 5 217 231 129 192 14
Now I would like to do the reverse: take those codes as input and get
CS20UumGFaSm0QXZ54HADg as the output. I’ve been trying for some hours but can’t seem to get it right.
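The reverse direction is just Base64-encoding the byte values; a minimal Python sketch of the round trip (in Perl, MIME::Base64's `encode_base64` does the same job):

```python
import base64

# The byte values produced by decoding the original string.
codes = [9, 45, 180, 82, 233, 134, 21, 164, 166, 209, 5, 217, 231, 129, 192, 14]

# Pack the values into bytes, encode, and strip the '=' padding
# to match the unpadded input string.
encoded = base64.b64encode(bytes(codes)).decode('ascii').rstrip('=')
print(encoded)  # -> CS20UumGFaSm0QXZ54HADg
```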
Created on 2019-09-27 17:26 by Kit Choi, last changed 2019-09-30 09:20 by Kit Choi. This issue is now closed.
I expect the following test to fail, because an "error" is not a "failure".
Unexpectedly, the test passes:
```
class TestFailure(unittest.TestCase):
@unittest.expectedFailure
def test_expected_failure(self):
raise TypeError() # for example, a typo.
```
```
$ python -m unittest test_main
x
----------------------------------------------------------------------
Ran 1 test in 0.000s
OK (expected failures=1)
```
This behaviour has existed since Python 2.7, and is still true for Python 3.8.0b1.
A function can fail to return an expected object by either returning a wrong object or raising an (unexpected) exception. The assertXyz methods, which ultimately raise AssertionError or something similar, are mostly about catching the first kind of failure, but tests should also catch and report the second kind. The traceback shows the kind of failure. The assertXyx failures add additional details after the traceback.
import unittest
class T(unittest.TestCase):
def test_f(self): raise TypeError()
unittest.main()
# Properly results in
Traceback (most recent call last):
File "F:\Python\a\tem4.py", line 4, in test_f
def test_f(self): raise TypeError()
TypeError
----------------------------------------------------------------------
Ran 1 test in 0.050s
FAILED (errors=1)
For your test:
class T(unittest.TestCase):
def test_f(self): raise TypeError()
If you run this test with unittest test runner, you should get this result:
E
======================================================================
ERROR: test_f (test_main.T)
----------------------------------------------------------------------
Traceback (most recent call last):
File "test_main.py", line 5, in test_f
def test_f(self): raise TypeError()
TypeError
----------------------------------------------------------------------
Ran 1 test in 0.000s
FAILED (errors=1)
I expect to get this behaviour even if the test is decorated with unittest.expectedFailure. However, currently we get a success.
Scenario:
You create a class named Duck with a method "quack". Then you added a test, and test needs to call Duck.quack.
Later on for whatever reason, you need to decorate the test with expectedFailure. The test passes with the expected failure.
Then you rename the "quack" method to "walk", but you forget to update the test. Now the test is actually failing with an AttributeError, but you won't notice it because expectedFailure silences it.
In this scenario, it is important to differentiate a "test error" and a "test failure". A test has four possible statuses: success, failure, error, skipped. I expect unittest.expectedFailure to make "failure" a "success" and a "success" a "failure", and it should leave "error" and "skipped" unchanged.
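The current behaviour can be reproduced programmatically; a minimal sketch using the unittest API:

```python
import io
import unittest

class T(unittest.TestCase):
    @unittest.expectedFailure
    def test_error(self):
        raise TypeError()  # an error, not an assertion failure

suite = unittest.TestLoader().loadTestsFromTestCase(T)
result = unittest.TextTestRunner(stream=io.StringIO()).run(suite)

# The TypeError is recorded as an expected failure, not as an error,
# so the run is considered successful:
print(len(result.expectedFailures), len(result.errors), result.wasSuccessful())
# -> 1 0 True
```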
Please consider reopening this issue.
P.
A.
I think Python does differentiate "test error" and "test failure" such that a test outcome state can be one of these: success, failure, error, skipped. One could refine these to six: expected success, unexpected success, expected failure, unexpected failure, error, skipped.
For example, in the documentation for failureException:
* failureException: determines which exception will be raised when
the instance's assertion methods fail; test methods raising this
exception will be deemed to have 'failed' rather than 'errored'.
Another evidence: unittest.runner.TextTestResult, there are methods called "addSuccess", "addError", "addFailure", "addSkip", "addExpectedFailure" and "addUnexpectedSuccess".
For example, this test outcome is marked as "FAILED":
def test(self):
x = 1
y = 2
self.assertEqual(x + y, 4)
======================================================================
FAIL: test (test_main.T)
----------------------------------------------------------------------
Traceback (most recent call last):
File "test_main.py", line 9, in test
self.assertEqual(x + y, 4)
AssertionError: 3 != 4
But the test outcome for this test is "ERROR":
def test(self):
    x = 1
    y = 2 + z  # NameError
    self.assertEqual(x + y, 4)
======================================================================
ERROR: test (test_main.T)
----------------------------------------------------------------------
Traceback (most recent call last):
File "test_main.py", line 8, in test
y = 2 + z # NameError
NameError: global name 'z' is not defined
The issue here is that expectedFailure converts "error" to "success", which is not expected and causes decorated tests to become unmaintained. While the rest of unittest differentiates "error" and "failure", expectedFailure does not. This is either a bug in the behaviour of expectedFailure, or a bug in the documentation for not making clear that an unexpected error will be treated as an expected failure (which I think is wrong).
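A minimal self-contained sketch of the scenario (the Duck/quack names are taken from the description above; this is illustrative, not part of the original report):

```python
import unittest

class Duck:
    def walk(self):  # renamed from "quack"; the test below was never updated
        pass

class T(unittest.TestCase):
    @unittest.expectedFailure
    def test_quack(self):
        Duck().quack()  # raises AttributeError -- an error, not a failure

suite = unittest.defaultTestLoader.loadTestsFromTestCase(T)
result = unittest.TestResult()
suite.run(result)

# The AttributeError is silently recorded as an expected failure:
print(len(result.errors), len(result.expectedFailures), result.wasSuccessful())  # 0 1 True
```

Note that the AttributeError never shows up in result.errors, so the stale test keeps passing.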
See issue38320 for documentation change request
https://bugs.python.org/issue38296
As long as it works on Windows, it’s Ok for me.
When pushed on mob I'll adapt my companion win32/Makefile to pass TESTED_TCC as suggested.
C.
From: Tinycc-devel [mailto:tinycc-devel-bounces+eligis=address@hidden] On Behalf Of avih
Sent: Wednesday, June 19, 2019 18:33
To: Petr Skočík; address@hidden
Subject: Re: [Tinycc-devel] test 104 fails on windows: missing mkstemps
As usual, I forgot to attach the patch. Attached now.
On Wednesday, June 19, 2019 7:28 PM, avih <address@hidden> wrote:
If there are no objections to the attached patch, I'll push it to mob.
In a nutshell: it creates and executes a shell script, where the tcc binary path (and arguments) are taken from the Makefile (which does `export TESTED_TCC = $(TCC)`). It was tested on Ubuntu and Windows (cygwin).
While I think it's better than before, it's still rather awkward, and at some stage we'll probably want to compile cFileContents directly from the makefile and then execute the script's commands from the Makefile as well.
The patch replaces everything before cFileContents with this:
#include <stdio.h>
#include <stdlib.h>
#include <errno.h>
/*
* requires the following in PATH: sh, touch, cat, nm, gawk, sort.
* we use CWD for temp files at main() and at the shell script because:
* - mktemp (shell) is non standard, mkstemps (API) is unavailable on windows.
* - we choose temp names which don't require quoting in shell or C strings.
* - a relative path with forward slashes is identical on *nix/windows
* (FILE APIs, shell, tcc CLI arguments), but absolute path on windows will
* require conversions between shell/API/CLI representations.
*/
#define SCRIPT_PATH "./tmp-tcc104.sh"
#define TMP1 "./tmp-tcc104.1"
#define TMP2 "./tmp-tcc104.2"
int main(int argc, char **argv)
{
    extern char const cfileContents[];
    const char *tcc;
    FILE *f;
    int r;

    /* TESTED_TCC is bin + args, should stay unquoted */
    if (!(tcc = argc > 1 ? argv[1] : getenv("TESTED_TCC")))
        return perror("unknown tcc executable"), 1;

    if (!(f = fopen(SCRIPT_PATH, "wb")))
        return perror("cannot create script"), 1;

    fprintf(f,
            "set -e\n"
            "LC_ALL=C; export LC_ALL\n"
            "tmp1=%s; touch $tmp1\n"
            "tmp2=%s; touch $tmp2\n"
            "cat <<\\CFILE > $tmp1\n"
            "%s\n"
            "CFILE\n"
            "%s -c -xc $tmp1 -o $tmp2\n"
            "nm -Ptx $tmp2 > $tmp1\n"
            "gawk '{ if($2 == \"T\") print $1 }' $tmp1 > $tmp2\n"
            "sort $tmp2\n"
            ,
            TMP1, TMP2, cfileContents, tcc);
    fclose(f);

    r = system("sh " SCRIPT_PATH);

    remove(TMP1);
    remove(TMP2);
    remove(SCRIPT_PATH);
    return r;
}
char const cfileContents[]=
"inline void inline_inline_2decl_only(void);\n"
...
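The `LC_ALL=C` line in the generated script matters because sort's output order is locale-dependent, which is exactly the diff discussed further down this thread. A quick sketch (the UTF-8 result depends on which locale the caller has active):

```shell
# Under C collation sorting is byte-wise, so '2' (0x32) sorts before '_' (0x5f);
# many UTF-8 locales ignore '_' when collating and reverse that order.
printf 'inst_extern_inline_postdeclared\ninst2_extern_inline_postdeclared\n' > tmp-names.txt
LC_ALL=C sort tmp-names.txt   # inst2_... first (byte-wise order)
sort tmp-names.txt            # order depends on the caller's locale
rm tmp-names.txt
```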
On Wednesday, June 19, 2019 1:49 PM, avih <address@hidden> wrote:
Peter,
You're correct, but on windows tcc isn't posix compliant, hence no mkstemp.
Additionally, `system(...)` on windows invokes the windows shell, which typically is not the shell where `make` is invoked (and which make uses), and it has different path expectations than the test shell.
As for pipe failure, you could (and I'm currently doing while working on it) use the posix `set -e` and execute the commands individually while still failing on errors. I'm aware that `set -e` can be a can of worms, but for this simple script it doesn't pose hidden pitfalls that I can identify.
As for `mktemp` not being standard, I wasn't aware of that, thanks. For now I'll use it, but I won't post it for review before I have a reasonable alternative if it's not available.
Alternatively, if you have a version which uses a shell script (and the tcc path taken from the TCC env var which the makefile sets anyway) then you can push it instead of me keep working on it.
Avi
On Wednesday, June 19, 2019 1:17 PM, Petr Skočík <address@hidden> wrote:
The test started out as a shell script. It could have been a pipeline,
but what I dislike about pipelines in POSIX shell is you can't
really error check them well because if a link fails and the last one
succeeds, you get a success.
That is why, as I wrote, I ended up invoking the links from C
using tempfiles as glue. You can theoretically do the same from a shell,
but mktemp isn't standard, whereas you can create tempfiles from standard C
(mkstemps() is POSIX, tmpnam() is ISO C), and the C code isn't much
longer than the shell code.
Petr
On 6/19/19 11:33 AM, avih wrote:
> Hi Christian and thanks for the quick response.
>
> I can confirm that your patch fixes the issue (makes the diff go away).
>
> However, before the patch it definitely invoked the cygwin sort because
> my PATH only has cygwin's /bin and /local/bin (translated to correct
> windows paths when test 104 executes) and prefixed with TCC's root path.
> I tested this by printing getenv("PATH") at 104_inline_test.c .
>
> I can also confirm the issue simply from cygwin's shell that `sort`
> changes the order of 104_inline_test.expect:
> $ cat tests/tests2/104_inline_test.expect | sort # <-- inst_... is
> before inst2_... i.e. MODIFIED.
>
> while
>
> $ cat tests/tests2/104_inline_test.expect | LC_ALL=C sort # <-- inst2...
> is before inst_... i.e. unmodified = OK
>
> Alternative to your patch, adding `export LC_ALL = C` at the beginning
> of tests/tests2/Makefile also fixes the issue for me.
>
> Running `locale` at the shell prints=
>
> Please don't push this patch to mob, because it adds another potential
> complexity by mixing unix/windows tools.
>
> I'll try to modify 104_inline_test.c such that it uses only one system
> invocation of shell -c "<commands>" so that it effectively becomes a
> shell script, and post the result of my attempt here to get some feedback.
>
> Thanks again,
> Avi
>
>
> On Wednesday, June 19, 2019 8:20 AM, Christian Jullien
> <address@hidden> wrote:
>
>
> Avih,
> I quickly hacked 104 test to start to make it work but I’m not the
> author of this test. I let Petr improve it.
> I fully agree with you that mixing C code and calling system (which
> forks a cmd.exe on Windows) to finally execute gnu commands that require
> Cygwin is a BAD idea!!
> As you said, it’s much easier if 104 test generates a .o which the whole
> ‘unix’ infrastructure test will check. I let the maintainer of this test
> decide what to do.
>
> About the diff, I don’t understand quite well what happens. Among other,
> it calls system(“sort …”); which spawns cmd.exe and then execute first
> sort seen in path. Depending on your PATH it can be Windows sort (if
> Windows path comes first) or Cygwin version if this one comes first.
>
> Can you please try this diff which works as good for me but now forces
> sort Windows version:
>
> diff --git a/tests/tests2/104_inline_test.c b/tests/tests2/104_inline_test.c
> index cb288d2..d191602 100644
> --- a/tests/tests2/104_inline_test.c
> +++ b/tests/tests2/104_inline_test.c
> @@ -5,8 +5,8 @@
>
> #if __linux__ || __APPLE__
> #define SYS_WHICH_NM "which nm >/dev/null 2>&1"
> +#define SYS_SORT "sort"
> #define TCC_COMPILER "../../tcc"
> -#define SYS_AWK
>
> char c[]="/tmp/tcc-XXXXXX"; char o[]="/tmp/tcc-XXXXXX";
> static int mktempfile(char *buf)
> @@ -15,6 +15,7 @@ static int mktempfile(char *buf)
> }
> #elif defined(_WIN32)
> #define SYS_WHICH_NM "which nm >nul 2>&1"
> +#define SYS_SORT "%systemroot%\\system32\\sort /LOCALE C"
>
> #if defined(_WIN64)
> #define TCC_COMPILER "..\\..\\win32\\x86_64-win32-tcc"
> @@ -72,7 +73,7 @@ int main(int C, char **V)
> sprintf(buf, "%s -c -xc %s -o %s", V[1]?V[1]:TCC_COMPILER, c,
> o); if(0!=system(buf)){ r=1;goto out;}
> sprintf(buf, "nm -Ptx %s > %s", o, c); if(system(buf))
> {r=1;goto out;}
> sprintf(buf, "gawk '{ if($2 == \"T\") print $1 }' %s > %s", c,
> o); if(system(buf)) {r=1;goto out;}
> - sprintf(buf, "sort %s", o); if(system(buf)) {r=1;goto out;}
> + sprintf(buf, "%s %s", SYS_SORT, o); if(system(buf)) {r=1;goto out;}
> out:
> remove(c);
> remove(o);
>
> *From:*Tinycc-devel
> [mailto:tinycc-devel-bounces+eligis=address@hidden] *On Behalf Of
> *avih
> *Sent:* Tuesday, June 18, 2019 21:01
> *To:* address@hidden; address@hidden; 'Michael Matz'
> *Subject:* Re: [Tinycc-devel] test 104 fails on windows: missing mkstemps
>
> Well, the diffs are not really diffs. They just moved. Looks like `sort`
> didn't work as expected, or maybe it's some locale issue (mine is
> en_US.UTF-8 at cygwin, and en-US at windows).
>
> A script should handle this too, possibly with LC_ALL=C (and make sure
> the reference file is generated with the same sort locale).
>
> If someone could split it to program+script then I can test the test in
> various use cases and make adjustment if required (I'm terrible with
> Makefiles but reasonably useful with shell).
>
> On Tuesday, June 18, 2019 9:50 PM, avih <address@hidden> wrote:
>
> After commit d39c49db Remove empty conditional _WIN32 code
>
>
> and some hacking of the code (it's an unhealthy mix of basically running
> a shell script from a program compiled using tcc for windows), I get the
> following 2 diffs:
>
>
> +inst_extern_inline_postdeclared
> +inst_extern_inline_predeclared
>
>
>
> and
>
>
> -inst_extern_inline_postdeclared
> -inst_extern_inline_predeclared
>
>
>
> I'm running it in a cygwin environment and the tools (nm, sort, gawk)
> are cygwin tools, while the tested tcc is normal mingw tcc for windows
> (which I build in a reproducible way using my script).
>
>
>
> Regardless of these two diffs, I think the test should be composed of a
> program and a normal shell script (which uses mktemp, awk, sort etc), so
> that the paths are consistent between the tools.
>
>
> Also, the TCC path is hardcoded at the test, but in fact it's parametric
> at the makefile as $(TCC), so it's better to use that instead (but then
> there are forward/backward slash issues which need to be handled too,
> because system(...) in win32 expects backward slashes, but $(TCC) at the
> makefile has forward slashes). Making this a program + a script should
> implicitly solve this issue as well.
>
>
>
> After all, a working shell+tools is assumed for this test anyway, but
> the current way of using system(...) from a win32 program (compiled
> using tcc for windows) invokes a windows shell which can be inconsistent
> with the actual shell where `make` runs.
>
>
> Avi
>
>
>
>
>
>
> On Tuesday, June 18, 2019 12:11 AM, avih <address@hidden> wrote:
>
> Hmm.. I now see that test 104 uses signal and nm, so it might require
> some effort to make it work on windows.
>
> Nevertheless, considering the recent breakage specifically on windows
> from related code, I think it would be beneficial if this test could be
> made to work on windows,
>
> On Monday, June 17, 2019 11:54 PM, avih <address@hidden> wrote:
>
> Wouldn't it be better to just create a known/fixed file instead?
> (assuming the test doesn't need explicitly mkstemps, I didn't look at
> its actual code).
>
> On Monday, June 17, 2019 11:50 PM, Christian Jullien <address@hidden>
> wrote:
>
> Yes it has been previously reported.
>
> Michael, as said in a private mail, I agree with you that this test
> should be skipped on Windows.
>
> C.
>
> *From:*Tinycc-devel
> [mailto:tinycc-devel-bounces+eligis=address@hidden] *On Behalf Of
> *avih
> *Sent:* Monday, June 17, 2019 22:46
> *To:* Tinycc-devel Mailing List
> *Subject:* [Tinycc-devel] test 104 fails on windows: missing mkstemps
>
> Test 104 log on windows (both tcc32 and tcc 64):
>
> Test: 104_inline_test...
> --- 104_inline_test.expect 2019-06-17 23:42:00.162697100 +0300
> +++ 104_inline_test.output 2019-06-17 23:42:35.531550400 +0300
> @@ -1,25 +1,2 @@
> -extern_extern_postdeclared
> -extern_extern_postdeclared2
> -extern_extern_predeclared
> -extern_extern_predeclared2
> -extern_extern_prepostdeclared
> -extern_extern_prepostdeclared2
> -extern_extern_undeclared
> -extern_extern_undeclared2
> -extern_postdeclared
> -extern_postdeclared2
> -extern_predeclared
> -extern_predeclared2
> -extern_prepostdeclared
> -extern_undeclared
> -extern_undeclared2
> -inst2_extern_inline_postdeclared
> -inst2_extern_inline_predeclared
> -inst3_extern_inline_predeclared
> -inst_extern_inline_postdeclared
> -inst_extern_inline_predeclared
> -main
> -noinst_extern_inline_func
> -noinst_extern_inline_postdeclared
> -noinst_extern_inline_postdeclared2
> -noinst_extern_inline_undeclared
> +104_inline_test.c:30: warning: implicit declaration of function 'mkstemps'
> +tcc: error: undefined symbol 'mkstemps'
> make[1]: *** [Makefile:70: 104_inline_test.test] Error 1
> Test: 105_local_extern...
> make[1]: Target 'all' not remade because of errors.
>
>
>
>
>
>
>
https://lists.gnu.org/archive/html/tinycc-devel/2019-06/msg00070.html
If we want to break a number, breaking it into 3s turns out to be the most efficient:

2^3 < 3^2
4^3 < 3^4
5^3 < 3^5
6^3 < 3^6
...

Therefore, intuitively, we want as many 3s as possible.

If a number % 3 == 0, we just break it into 3s -> the product is Math.pow(3, n/3).

As for numbers % 3 == 1, we don't want the 'times 1' at the end; borrowing a 3 is a natural thought. If we borrow a 3, the 3 plus the extra 1 can be divided into:

case 1: 1 + 2 -> with the extra 1, we have 2*2 = 4
case 2: (0) + 3 -> with the extra 1, we have 4

It turns out these two cases have the same result, so for numbers % 3 == 1 the result would be Math.pow(3, n/3-1)*4.

Then we have the numbers % 3 == 2 left. Again, we try to borrow a 3:

case 1: 1 + 2 -> with the extra 2, we have 1*5 or 3*2 => 3*2 is better
case 2: (0) + 3 -> with the extra 2, we have 2*3 or 5 => 2*3 is better

We actually just end up not borrowing at all! So we can just *2 if we have an extra 2 -> the result would be Math.pow(3, n/3)*2.

Then we have a couple of corner cases to deal with, since so far we have only looked at numbers larger than 3. Luckily, we only have 2 and 3 left, which are pretty easy to figure out.

Thus my final solution is:

public class Solution {
    public int integerBreak(int n) {
        if (n <= 3) return n - 1; // assuming n >= 2
        return n % 3 == 0 ? (int) Math.pow(3, n / 3)
             : n % 3 == 1 ? (int) Math.pow(3, n / 3 - 1) * 4
             : (int) Math.pow(3, n / 3) * 2;
    }
}
I just want to know why when should break it into 3s as more as we can ?
is there any theorem or how to prove it?
Okay, I will try to provide an informal proof of why we want as many
3s as possible.
Firstly, it is easy to see why the result is
n - 1 for
n < 4. Therefore, we will only observe the case
n >= 4.
The most optimal way to represent
4 is
2 * 2, again easy to see. What if
n > 4?
We claim that
3 * (n - 3) > n for all
n >= 5. This is indeed the case, since:
3 * (n - 3) > n
3n - 9 > n    | -n
2n - 9 > 0    | +9
2n > 9        | /2
n > 4.5

which is always true, since we assumed n >= 5.
With this proof we have actually shown that for any integer bigger than 4 it is actually better to decompose it as 3 times something instead of leaving it as it is.
Now we can recursively solve for
x = n - 3. The new cases are:
1. x == 2 - in this case we leave it as it is, which means that n % 3 == 2. This is covered by (int)Math.pow(3, n/3)*2 in his solution.
2. x == 3 - corresponds to n % 3 == 0, which is covered by (int)Math.pow(3, n/3).
3. x == 4 - corresponds to n % 3 == 1, which is covered by (int)Math.pow(3, n/3-1)*4, meaning that we multiply by 4 instead of 3 once.
4. x >= 5 - in this case we can recursively solve for x.
A natural question would be why we chose exactly
3 and not any other number? It is indeed true that for any positive integer
x >= 2 we have
x * (n - x) >= n for
n big enough. However, it wouldn't make sense to choose
x >= 5, as we have already proved that it is better to decompose it using as many
3s as possible. So the only candidates left are
x = 2 and
x = 4:
1. x == 4 - we will have something like 4 * 4 * 4 ... * something. Let's take only the first two factors: 4 * 4 < 3 * 3 * 2, so we can substitute all of the fours with this expression, meaning that 3 is indeed the better choice. For the remaining non-three factors, we can sum them up and use the same recursion.
2. x == 2 - similarly, 2 * 2 * 2 < 3 * 3.
The proof is not complete, but should be enough for you to understand the intuition.
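To double-check the closed-form formula against an exhaustive search, here is a small sketch (the class name is hypothetical; the brute force is a straightforward DP, not from the original post):

```java
public class IntegerBreakCheck {
    // Closed-form formula from the discussion above
    static int integerBreak(int n) {
        if (n <= 3) return n - 1; // assuming n >= 2
        return n % 3 == 0 ? (int) Math.pow(3, n / 3)
             : n % 3 == 1 ? (int) Math.pow(3, n / 3 - 1) * 4
             : (int) Math.pow(3, n / 3) * 2;
    }

    // Brute-force DP: best product when breaking i into at least two parts
    static int bruteForce(int n) {
        int[] dp = new int[n + 1];
        dp[1] = 1;
        for (int i = 2; i <= n; i++)
            for (int j = 1; j < i; j++)
                dp[i] = Math.max(dp[i], Math.max(j * (i - j), j * dp[i - j]));
        return dp[n];
    }

    public static void main(String[] args) {
        for (int n = 2; n <= 30; n++)
            if (integerBreak(n) != bruteForce(n))
                throw new AssertionError("mismatch at n=" + n);
        System.out.println("formula matches brute force for n in [2, 30]");
    }
}
```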
https://discuss.leetcode.com/topic/43099/share-some-thought-process-about-this-problem
You can use the Scanner class to simplify reading text files quite a bit. In this tutorial we'll see how to do it, and I'll also explain how to locate your file on the disk and we'll look at a little gotcha that might catch you out if you want to read numbers from your file. Plus, a little insight into some Java terminology, just in case you're sufficiently young that the word "typewriter" sounds to you much like "gramophone" does to me.
When the video is running, click the maximize button in the lower-right-hand corner to make it full screen.
Code for this tutorial:
App.java:
import java.io.File;
import java.io.FileNotFoundException;
import java.util.Scanner;

public class App {
    public static void main(String[] args) throws FileNotFoundException {
        //String fileName = "C:/Users/John/Desktop/example.txt";
        String fileName = "example.txt";

        File textFile = new File(fileName);
        Scanner in = new Scanner(textFile);

        int value = in.nextInt();
        System.out.println("Read value: " + value);
        in.nextLine();

        int count = 2;
        while (in.hasNextLine()) {
            String line = in.nextLine();
            System.out.println(count + ": " + line);
            count++;
        }

        in.close();
    }
}
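The gotcha mentioned above — nextInt() reads only the digits and leaves the trailing newline in the buffer, so the very next nextLine() returns an empty string — can be seen in isolation with this small sketch (hypothetical class name; a Scanner over a String behaves the same as one over a file):

```java
import java.util.Scanner;

public class GotchaDemo {
    public static void main(String[] args) {
        Scanner in = new Scanner("42\nhello\n");
        int value = in.nextInt();
        String leftover = in.nextLine(); // "" -- the rest of the first line
        String next = in.nextLine();     // "hello"
        System.out.println(value + " [" + leftover + "] " + next); // 42 [] hello
        in.close();
    }
}
```

This is why App.java calls in.nextLine() once after in.nextInt() before entering the read loop.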
https://caveofprogramming.com/java-video/java-for-complete-beginners-video-part-33-reading-files.html
Download SQL SERVER Database Coding Standards and Guidelines Complete List
Just like with my previous series, SQL Server Interview Questions and Answers Complete List Download, I have received many comments and emails regarding this series. Once I go through all the emails and comments, I will make a mini series from them.
Download SQL SERVER Database Coding Standards and Guidelines Complete List
Complete Series of Database Coding Standards and Guidelines
SQL SERVER Database Coding Standards and Guidelines – Introduction
SQL SERVER – Database Coding Standards and Guidelines – Part 1
SQL SERVER – Database Coding Standards and Guidelines – Part 2
SQL SERVER Database Coding Standards and Guidelines Complete List Download
Other popular Series
SQL Server Interview Questions and Answers Complete List Download
SQL SERVER – Data Warehousing Interview Questions and Answers Complete List Download
Reference : Pinal Dave ()
Useful Notes
Hi Pinal,
That is a Very Very Good Compiled Document. Appreciate it
However, just one things i noticed. Please correct me if i am wrong.
Stored Procedure Naming Convention
– Is it feasible to start the User Created Stored procedure Name with “sp” ,since SQL Server looks for the Stored Procedure in the Master Database and assumes it to be as a System defined Stored Procedure?
I know you have mentioned not to start with “sp_”, but just a thought.
Thanks
Deepan S.
Thanks Deepan,
As far as I know ‘SPName’ is no problem but ‘sp_’. When I reviewed this document and that particular issue with other SQL Expert, we all agreed that it is fine.
If anybody has other opinion or documentation. I am willing to adopt, but at this moment I think ‘spNAME’ (no underscore) is fine.
Regards,
Pinal Dave (SQLAuthority.com)
Hi Pinal Dave,
i wanted to know is there any software exist which checks for database standards for a given stored procedure. please let me know this thanks and
Regards
Santosh Kakani
very good articles keep doing things more and more
Its Really Good Article…
Best Of Luck for Bright Future…..
really great articals to be referred by any .net programmer.
thanks to pinal dave……..
Really fantastic…!
hi pinal,
This site is very much usefull
thanks to u
Thanks,
Vinod Dhone
Hello Mr. Dave,
Your articles are amazing and really very useful. I am a beginner in SQL BI and looking for resources, interview questions. It would be a great help if you can point me to those resources.
Thanks
Hi. Nice articles but I am interested in why you would want to prefix all stored procedures with ‘sp’. In the case of stored procedures I understand there is no impact but it seems superfluous and makes the natural reading of code less easy.
It reminds me somewhat of the polish notation used by many VB programmers that in the end leads to problems. Luckily ones that don’t currently afflict SQL.
Hello Mr. Dave,
Your articles are amazing and really very useful. This should
be done by each professional to encourage the begginners.
your work is really appreciable.
Go ahead
Thanks
sachin
Coding standards are very usefull. i like it very much. keep it up boyz.
Hi,
The articals are really very useful,keep uploading these type of DOC
Hi,
I think you should add the (NOLOCK) attribute as well.
eg:
SELECT UserID FROM [User] (NOLOCK)
or
SELECT u.UserID,s.SessionID FROM [User] u (NOLOCK) INNER JOIN [Session] s (NOLOCK) ON u.UserID=s.UserID
Jaysam
Its very good work done by you people.
I really thankful to u……..
Keep Uploading such good technical stuff. Thanks
this is very important .and very easy way to understanding
its nice
Pingback: SQL SERVER - Pre-Code Review Tips - Tips For Enforcing Coding Standards Journey to SQL Authority with Pinal Dave
hi Pinal Dave, I just graduated with a degree in network communication and management i would like to know of any more info to help me learn the sql server to further my carrer. i am studying now for my ccna and want all the expeirence i can get to futher myself if u can help i would greatly appreciate the info to learn maybe some books you could recommend. i am egar to learn from someone who has such a vast knowledge of this field
thanks R.Haynes
Excellent Web Site.
Thank you, I really appreciate your work. I hope one day I will be able to give back too.
I want to appreciate the author of this documents
keep the gud work up.
hello pinal Dave
i am really happy with u r articles
Good Pinal Dave,
Keep updating the things……….
I want to send some new concept of performance tunning and new way of using sql server 2005 coding stratergy. I will appreciate you if you can provide me your mail ID so that i can send you the document which i Collect from my learnings………
Thanks & Regards
Shashi Kant Chauhan
how can i count number of column in a row with same data?
i have a table with column empid,month, day1, day2 ………….day31 i want to count number of ‘p’ and ‘a’ of that employee id?
select len(empid)-len(replace(empid,'p','')) as no_of_p,
       len(empid)-len(replace(empid,'a','')) as no_of_a
from your_table
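The len/replace counting trick in the reply above can be sanity-checked with a quick sketch — shown here with SQLite from Python purely for illustration; T-SQL's LEN/REPLACE behave the same way for this example:

```python
import sqlite3

con = sqlite3.connect(":memory:")
# The trick counts occurrences of a character by measuring how much
# shorter the string becomes once that character is removed.
row = con.execute(
    "SELECT LENGTH(s) - LENGTH(REPLACE(s, 'p', '')) AS no_of_p,"
    "       LENGTH(s) - LENGTH(REPLACE(s, 'a', '')) AS no_of_a "
    "FROM (SELECT 'papa' AS s)"
).fetchone()
print(row)  # (2, 2)
```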
You are really good man. Your effort with no charge is really appreciable.
God bless you.
Rana
i am not written this for u .because i really dont no which word i say to u. no word is expressed to u.U R a very tallented person ,u have a great capabilities to done all u r effort.u also done a work that needed all jobseekers like me.
i just searches SQL related questions with ansers ,and i get this wonderful and easy solution from u.
thanks ,may have 2 requst u plz contact with me.
best of luck u r next …………………………………………………………………..
till end
Hi!
Its very nice and helpful link for me and all sql users.
regards,
Ganesh Jamdurkar
Hi!
Its very nice and helpful link for me and all sql users.
regards,
Faisal Qureshi
Wonderful website..appreciate the effort.
Arvind
hi
it is very useful and necessary to all who are all using sql server. keep on adding tips
regards,
ravi
Hi pinaldave,
Very usefull article keep going.
Pingback: SQL SERVER - Add Column With Default Column Constraint to Table Journey to SQL Authority with Pinal Dave
CONNECTION OF SQL SERVER 2005 TABLES WITH VISUAL STUDIO 2005
Hi,
wha is the command used to count foreign key in a table.
To know the foreign keys
EXEC sp_fkeys 'table_name'
Pingback: SQL SERVER - Popular Articles of SQLAuthority Blog Journey to SQL Authority with Pinal Dave
Hi Pinal,
Exist standars in SQL for creation views ?
Thanks
Cesar
thats very nice tips! lucky i found your article. that save a lot of my time to fix my problem.
it is nice to read ,very interesting.
hi everyone can you tell me the steps of how to disable procedure in sql 2000 and 2005
Hi Pinal Dave,
Surprising greatly. Keep good work. Now I am addicted this website.
Pingback: SQL SERVER - How to Rename Database Objects to Comply With Naming Conventions Journey to SQL Authority with Pinal Dave
Hi,
It is a very very good article. I never think these kind of questions. Thanks a lot for doing research on it and all the best for further research.
Regards
Sajitha
Hello,
Great document!
I noticed you recommend having your table names be ‘plural’. It seems there are those who are passionate about making them singular. I was wondering what your reasoning is for doing it your way.
Thanks
Frank
Its a fantastic job,your work will help many developers and God is almighty with you
Hi Pinal,
Really appreciate the work you have done.
Have a question for you in regards to SQL Server Database Coding Standards and Guidelines
Other than keeping within ANSI92 or being Oracle compatible why do you say “Do not use the identitycol or rowguidcol”?
Thanks keep up the good work!!
Zach
Hi Pinal
Really nice compilation. I am greately benifitted.
I was going thru your article on Coding Standards. You have mentioned
“Use BEGIN..END blocks only when multiple statements are present within a conditional code
segment.”
I think we should always use BEGIN…END blocks. Anyone else maintaining a code that I have written may put another line and lured by the indent think that he has put the statement inside the block. Later this may be a nightmare for the third person to debug.
hello sir,
can u tell why u r not use more than one oprimary key on a table
HI
I am preparing for interviews.
If you can send me latest interview questions on SQL and PL?SQL that would be great.
Site is very useful
Pingback: SQL SERVER - 2008 - Introduction to Policy Management - Enforcing Rules on SQL Server Journey to SQL Authority with Pinal Dave
Hi,
Great document!
I noticed you recommend having table names as ‘plural’ but table is an entity and i think an entity should always be in ‘singular’. Correct me if i am wrong
Thanks
Ashok
Hi!
very nice and helpful link.
Regards,
R.Wright
I agree with most of your guidelines. One that I don’t agree with, is using varchar instead of TEXT. TEXT results in a 16-byte only fixed column, while a varchar(max) could result in 8KB being accessed by the HDD and sent to the resultset. It just serves better to have a huge amount of data off-table and only accessed when needed. Even if a SELECT list is provided, excluding the varchar, the hard drive still has to read all of it into the page before producing the selective list, which is a waste.
Ordinarily, I try to assess the maxlength of a varchar and its usability, and if it’s going to be used infrequently and could be very large, it goes into a TEXT column.
One guideline I use that you didn’t mention, is to have keys at the front of the table, followed by fixed-length columns and variable length columns at the end of the table. Your assessment that varchars are faster to process than chars is only valid when the column length is large. Varchars require the application of a base address with an off-set to reach them. If one places fixed-length columns behind varchars, they, too, will be subject to an off-set. Setting relationships against columns behind varchars places a huge overhead burden on the server.
Just some thoughts.
Did you notice this point from BOL about TEXT datatype?
This feature will be removed in a future version of Microsoft SQL Server. Avoid using this feature in new development work, and plan to modify applications that currently use this feature. Use varchar(max), nvarchar(max) and varbinary(max) data types instead. For more information, see Using Large-Value Data Types.
I want list of pratical question of sql server 2000.
Where i will get?
PLEASE GIVE ME THE TIPS ABOU SQL CONTRAINT WORK
Hi Pinal,
Impressive stuff!
I am currently looking for a tool which verifies whether SQL standards are met in the in the SQL script. For example if a input a stored procedure script then i would like see a report on what all predefined coding standards not met in the stored procedure.
Do you know any tools available like that or let me know pointers to build that kind of a tool.
HI
I am preparing for interviews.
If you can send me latest interview questions on SQL and PL?SQL that would be great.
Site is very useful
hi
i am fresher.i don’t know sql server.please give more tips for sql server 2005 and sql server 2008
this is very good artical.
hi Pinal,
extremely helpful info in a very simple manner with simple examples!!!
that’s really great…
Very Useful..Thanks for that…
Pingback: SQLAuthority News - Best Downloads on SQLAuthority.com Journey to SQL Authority with Pinal Dave
Pingback: SQLAuthority News - Best Articles on SQLAuthority.com Journey to SQL Authority with Pinal Dave
Pingback: SQL SERVER - Database Interview Questions and Answers Complete List Journey to SQL Authority with Pinal Dave
thanks for such a nice supportive platform to enhance the skill and share their views for other awareness.
Really, Pinalkumar Dave u are providing a helpline to the SQL professionals.
Ajay Sinha
Good stuff. wondering if you have any conventions namespaces for database objects.
One of the best Sir,
With lots of respect!
Pingback: SQL SERVER – Weekly Series – Memory Lane – #032 | Journey to SQL Authority with Pinal Dave
http://blog.sqlauthority.com/2007/06/06/sql-server-database-coding-standards-and-guidelines-complete-list-download/
The minimalist guide to Spree Static Content
In many Spree projects where I have worked previously, it had become necessary to handle static pages like Help, Terms and Conditions, About Us, etc… In short, all those "stock pages" and some others that come out of the box with any eCommerce website.
I always recommend my colleagues to deliver a simple way to generate the pages, although I "LOVE" to write all the HTML with my own hands, it is more efficient to use Rails helpers or some other markup as HAML.
The point is, as consultants, it is our duty to deliver a stable, functional, installable and, somewhat recoverable product. To achieve this, the extension Spree Static Content has always worked for me because it is simple to configure and we can easily generate a task to regenerate all the "core" static pages of our project. It is especially useful for those cases where someone "accidentally" deletes their contents.
** lib/tasks/static_content.rake **
namespace :static_content do
  desc 'Reload static pages'
  task reload: :environment do
    Spree::Page.destroy_all

    pages = YAML.load_file(Rails.root.join('db', 'fixtures', 'static_content.yml'))
    controller_instance = ApplicationController.new

    pages.each_value do |data|
      data['body'] = controller_instance.render_to_string(template: data['body'])
      Spree::Page.create(data)
    end
  end
end
As you can see, the rake task is quite simple and consists of the following steps:
Delete existing pages:
This is a matter of taste, it is possible to separate the "default" static pages from any other with some scope or you are able to find specific pages you want to delete. In my case, I'm destroying all of them.
Read the information on pages:
For convenience, I always create a "setup" file separately which contains the necessary information for our pages such as Title, Slug, and the name of the View that we'll be using as our static page's Body.
Create an instance of a controller:
This has the purpose of delegating the rendering of the views to the proper object allowing us, as I said before, to use view helpers or another abstraction markup language like HAML.
Iterate each page:
For each entry we have in our configuration file, we generate the body of the page, letting our controller instance do the rendering; from there, we only need to create the Spree::Page.
Static Content Example config file:
*** db/fixtures/static_content.yml ***
privacy_policy:
  title: Privacy Policy
  body: 'spree/static_content/_privacy_policy'
  slug: '/privacy-policy'
  show_in_header: false
  show_in_footer: true
  position: 0
  visible: true
  layout: ''
  render_layout_as_partial: false
help:
  title: My Site Help
  body: 'spree/static_content/_help'
  slug: '/help'
  show_in_header: true
  show_in_footer: true
  position: 1
  visible: true
  layout: ''
  render_layout_as_partial: false
Example page:
app/views/spree/static_content/_privacy_policy.html.haml
%h2 General Statement of Principles
...
= image_tag 'banner.png', width: 100, height: 10, alt: 'Policy Banner'
%h3 For any question, please send us a mail to
= link_to 'info@mysite.com', 'mailto:info@mysite.com', alt: 'Info'
%span Directly, or send your question:
= form_tag do
  = text_field_tag :name
  ....
That's all, we just need to go to a terminal and run:
$ be rake static_content:reload
And… voilà! We won't have headaches trying to update, maintain or recover any "static page" we deliver that somebody has erased "accidentally".
Questions, comments and suggestions are always welcome:
twitter: @mumoc
github: mumoc
http://blog.magmalabs.io/2014/12/02/the-minimalist-guide-to-spree-static-content.html
Docker-ize Datadog with agent containers
Docker is an exciting technology that offers a different approach to building and running applications thanks to a clever combination of linux containers (good for ops) and a git-like approach to packaging software (good for dev) so that your containers have everything they need to run without dependencies.
Many of you who are using Docker are embracing the Docker way and taking a container-only approach. As we release our new Docker integration, we don’t want to force you to break from a container-only strategy because of the traditional Datadog agent architecture. Instead, we’ve also embraced the Docker way and we’re pleased to announce a Docker-ized Datadog agent deployed in a container.
The Docker philosophy
First, a brief introduction to how infrastructure is set up with Docker. In Docker, each of your applications is isolated in its own container. The blueprint for a container is its Dockerfile, which is a set of steps to create the container. These steps build the standard binaries and libraries and install your application’s code and its dependencies such as Python, Redis, Postgres, etc.
The Docker engine then creates the actual container to run using namespaces and cgroups. These are two features found in recent versions of the Linux kernel used to isolate system calls and resource usage (CPU, memory, disk I/O, etc.) directly on your server. The end result is multiple containers on the server with each application thinking it is in its own machine by itself, without the overhead associated with fully virtualized machines.
The traditional Datadog set-up
Until Docker arrived, applications were built in virtual servers or directly on raw servers. In this case, you install the agent on your server and decide what applications and services you want to monitor in Datadog. If you want to send custom metrics to Datadog, you instrument your application with our Datadog version of StatsD, called DogStatsD. This set-up is illustrated below.
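Under the hood, DogStatsD metrics travel to the agent as small UDP datagrams in the StatsD line format, which Datadog extends with tags. As a rough sketch of that wire format — the helper below is illustrative, not Datadog's client API:

```python
def dogstatsd_line(name, value, metric_type="c", tags=None):
    """Format one metric in the DogStatsD wire format:
    metric.name:value|type|#tag1,tag2 ('c' = counter, 'g' = gauge)."""
    line = f"{name}:{value}|{metric_type}"
    if tags:
        line += "|#" + ",".join(tags)
    return line

# A real client would send these strings over UDP to the agent on port 8125:
print(dogstatsd_line("page.views", 1))                    # page.views:1|c
print(dogstatsd_line("cart.size", 7, "g", ["env:prod"]))  # cart.size:7|g|#env:prod
```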
https://www.datadoghq.com/blog/docker-performance-datadog/
nxppy 1.0
A python extension for interfacing with the NXP PN512 NFC Reader. Targeted specifically for Raspberry Pi and the EXPLORE-NFC module
nxppy
nxppy is a very simple Python wrapper for interfacing with the excellent NXP EXPLORE-NFC shield for the Raspberry Pi. It takes NXP’s Public Reader Library and provides a thin layer for detecting and reading the UID (unique identifier) of a Mifare RFID tag present on the reader.
This was based very heavily on NXP’s card_polling example code; the example code was only reorganized to be more suitable as an interface. NXP still retains full copyright and ownership of the example code. nxppy.c and the relevant Python setup files are distributed under the MIT license.
Installation
nxppy is available from pypi. Simply run:
pip install nxppy
Source
To install from source, use distutils:
python setup.py build install
Usage
Currently, the module supports one static method which returns either the UID as a string or None if no card is present:
import nxppy

uid = nxppy.read_mifare()
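Because read_mifare() returns None when no card is present, a simple polling loop is the natural usage pattern. The helper below is my own sketch (not part of nxppy); the reader function is passed in as a parameter so the loop can be exercised without the hardware:

```python
import time

def wait_for_card(read_uid, timeout=5.0, interval=0.1, sleep=time.sleep):
    """Poll read_uid (e.g. nxppy.read_mifare) until it returns a UID
    or the timeout expires; return the UID string or None."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        uid = read_uid()
        if uid is not None:
            return uid
        sleep(interval)
    return None

# Stand-in reader that "finds" a tag on the third poll:
reads = iter([None, None, "04A2B3C4"])
print(wait_for_card(lambda: next(reads), timeout=1.0, interval=0.0))  # 04A2B3C4
```

On a real Raspberry Pi you would pass `nxppy.read_mifare` as the reader.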
Feedback
I welcome your feedback and pull requests! This project started as a necessity for my own Raspberry Pi development, but I’m hoping others will find it useful as a way to quickly bootstrap NFC-based projects. Enjoy!
- Author: Scott Vitale
- Package Index Owner: svvitale
- DOAP record: nxppy-1.0.xml
https://pypi.python.org/pypi/nxppy/1.0
NAME
getrlimit, getrusage, setrlimit - get/set resource limits and usage
SYNOPSIS
#include <sys/time.h>
#include <sys/resource.h>
#include <unistd.h>
int getrlimit(int resource, struct rlimit *rlim);
int getrusage(int who, struct rusage *usage);
int setrlimit(int resource, const struct rlimit *rlim);
DESCRIPTION
getrlimit() and setrlimit() get and set resource limits, respectively. Each resource has an associated soft and hard limit, as defined by the rlimit structure (the rlim argument to both getrlimit() and setrlimit()):

struct rlimit {
    rlim_t rlim_cur;  /* soft limit */
    rlim_t rlim_max;  /* hard limit (ceiling for rlim_cur) */
};

To handle the signal delivered when a process exceeds its stack limit, it must employ an alternate signal stack (sigaltstack(2)).

CONFORMING TO
SVr4, 4.3BSD
NOTES
Including <sys/time.h> is not required these days, but increases portability. (Indeed, struct timeval is defined in <sys/time.h>.) On Linux, only the rusage fields ru_utime, ru_stime, ru_minflt, ru_majflt, and ru_nswap are maintained.
SEE ALSO
dup(2), fcntl(2), fork(2), mlock(2), mlockall(2), mmap(2), open(2), quotactl(2), sbrk(2), wait3(2), wait4(2), malloc(3), ulimit(3), signal(7)
http://linux.about.com/library/cmd/blcmdl2_setrlimit.htm
Hi, I have a file Hello.cpp (I said I was a novice).
#include <iostream.h>
int main()
{
int x = 5;
int y = 7;
cout "\n";
cout << x + y << " " << x * y;
cout "\n";
return 0;
}
I try to compile and run it with
bcc32 -If:\Borland\bcc55\include -Lf:\Borland\bcc55\Lib Hello.cpp
But I am getting the following errors
Hello.cpp:
Error E2379 Hello.cpp 6: Statement missing ; in function main()
Error E2379 Hello.cpp 8: Statement missing ; in function main() *** 2 errors in Compile
I apologise for the post, but I really want to get on and can't even get past this simple problem.
What do you see different in these two statements?
cout "\n";
cout << x + y << " " << x * y;
Apologies for the delay coming back; work comes first.
If I'm perfectly honest, I don't see anything different except that the quotation marks are around the \n in the first statement and not in the second.
I know \n is a new line, so I expected the following result (once it compiled):
12 35
I'm sorry to be a hassle
You forgot the "<<":
Code:
cout << "\n";
cout << "\n";
http://forums.codeguru.com/showthread.php?477588-Can-System-XCopy-source-paths-from-textboxes&goto=nextnewest
|
Hi! I've been mostly offline for a bit, and Racket 8.2 was released today (a little ahead of schedule), so I will rework this patch series to just update to 8.2 and not deal with adding "-next" variants for now. I'll respond here, though, to keep the discussion together.
On 7/8/21 5:25 PM, Ludovic Courtès wrote:
> Philip McGrath <philip@philipmcgrath.com> skribis:
>
>> * gnu/packages/racket.scm (racket-next-minimal, racket-next): New variables.
>
> [...]
>
>> +++ b/gnu/packages/racket.scm
>> @@ -23,6 +23,7 @@
>>    #:use-module ((guix licenses) #:select (asl2.0 expat lgpl3+))
>>    #:use-module (guix packages)
>> +  #:use-module (guix base16)
>
> Leftover?
Yes, thanks!
>> +;; - `racket-pkg-` should probably be the prefix for Racket packages
>> +;;   available as Guix packages, once we're able to build those.
>> +;;   More specifically, it should correspond
>> +;;   to packages registered in the catalog at.
>> +;;   This is a social convention to manage the namespace, not a technical
>> +;;   limitation: Racket can use other catalogs (e.g. for pre-built packages
>> +;;   or packages pinned to specific versions), unregistered package source
>> +;;   urls, or purely local packages. But we also need a convention to
>> +;;   manage the namespace, so we should use this one. In practice,
>> +;;   all generally useful libre Racket packages are registered there.
>> +;;   We probably will need a clever encoding scheme to deal with the fact
>> +;;   that Racket package names can contain [A-Za-z_-], i.e. including "_",
>> +;;   which is not allowed in Guix package names.
>
> For this there's already a documented convention (info "(guix) Package
> Naming"), although part of it is undocumented. The prefix would rather be
> "racket-" to match what we do with other packages: "ghc-", "ocaml-",
> "guile-", and so forth.
I wrote these as statements in the hope of eliciting any disagreement :)

The problem I see with using just "racket-" as the prefix is the potential for collisions, especially because Racket uses a lot of the namespace: for example, "_" is a useful example package for testing package issues, and I maintain the "_-exp" package. There don't seem to be Racket packages named "minimal" or "next" right now, but they seem reasonably likely to be used in the future, and Guix likewise may want to add packages that don't correspond directly to a single Racket-level package. (In fact, I think this may be necessary to build Racket packages with mutually recursive dependencies.) Other Racket package names that I think might be less confusing if prefixed with "racket-pkg-" include "base", "racket-lib", "unstable", "profile", "make", "data", "images", "compiler", "compatibility", "pkg-build", and "main-distribution".
>> +(define %pre-release-installers
>> +  "")
>> +
>> +(define-public racket-next-minimal
>> +  (package
>> +    (inherit racket-minimal)
>> +    (name "racket-next-minimal")
>> +    (version "8.1.900")
>> +    (source
>> +     (origin
>> +       (inherit (package-source racket-minimal))
>> +       (sha256
>> +        (base32
>> +         "0dm849wvlaxpfgz2qmgy2kwdslyi515rxn1m1yff38lagbn21vxq"))
>> +       (uri (string-append %pre-release-installers
>> +                           "racket-minimal-src.tgz"))))))
>> +
>> +(define-public racket-next
>> +  (package
>> +    (inherit racket)
>> +    (name "racket-next")
>> +    (version (package-version racket-next-minimal))
>> +    (source
>> +     (origin
>> +       (inherit (package-source racket))
>> +       (sha256
>> +        (base32
>> +         "0ysvzgm0lx4b1p4k9balvcbvh2kapbfx91c9ls80ba062cd8y5qv"))
>> +       (uri (string-append %pre-release-installers
>> +                           "racket-src.tgz"))))))
>
> Do I get it right that *-src.tgz are not versioned? That they're updated in
> place regularly? In that case, we cannot refer to them in a package
> definition since the hash is bound to become stale. What we could do is
> refer to, say, <>. However, I suspect this file would vanish fairly quickly
> from the web site, which is not okay either. I'm not sure what a good
> solution would be. WDYT? It may be that '--with-source=' would do the job
> for those who're into that.
This is also a good catch! For now, I will avoid the problem by just not dealing with "-next" variants.

For posterity: while working on this patch series before the release, I faced a similar issue, because the "snapshot" builds explicitly are not retained indefinitely. As a work-around, I based my work on snapshots from Northwestern University (as opposed to the University of Utah), because they retain one snapshot per week for a few months. For the longer term, rather than using the tarballs directly, I used them to produce patch files, which I checked into Guix. Since minimal Racket could be built from Git, I could restrict the patch to main-distribution Racket package sources, which kept the size manageable.
Something analogous would probably work for release candidates, but the right long-term solution is for Guix to be able to build Racket packages directly, so we don't have to rely on particular snapshot bundles.
On 7/8/21 5:43 PM, Ludovic Courtès wrote:
> I'd find it clearer like this:
>
>     (add-before 'configure 'change-directory
>       (lambda _
>         (chdir "racket/src")))

Ah, that's nice.

>> +         (add-after 'install 'remove-pkgs-directory
>> +           ;; otherwise, e.g., `raco pkg show` will try and fail to
>> +           ;; create a lock file
>> +           (lambda* (#:key outputs #:allow-other-keys)
>> +             ;; rmdir because we want an error if it isn't empty
>> +             (rmdir (string-append (assoc-ref outputs "out")
>> +                                   "/share/racket/pkgs"))
>> +             #t)))))
>
> Please write full sentences with a bit more context ("Remove package
> directory, otherwise 'raco pkg show' …").

Will do.

>> +(define-public racket-next-minimal-bc-3m
>> +  (hidden-package
>> +   (package/inherit racket-next-minimal
>> +     (name "racket-next-minimal-bc-3m")
>
> This is "-next" because it's targeting 8.1, which is not released yet,
> right?

Correct, but 8.2 (8.1 was released in May). Now that it's been released, the name would be "racket-minimal-bc-3m".
> Since it's only used for bootstrapping, perhaps use 'define' instead of
> 'define-public' and remove the call to 'hidden-package'.

In addition to bootstrapping, there are three reasons I know of to want Racket BC:
1. The BC and CS implementations have different C APIs, so some low-level code may support BC but not CS. But this isn't usually a good reason: Racket packages should support both implementations, and embedding applications ideally would also be portable. If it's only feasible to support one implementation, it should be CS.

2. Comparing the BC and CS implementations can be useful for testing and debugging, both for packages that use the FFI and when hacking on the Racket runtime system itself.

3. Most importantly, BC supports some architectures that CS does not. In particular, Racket CS does not (yet) support ppc64le, which Racket BC does support. The recommendation to packagers, and what Debian does, is to explicitly use BC on platforms without CS support:
I'm not sure what the most idiomatic way to do this is in Guix.

(Just for the record, Racket CS also supports platforms which Racket BC supports only partially---without the JIT, places, or futures---or does not support at all. One motivation of Racket CS was to make porting easier in general.)
> It should also be (package (inherit …) …) rather than (package/inherit
> …). The latter is only useful when defining variants of a package (same
> version, same code) where the same security updates would apply.

I don't think I understand this very well. Setting aside "-next"-related issues, a given commit in the Racket source repository will be used to build CGC, 3M, and CS (the default) variants with the same version---at least in the Racket senses of "version" and "variant". It's possible that there could be a VM-specific security issue, but usually a bug in Racket, security-related or otherwise, will affect all three variants.
>> +    (inputs
>> +     `(("libffi" ,libffi) ;; <- only for BC variants
>> +       ,@(filter (match-lambda
>> +                   ((label . _)
>> +                    (not (member label
>> +                                 '("zlib" "zlib:static"
>> +                                   "lz4" "lz4:static")))))
>> +                 (package-inputs racket-next-minimal))))
>
> Please use this more common idiom:
>
>     ,@(fold alist-delete (package-inputs racket-next-minimal) '("zlib" …))

Thanks, I was looking for something like `alist-delete` but didn't find it.

>> +This packackage is the normal implementation of Racket BC with a precise garbage collector, 3M ("Moving Memory Mana
>                ^
> Typo here, and lines too long (here and in other places). :-)

Thanks, usually I have Emacs set up to catch that.

>> +    (license (package-license chez-scheme)))))
>
> You cannot do that since here since potentially we could end up with
> circular top-level references from these two modules.
>
> Instead, restate what the license is.

Ok, I'd been lulled into complacency by the implicitly thunked fields.

- Philip
https://lists.gnu.org/archive/html/guix-patches/2021-07/msg01102.html
The following is an explanation of the C++ code to create a single-coloured blank image using the OpenCV library.
Things to know:
(1) The code will only compile in Linux environment.
(2) To run on Windows, please use the file 'blank.o' and run it in cmd. If it does not run (a problem with the system architecture), then compile it on Windows, making the suitable and obvious changes to the code, such as using <iostream.h> in place of <iostream>.
(3) Compile command: g++ -w blank.cpp -o blank `pkg-config --libs opencv`
(4) Run command: ./blank
Before you run the code, please make sure that you have OpenCV installed on your system.
Code Snippet:
// Title: Create a coloured image in C++ using OpenCV.

// highgui - an easy-to-use interface to video capturing,
// image and video codecs, as well as simple UI capabilities.
#include "opencv2/highgui/highgui.hpp"

// Namespace where all the C++ OpenCV functionality resides.
using namespace cv;

// For basic input/output operations.
// Otherwise, use the 'std::' prefix everywhere.
using namespace std;

int main()
{
    // To create an image:
    // CV_8UC3 depicts 3 channels with 8-bit image depth.
    // Height = 500 pixels, Width = 1000 pixels.
    // Scalar(0, 0, 100) is assigned to the Blue, Green and Red
    // planes respectively, so the image will appear red, as only
    // the red component is set to 100.
    Mat img(500, 1000, CV_8UC3, Scalar(0, 0, 100));

    // Check whether the image was created or not.
    if (img.empty())
    {
        cout << "\n Image not created. "
                "You have done something wrong. \n";
        return -1; // Unsuccessful.
    }

    // namedWindow arguments: the window name, then a flag:
    //   WINDOW_NORMAL   - the user can resize the window.
    //   WINDOW_AUTOSIZE - the window size is automatically adjusted to
    //                     fit the displayed image; it cannot be changed
    //                     manually.
    //   WINDOW_OPENGL   - the window is created with OpenGL support.
    namedWindow("A_good_name", CV_WINDOW_AUTOSIZE);

    // imshow arguments: the window name, the image to be shown (Mat object).
    imshow("A_good_name", img);

    waitKey(0); // wait indefinitely for a keypress

    // Destroy the window named "A_good_name".
    destroyWindow("A_good_name");

    return 0;
}
http://126kr.com/article/6y74flahw0p
0
I was messing around trying to figure this out and came up with this. It works, but I'm looking for a more elegant recursive solution. Can anyone think of one? I couldn't find anything with Google. Oh, and sorry for the silly variable names.
public class arraybackwards {

    public static void main(String[] args) {
        int[] bob = {1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18};
        int[] toSend = new int[bob.length];
        int index = 0;

        bob = backwards(bob, toSend, bob.length, index);

        for (int i = 0; i < bob.length; i++) {
            System.out.print(bob[i] + " ");
        }
    }

    public static int[] backwards(int[] arg, int[] toSend, int length, int index) {
        if (length > 0) {
            int bob = arg[length - 1];
            toSend[index] = bob;
            backwards(arg, toSend, length - 1, ++index);
        }
        return toSend;
    }
}
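One common answer to the "more elegant" question is to reverse in place, swapping the two ends and recursing inward, which removes the second array entirely. A sketch (class and method names are mine, not the poster's):

```java
public class ReverseInPlace {

    // Swap the outermost pair, then recurse on the inner sub-array.
    static void reverse(int[] a, int lo, int hi) {
        if (lo >= hi) return;      // base case: pointers met or crossed
        int tmp = a[lo];
        a[lo] = a[hi];
        a[hi] = tmp;
        reverse(a, lo + 1, hi - 1);
    }

    public static void main(String[] args) {
        int[] a = {1, 2, 3, 4, 5};
        reverse(a, 0, a.length - 1);
        System.out.println(java.util.Arrays.toString(a)); // [5, 4, 3, 2, 1]
    }
}
```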
https://www.daniweb.com/programming/software-development/threads/158322/recursively-reverse-an-array
On Sun, Feb 03, 2002 at 03:31:24PM +0000, Graham/Aniartia wrote:
> On Sunday 03 February 2002 3:02 pm, Ben Collins wrote:
> > How are you being bombarded?
>
> I'll be working in say emacs typing away on a script & I'll get these
> messages over-writing the information on screen this will happen about once
> every 5-10 min
>
> > This message only prints like 6 times every boot. What 2.2.x kernel are you
> > running?
>
> At boot and then repeatedly untill I turn the machine off. I'm running 2.2.19
> from the woddy rescue disk... I'm using a vanilla woddy install btw.

asmlinkage unsigned long c_sys_nis_syscall (struct pt_regs *regs)
{
	static int count = 0;

	if (count++ > 5)
		return -ENOSYS;
	lock_kernel();
	printk ("%s[%d]: Unimplemented SPARC system call %d\n",
		current->comm, current->pid, (int)regs->u_regs[1]);
#ifdef DEBUG_UNIMP_SYSCALL
	show_regs (regs);
#endif
	unlock_kernel();
	return -ENOSYS;
}

I don't see how there's any way for that to happen.

--
Ben Collins -- Debian GNU/Linux -- WatchGuard.com
bcollins@debian.org -- Ben.Collins@watchguard.com
https://lists.debian.org/debian-sparc/2002/02/msg00016.html
Making 2D simple games (Zelda for example)
stitchs_login replied to Xeddy's topic in For Beginners:

A good place to start for tutorials would be YouTube. One such tutorial I am currently working from uses Java; the instructor's pace is moderate, so it really gives you a chance to keep up with the bite-sized tutorials. The requirement is that you have *some* basic Java knowledge. I won't go too much into Java, as it is heavily documented and summarised around the web, and in places on this website. I'll just say that Java is a very high-level language that takes away the headache of memory management, which a language like C++ imposes (though it is easier now with things like smart pointers). It also compiles once and is usable on any platform: Linux, Windows, etc. Tutorials here: Another option specifically tailored to beginner games programming is Unity, though I've not had any experience with it. Good luck! Stitchs.
2D Camera Jitters
stitchs_login replied to stitchs_login's topic in For Beginners:

Hi @GoliathForge,

The tick() method in Camera is a hangover that I'm removing in this phase. When you say that I should refactor some naming, do you mean that having the parameter names the same as the instance-level members could be confusing? Or am I missing some consistency in the naming pattern?

As for Creature::move():

package dev.mygame.entities.creatures;

import dev.mygame.entities.Entity;
import dev.mygame.Handler;

public abstract class Creature extends Entity {

    public static final int DEFAULT_HEALTH = 10;
    public static final float DEFAULT_SPEED = 1.0f;
    public static final int DEFAULT_WIDTH = 64;
    public static final int DEFAULT_HEIGHT = 64;

    protected int health;
    protected float speed;
    protected float xMove;
    protected float yMove;

    public Creature(Handler handler, float x, float y, int width, int height) {
        super(handler, x, y, width, height);
        health = DEFAULT_HEALTH;
        speed = DEFAULT_SPEED;
        xMove = 0f;
        yMove = 0f;
    }

    public void move() {
        x += xMove;
        y += yMove;
    }

    ....
}

I admit, I haven't yet printed the values to the setFocus method, so I will do that in my next round of implementation. Thinking about my original wording, if anything, it's not camera jitter I'm experiencing. It's more akin to a tear in some graphic rendering between tiles when I move my player vertically.

@the incredible smoker Could you elaborate in relation to the screen-tearing that I'm experiencing? Right now, I don't *need* the precision provided by doubles. It would be something I'd consider in the future when I begin to implement physics calculations that may require it. Currently, I'm following a tutorial series that doesn't suggest using them. If this is a possible solution to my issues, I would love to hear some more context to your answer.

Thanks, Stitchs.
stitchs_login posted a topic in For Beginners:

Hey, I'm running through some game tutorials and have just implemented my camera. It scrolls, following the player sprite, and culls tiles from the map outside of bounds. One thing I'm noticing is some minor artifact tearing between some of the tiles that make up the on-screen map. Originally, I assumed it was because I coded my camera X and Y as integers, and my player coordinates as floats, so there was some classic data loss from downcasting float -> int. I modified that, and removed integer casts that related to passing player coordinates to the camera. This has reduced the tearing, but everything still jitters a small amount. The player only moves at a speed of 1.0f. If I run it at 3.0f, it is more noticeable. Where could I be going wrong? Thanks, Stitchs (code below).

package dev.mygame.display;

public class Camera {

    private float x;
    private float y;
    private int w;
    private int h;

    private int focusX;
    private int focusY;

    public Camera(float x, float y, int w, int h) {
        this.x = x;
        this.y = y;
        this.w = w;
        this.h = h;
    }

    public float getX() { return x; }
    public void setX(float x) { this.x = x; }

    public float getY() { return y; }
    public void setY(float y) { this.y = y; }

    public int getFocusX() { return focusX; }
    public int getFocusY() { return focusY; }

    public void setFocus(float focusX, float focusY) {
        float lerp = 1.0f;
        x += lerp * (focusX - w/2 - x);
        y += lerp * (focusY - h/2 - y);
    }

    public int getWidth() { return w; }
    public int getHeight() { return h; }

    public void tick() {
        // centre the camera on the focus point
        //focusX -= w/2;
        //focusY -= h/2;

        // set the origin so we can offset everything else
        x = focusX;
        y = focusY;
    }
}

package dev.mygame.entities.creatures;

import java.awt.Graphics;

import dev.mygame.gfx.Assets;
import dev.mygame.Game;

public class Player extends Creature {

    private Game game;

    public Player(Game game, float x, float y) {
        super(game, x, y, Creature.DEFAULT_WIDTH, Creature.DEFAULT_HEIGHT);
        this.game = game;
    }

    @Override
    public void tick() {
        getInput();
        move();
    }

    private void getInput() {
        xMove = 0;
        yMove = 0;

        if(game.getKeyManager().up) {
            yMove = -speed;
        }
        if(game.getKeyManager().down) {
            yMove = +speed;
        }
        if(game.getKeyManager().left) {
            xMove = -speed;
        }
        if(game.getKeyManager().right) {
            xMove = +speed;
        }
    }

    @Override
    public void render(Graphics g) {
        g.drawImage(Assets.playerOne,
                (int)(x - game.getCamera().getX()),
                (int)(y - game.getCamera().getY()),
                width, height, null);
    }
}

package dev.mygame.states;

import java.awt.Graphics;

import dev.mygame.entities.creatures.Player;
import dev.mygame.gfx.Assets;
import dev.mygame.Game;
import dev.mygame.worlds.World;
import dev.mygame.display.Camera;

public class GameState extends State {

    private Player player;
    private World world;

    public GameState(Game game) {
        super(game);
        world = new World(game, "/maps/World1.json");
        player = new Player(game, world.getSpawnX(), world.getSpawnY());
    }

    @Override
    public void tick() {
        player.tick();
        game.getCamera().setFocus(player.getX() + player.getWidth()/2,
                player.getY() + player.getHeight()/2);
    }

    @Override
    public void render(Graphics g) {
        world.render(g);
        player.render(g);
    }
}
- I watched a few minutes of your video, some pretty slick looking stuff there! Far beyond what I was even imagining my room/grid layout would look like! So what you're saying is, you have your tmx files, and the generation process looks through these and outputs JSON, which is read later to layout the connection of the rooms? And I assume the tmx extension is from the Tiled map program you use? Stitchs.
- This is my train of thought initially. But more in terms of, which doors link to which doors, and N doors might be stored in an array in the Room class.
- Thanks for the responses. A lot more to think about than I first anticipated. With this approach, couldn't it become inefficient to have to search the possible rooms each time, if the map is huge? What I'm saying is, you could do an initial cull on rooms that are past the door (say the door faces left and sits in the right half of the grid; then we ignore all rooms behind it), but you would still have to search every room's grid coordinates and compare 2 values. Whereas with an ID-to-ID mapping, you either compare one value, or create some Door class which stores the ID of the door it joins to (similar to how a linked list stores a reference/pointer (language dependent =D) to the next element in the list). This does sound pretty interesting, and I do a lot with JSON at work, so it wouldn't be unfamiliar ground for me. Could you provide snippets of what info these files contain? When you say "generate", do you have a randomly generated labyrinth every time a brand new game is started? Thanks again. Really useful stuff! Liyaan.
- Thanks for the reply. I did wonder if what I had would be expandable for more complex features and interactions. So when you say that the world is laid out on a grid, does this mean that all the rooms are loaded into memory from the very start of the game, with resources such as sprites and graphics being loaded on "is in room" basis? Does the concept of rooms even exist? Or is everything traversable as if the entire map is a single room/level? Stitchs.
Metroid-vania style level design
stitchs_login posted a topic in For Beginners:

Hi all, I'm currently designing a map of rooms for a Metroid-style game (really inspired by the AM2R remake) and wanted to run my design by you for some feedback. My thinking is you have a Room class, said class being able to load a text file which is made up of a map of IDs. These IDs would be mapped to different block types which have varying properties (wall/floor/lava blocks, as an example). Each room would contain one or more door objects. These doors would have IDs allowing you to link rooms based on the connecting IDs. This would also be information found in each room's text-file config. Every room would contain an inventory of objects like enemies, items, and power-ups that the player can interact with in some way. There will be no random generation (yet; limited experience) and each room will be 'crafted' in a text file. The locations for these items would also be laid out in the text file that is loaded. Once loaded, their individual behaviours would take over (enemies patrol, etc.). My thinking is also to load rooms on a basis relative to the current room the player is in. The rooms connecting to the current room are loaded into memory, and once a new room is entered, rooms outside of depth n-1 or n+1 will be unloaded. Thanks for your time guys! Stitchs.
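The text-file-of-IDs idea sketched above could look roughly like this (hypothetical names, not the poster's actual code): each line of the room file is a row of space-separated block IDs, parsed into a 2D array that the Room class would then map to block types.

```java
import java.util.ArrayList;
import java.util.List;

public class RoomLoader {

    // Parse a room laid out as lines of space-separated block IDs,
    // e.g. "1 1 1\n1 0 1\n1 1 1" -> walls (1) around a floor tile (0).
    public static int[][] parse(String text) {
        List<int[]> rows = new ArrayList<>();
        for (String line : text.split("\n")) {
            String[] parts = line.trim().split("\\s+");
            int[] row = new int[parts.length];
            for (int i = 0; i < parts.length; i++) {
                row[i] = Integer.parseInt(parts[i]);
            }
            rows.add(row);
        }
        return rows.toArray(new int[0][]);
    }

    public static void main(String[] args) {
        int[][] grid = parse("1 1 1\n1 0 1\n1 1 1");
        System.out.println(grid.length + "x" + grid[0].length); // 3x3
    }
}
```

Door IDs and the object inventory would live in separate sections of the same config, loaded alongside the grid.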
Magic Item Tech, testing - part 1.
stitchs_login commented on Spidi's blog entry in Spidi, Magic Item Tech Journal:

Great read! I've been having to do a lot of JUnit testing at my new job and this is completely fascinating/a way that I want to work in the future.
String: How to find the most common character occurrence.
stitchs_login replied to stitchs_login's topic in General and Gameplay Programming:

I completely agree, Nypyren. This is a self-imposed limitation that I am using to test with that particular set of Unicode characters. I wanted to refine the method first, and then expand it to include a bigger character set. Stitchs.
stitchs_login posted a topic in General and Gameplay Programming: Hi all, I've been presented with a small challenge: 1) Make a function that counts the number of each individual character in a string. Report back to the user, in the form <character, number of occurrences>, the character that appears the most. So the function would take a string input, e.g. "rabbit", and output the character that occurs the most in said string, i.e. <'b', 2>. I have coded up a function (to my understanding of Big O notation, it is linear, O(N)).

public static string GetMostFrequentCharacter(string input)
{
    string result = "The String is empty.";

    if (IsStringValid(input))
    {
        int INDEX_CONVERTER = 32;
        int[] characterCounts = new int[127 - INDEX_CONVERTER];
        char characterToConvert;

        // first loop counts the occurrence of each character in the string
        for (int i = 0; i < input.Length; i++)
        {
            characterToConvert = input[i];
            characterCounts[(int)characterToConvert - INDEX_CONVERTER]++;
        }

        int highestCount = 0, arrayPosition = -1;

        // second loop finds which character appeared the most; only the last
        // highest count to appear in the list will count.
        for (int i = 0; i < characterCounts.Length; i++)
        {
            // check to see if the current index value is higher
            // than the previous highest count
            if (characterCounts[i] >= highestCount)
            {
                // get the value at the current index
                highestCount = characterCounts[i];
                arrayPosition = i;
            }
        }

        if (arrayPosition < 0)
        {
            result = "There was an error in processing the string.";
        }
        else
        {
            // finally, convert the arrayPosition into a suitable char representation
            char characterToOutput = (char)(arrayPosition + INDEX_CONVERTER);
            // format the output string
            result = string.Format("<'{0}', {1}>", characterToOutput, highestCount);
        }
    }

    return result;
}

I'm happy with the way most of it works. My only problem is that, say I have 2 characters that appear an equal number of times, my current method only takes the last highest value, in order of Unicode value.
Say we have the string "foof". The output of the function would be <'o', 2>; it does not report the last tying letter to appear, only the last in Unicode order. I don't want to create another store for characters that appear an equal number of times (I already have 2 arrays). I have looked on the internet, but the only responses I am finding are for cases where someone knows the character they are looking for before they use the function; they ask "How many times does 'a' appear?" Any help or feedback would be greatly appreciated. Stitchs.
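One way to handle the tie problem is to report every character that reaches the highest count. This is a sketch in Python rather than the poster's C#, and the dict-based tally is a suggestion of my own, not code from the thread:

```python
def most_frequent_chars(text):
    """Return (max_count, [chars]) listing every character that ties
    for the highest count, in order of first appearance."""
    counts = {}
    for ch in text:                 # O(N): tally each character
        counts[ch] = counts.get(ch, 0) + 1
    if not counts:
        return 0, []
    highest = max(counts.values())  # scan over distinct characters only
    winners = [ch for ch in counts if counts[ch] == highest]
    return highest, winners

# "foof": both 'f' and 'o' appear twice, so both are reported.
print(most_frequent_chars("foof"))    # -> (2, ['f', 'o'])
print(most_frequent_chars("rabbit"))  # -> (2, ['b'])
```

Returning the list of winners avoids a second character array; the caller can then decide whether to show one winner or all of them.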
C# Linked List implementation review
stitchs_login replied to stitchs_login's topic in General and Gameplay Programming: @ChaosEngine: I was going to implement it as a generic type, but I'm never going to use it for my projects and wanted something that I could get information from quickly, for testing, which is why I pre-made it using the String class. @Pink Horror: I'm guessing that if I wanted to look at/use every item I would (internal code):

for (int i = 0; i < list.Length; i++)
{
    DoSomething(currentNode);
    currentNode = currentNode.next;
}

User code:

DoSomethingWithAllListItems(int pLength);

This would be O(N), linear, as it will take an amount of time and resources proportional to the size of the list (which, if it gets very big, will cost a lot). If I want constant time, such as the change I have made to my AddNode() method, I would need to be able to access a node directly, knowing its location beforehand. I hope I have understood what you are trying to say, but what would be a constant-time alternative to looping over the list? Would I have to order it first? Stitchs.
C# Linked List implementation review
stitchs_login replied to stitchs_login's topic in General and Gameplay Programming: As an update, I went back to my AddNode method and used a reference (let's call it tail) which stores the final node in the list, so that new nodes can be added without traversing the list.

public void AddNode(string stringIn)
{
    StringNode newNode = new StringNode(stringIn);

    if (IsEmpty())
    {
        list = newNode;
        tail = list;
    }
    else
    {
        tail.next = newNode;
        tail = tail.next;
    }
}

What I'm struggling to understand is how my else statement works. It works, as in it adds nodes successfully. Can anyone confirm that my thought process is correct? If the list is empty, I add the new node as the head and make the tail equal to the head (the head is both the first and last node). If there is a node (say we are at the point that we haven't added any more than one), then we make the tail.next (aka list.next) equal to our new node. Then we make tail equal this last node. What I'm struggling to understand is: once we exit the AddNode function and the temp variable newNode is no longer in memory, what is preserving the references to these different nodes? Only list and tail exist, and I'm changing what tail points to every time I add a node to a non-empty list, so how are any nodes in-between able to be referenced and not flagged for garbage collection? Please ask if you need any more clarification. Stitchs.
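What keeps the in-between nodes alive is that each node's next field is itself a reference: newNode is only a local name, and when it goes out of scope the object it named survives as long as it is reachable through the chain of next references starting at the head. A small Python sketch of the same head/tail append logic, illustrative only (names invented):

```python
# The local variable 'node' disappears after add() returns, but the
# object it pointed to survives: it is still reachable through the
# chain of .next references starting at head (and through tail).

class Node:
    def __init__(self, value):
        self.value = value
        self.next = None

class LinkedList:
    def __init__(self):
        self.head = None
        self.tail = None

    def add(self, value):          # O(1) append using a tail reference
        node = Node(value)
        if self.head is None:
            self.head = node
            self.tail = node
        else:
            self.tail.next = node  # previous last node now points at it...
            self.tail = node       # ...so re-pointing tail loses nothing

    def to_list(self):
        out, cur = [], self.head
        while cur is not None:
            out.append(cur.value)
            cur = cur.next
        return out

lst = LinkedList()
for s in ("a", "b", "c"):
    lst.add(s)
print(lst.to_list())  # -> ['a', 'b', 'c']
```

The same reasoning applies in C#: the garbage collector only reclaims objects with no remaining references, and every node except the head is referenced by its predecessor's next field.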
C# Linked List implementation review
stitchs_login replied to stitchs_login's topic in General and Gameplay Programming: Hi all, thanks for the feedback so far. I am implementing it to further my understanding of how linked lists work; in real projects my intent would be to use the .NET implementation. I did think, as I was implementing the loops with indexes, about what happens if the list gets too big. I know that each node is stored non-sequentially in memory, which makes it slower to iterate over. I have thought about adding a reference for the tail, making it the result of AddNode(). What I'm stuck on is the idea of holding a place in the list. Isn't this just as slow as starting from the beginning? If you start at the beginning then you don't have to worry about going backwards; otherwise I need to create a doubly linked list to be able to traverse back and forth. I have some great feedback to work on from this. Thanks, Stitchs.
C# Linked List implementation review
stitchs_login posted a topic in General and Gameplay Programming: Hi all, I am part way through implementing my own Linked List class (only stores Strings for now) to better understand how they work. I have completed the major portions of it (Add, Delete, Insert, etc.) and was just wondering if I could get some feedback on the code thus far.

private StringNode list;

public MyStringList()
{
    list = null; // empty by default
}

public void InsertNode(string stringIn, uint index)
{
    // Check that user is not trying to insert outside the valid
    // range of nodes.
    if (IndexOutOfUpperBounds(index))
    {
        return;
    }

    // create a string node to insert into the list
    StringNode newNode = new StringNode(stringIn);
    StringNode current;
    StringNode temp;

    if (index == 0)
    {
        // store the list head.
        temp = list;
        // set the new node as the new list head
        list = newNode;
        // reconnect the old list head to the new list head.
        list.next = temp;
        return;
    }

    // temp node that is a reference to the beginning node
    current = list;

    // loop to the position of the node at index;
    // because of the way that current is initialized, we can
    // skip index zero.
    for (int i = 1; i < index; i++)
    {
        // check that there is another node to process.
        if (current.next != null)
        {
            current = current.next;
        }
    }

    // store a reference to the next node (the one at the index we desire)
    // so as to preserve it
    temp = current.next;

    // set current.next to point to the location of the new node
    // and set the new node's next to point to that of the old
    current.next = newNode;
    newNode.next = temp;
}

public bool DeleteNode(uint index)
{
    if (IndexOutOfUpperBounds(index))
    {
        return false;
    }

    // temp node representing the current node in the list.
    StringNode current = list;
    // temp node representing the previous node in the list.
    StringNode previous = null;

    // if the user has searched for a node that is not the first in the list
    if (index > 0)
    {
        // loop from 0 to the index position.
        for (int i = 0; i < index; i++)
        {
            if (current != null)
            {
                previous = current;
                current = current.next;
            }
        }
    }

    // need conditions to assure that the predecessor of a node
    // removed from the end will point to null;
    // a check to see if we are at the end of the list.
    if ((current.next == null) && (current != list))
    {
        // make the very last node null, so it will be removed by
        // garbage collection
        previous.next = null;
        current = null;
    }
    // condition that a node removed from the middle will link the two
    // nodes that surround it properly
    else if ((current.next != null) && (current != list))
    {
        // change the previous node to link to the node up ahead.
        previous.next = current.next;
        current = null;
    }
    // condition that the successor of a node removed from the front
    // will properly set the new list head.
    else
    {
        // check that the list head is not the only node
        if (current.next != null)
        {
            list = current.next;
        }
        else
        {
            list = null;
        }
    }

    // reduce number of nodes by 1; if we have got to this point,
    // there is no need to check if we are allowed to decrement.
    this.Count--;

    return true;
}

I have not included the entire implementation, for simplicity's sake. I would like to draw your attention to the Insert and Delete methods. They work in all the tests I have performed, but I feel like they could be streamlined more. If you need any more information, feel free to ask and I shall do my best to explain. Many thanks, Stitchs.
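As a point of comparison for streamlining, the head/middle/tail delete cases can usually be collapsed into rewriting a single link. A Python sketch of that idea, offered as a suggestion rather than a translation of the C# above:

```python
class Node:
    def __init__(self, value, next=None):
        self.value = value
        self.next = next

def delete_at(head, index):
    """Return the new head with the node at `index` unlinked.
    Head, middle, and tail removals all reduce to rewriting one link."""
    if head is None or index < 0:
        return head
    if index == 0:
        return head.next            # head case: new head is the old second node
    prev = head
    for _ in range(index - 1):      # walk to the node before `index`
        if prev.next is None:
            return head             # index past the end: leave list unchanged
        prev = prev.next
    if prev.next is not None:
        prev.next = prev.next.next  # unlink; works for middle and tail alike
    return head

def to_values(node):
    out = []
    while node is not None:
        out.append(node.value)
        node = node.next
    return out

# Build a -> b -> c, then delete the middle node.
head = Node("a", Node("b", Node("c")))
head = delete_at(head, 1)
print(to_values(head))  # -> ['a', 'c']
```

Because unlinking the tail is just "point the predecessor at None (its successor)", the separate end-of-list branch disappears; only the head case needs special handling, since it changes which node the list starts from.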
This guide describes Angular Universal, a technology that runs your Angular application on the server.
This guide describes a Universal sample application that launches quickly as a server-rendered page. Meanwhile, the browser downloads the full client version and switches to it automatically after the code loads.
Download the finished sample code, which runs in a node express server.
There are three main reasons to create a Universal version of your app: to facilitate web crawlers, to improve performance on mobile and low-powered devices, and to show the first page quickly. Serving a fully-rendered page at each URL also makes a site preview available.
Enabling web crawlers is often referred to as Search Engine Optimization (SEO).
Some devices don't support JavaScript or execute JavaScript so poorly that the user experience is unacceptable. For these cases, you may require a server-rendered, no-JavaScript version of the app. This version, however limited, may be the only practical alternative for people who otherwise would not be able to use the app at all.
Displaying the first page quickly can be critical for user engagement.
53% of mobile site visits are abandoned if pages take longer than 3 seconds to load. Note that server-rendered pages do not handle browser events, but they do support navigation through the site using routerLink.
In practice, you'll serve a static version of the landing page to hold the user's attention. At the same time, you'll load the full Angular app behind it in the manner explained below. The user perceives near-instant performance from the landing page and gets the full interactive experience after the full app loads.
Because a Universal
platform-server app doesn't execute in the browser, you may have to work around some of the browser APIs and capabilities that are missing on the server.
You won't be able to reference browser-only native objects such as
window,
document,
navigator or
location. If you don't need them on the server-rendered page, side-step them with conditional logic.
Alternatively, look for an injectable Angular abstraction over the object you need such as
Location or
Document; it may substitute adequately for the specific API that you're calling. If Angular doesn't provide it, you may be able to write your own abstraction that delegates to the browser API while in the browser and to a satisfactory alternative implementation while on the server.
Without mouse or keyboard events, a universal app can't rely on a user clicking a button to show a component. A universal app should determine what to render based solely on the incoming client request. This is a good argument for making the app routeable.
Because the user of a server-rendered page can't do much more than click links, you should swap in the real client app as quickly as possible for a proper interactive experience.
The Tour of Heroes tutorial is the foundation for the Universal sample described in this guide.
The core application files are mostly untouched, with a few exceptions described below. You'll add more files to support building and serving with Universal.
In this example, the Angular CLI compiles and bundles the Universal version of the app with the AOT (Ahead-of-Time) compiler. A node/express web server turns client requests into the HTML pages rendered by Universal.
You will create:
app.server.module.ts
main.server.ts
server.ts
tsconfig.server.json
webpack.server.config.js
When you're done, the folder structure will look like this:
src/
  index.html                  app web page
  main.ts                     bootstrapper for client app
  main.server.ts *            bootstrapper for server app
  tsconfig.app.json           TypeScript client configuration
  tsconfig.server.json *      TypeScript server configuration
  tsconfig.spec.json          TypeScript spec configuration
  style.css                   styles for the app
  app/ ...                    application code
    app.server.module.ts *    server-side application module
server.ts *                   express web server
tsconfig.json                 TypeScript client configuration
package.json                  npm configuration
webpack.server.config.js *    Webpack server configuration
The files marked with
* are new and not in the original tutorial sample. This guide covers them in the sections below.
Download the Tour of Heroes project and install the dependencies from it.
To get started, install these packages.
@angular/platform-server - Universal server-side components.
@nguniversal/module-map-ngfactory-loader - For handling lazy-loading in the context of a server-render.
@nguniversal/express-engine - An express engine for Universal applications.
ts-loader - To transpile the server application.
Install them with the following commands:
npm install --save @angular/platform-server @nguniversal/module-map-ngfactory-loader ts-loader @nguniversal/express-engine
A Universal app can act as a dynamic, content-rich "splash screen" that engages the user. It gives the appearance of a near-instant application.
Meanwhile, the browser downloads the client app scripts in background. Once loaded, Angular transitions from the static server-rendered page to the dynamically rendered views of the interactive client app.
You must make a few changes to your application code to support both server-side rendering and the transition to the client app.
AppModule
Open file
src/app/app.module.ts and find the
BrowserModule import in the
NgModule metadata. Replace that import with this one:
BrowserModule.withServerTransition({ appId: 'tour-of-heroes' }),
Angular adds the
appId value (which can be any string) to the style-names of the server-rendered pages, so that they can be identified and removed when the client app starts.
You can get runtime information about the current platform and the
appId by injection.
import { PLATFORM_ID, APP_ID, Inject } from '@angular/core';
import { isPlatformBrowser } from '@angular/common';

constructor(
  @Inject(PLATFORM_ID) private platformId: Object,
  @Inject(APP_ID) private appId: string) {
  const platform = isPlatformBrowser(platformId) ? 'in the browser' : 'on the server';
  console.log(`Running ${platform} with appId=${appId}`);
}
The tutorial's
HeroService and
HeroSearchService delegate to the Angular
Http module to fetch application data. These services send requests to relative URLs such as
api/heroes.
In a Universal app,
Http URLs must be absolute even when the Universal web server is capable of handling those requests.
You'll have to change the services to make requests with absolute URLs when running on the server and with relative URLs when running in the browser.
One solution is to provide the server's runtime origin under the Angular
APP_BASE_HREF token, inject it into the service, and prepend the origin to the request URL.
Start by changing the
HeroService constructor to take a second
origin parameter that is optionally injected via the
APP_BASE_HREF token.
constructor(
  private http: HttpClient,
  private messageService: MessageService,
  @Optional() @Inject(APP_BASE_HREF) origin: string) {
  this.heroesUrl = `${origin}${this.heroesUrl}`;
}
Note how the constructor prepends the origin (if it exists) to the
heroesUrl.
You don't provide
APP_BASE_HREF in the browser version, so the
heroesUrl remains relative.
You can ignore APP_BASE_HREF in the browser if you've specified <base href="/"> in the index.html to satisfy the router's need for a base address, as the tutorial sample does.
To run an Angular Universal application, you'll need a server that accepts client requests and returns rendered pages.
The app server module class (conventionally named
AppServerModule) is an Angular module that wraps the application's root module (
AppModule) so that Universal can mediate between your application and the server.
AppServerModule also tells Angular how to bootstrap your application when running as a Universal app.
Create an
app.server.module.ts file in the
src/app/ directory with the following
AppServerModule code:
import { NgModule } from '@angular/core';
import { ServerModule } from '@angular/platform-server';
import { ModuleMapLoaderModule } from '@nguniversal/module-map-ngfactory-loader';

import { AppModule } from './app.module';
import { AppComponent } from './app.component';

@NgModule({
  imports: [
    AppModule,
    ServerModule,
    ModuleMapLoaderModule
  ],
  providers: [
    // Add universal-only providers here
  ],
  bootstrap: [ AppComponent ],
})
export class AppServerModule {}
Notice that it imports first the client app's
AppModule, the Angular Universal's
ServerModule and the
ModuleMapLoaderModule.
The
ModuleMapLoaderModule is a server-side module that allows lazy-loading of routes.
This is also the place to register providers that are specific to running your app under Universal.
A Universal web server responds to application page requests with static HTML rendered by the Universal template engine.
It receives and responds to HTTP requests from clients (usually browsers). It serves static assets such as scripts, css, and images. It may respond to data requests, perhaps directly or as a proxy to a separate data server.
The sample web server for this guide is based on the popular Express framework.
Any web server technology can serve a Universal app as long as it can call Universal's
renderModuleFactory. The principles and decision points discussed below apply to any web server technology that you chose.
Create a
server.ts file in the root directory and add the following code:
// These are important and needed before anything else
import 'zone.js/dist/zone-node';
import 'reflect-metadata';

import { enableProdMode } from '@angular/core';

import * as express from 'express';
import { join } from 'path';

// Faster server renders w/ Prod mode (dev mode never needed)
enableProdMode();

// Express server
const app = express();

const PORT = process.env.PORT || 4000;
const DIST_FOLDER = join(process.cwd(), 'dist');

// * NOTE :: leave this as require() since this file is built Dynamically from webpack
const { AppServerModuleNgFactory, LAZY_MODULE_MAP } = require('./dist/server/main.bundle');

// Express Engine
import { ngExpressEngine } from '@nguniversal/express-engine';
// Import module map for lazy loading
import { provideModuleMap } from '@nguniversal/module-map-ngfactory-loader';

app.engine('html', ngExpressEngine({
  bootstrap: AppServerModuleNgFactory,
  providers: [
    provideModuleMap(LAZY_MODULE_MAP)
  ]
}));

app.set('view engine', 'html');
app.set('views', join(DIST_FOLDER, 'browser'));

// TODO: implement data requests securely
app.get('/api/*', (req, res) => {
  res.status(404).send('data requests are not supported');
});

// Serve static files from /browser
app.get('*.*', express.static(join(DIST_FOLDER, 'browser')));

// All regular routes use the Universal engine
app.get('*', (req, res) => {
  res.render(join(DIST_FOLDER, 'browser', 'index.html'), { req });
});

// Start up the Node server
app.listen(PORT, () => {
  console.log(`Node server listening on port ${PORT}`);
});
This sample server is not secure! Be sure to add middleware to authenticate and authorize users just as you would for a normal Angular application server.
The important bit in this file is the
ngExpressEngine function:
app.engine('html', ngExpressEngine({
  bootstrap: AppServerModuleNgFactory,
  providers: [
    provideModuleMap(LAZY_MODULE_MAP)
  ]
}));
The
ngExpressEngine is a wrapper around the universal's
renderModuleFactory function that turns a client's requests into server-rendered HTML pages. You'll call that function within a template engine that's appropriate for your server stack.
The first parameter is the
AppServerModule that you wrote earlier. It's the bridge between the Universal server-side renderer and your application.
The second parameter is the
extraProviders. These are optional Angular dependency injection providers that apply when running on this server.
You supply
extraProviders when your app needs information that can only be determined by the currently running server instance.
The required information in this case is the running server's origin, provided under the
APP_BASE_HREF token, so that the app can calculate absolute HTTP URLs.
The
ngExpressEngine function returns a promise that resolves to the rendered page.
It's up to your engine to decide what to do with that page. This engine's promise callback returns the rendered page to the web server, which then forwards it to the client in the HTTP response.
These wrappers are very useful for hiding the complexity of renderModuleFactory. There are more wrappers for different backend technologies at the Universal repository.

The web server must distinguish app page requests from other kinds of requests. Application routes have something in common: their URLs lack file extensions. Data requests also lack extensions, but they are easy to recognize because they always begin with /api. All static asset requests have a file extension (e.g., main.js or /node_modules/zone.js/dist/zone.js). So we can easily recognize the three types of requests and handle them differently.
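The three-way classification can be sketched in a few lines of language-agnostic logic; Python is used here just to make the rule concrete, and the real filtering is done by the Express route patterns shown in the server code above:

```python
# Illustrative only: the same classification the Express routes perform.
# A URL starting with /api is a data request; a URL whose last segment
# contains a file extension is a static asset; everything else is a
# navigation request to be rendered by Universal.

def classify(url):
    path = url.split("?", 1)[0]      # ignore any query string
    if path.startswith("/api"):
        return "data"
    last = path.rsplit("/", 1)[-1]   # final path segment
    if "." in last:
        return "static"
    return "navigation"

print(classify("/api/heroes"))  # -> data
print(classify("/main.js"))     # -> static
print(classify("/detail/12"))   # -> navigation
```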
An Express server is a pipeline of middleware that filters and processes URL requests one after the other.
You configure the Express server pipeline with calls to
app.get() like this one for data requests.
// TODO: implement data requests securely
app.get('/api/*', (req, res) => {
  res.status(404).send('data requests are not supported');
});
Universal HTTP requests have different security requirements
HTTP requests issued from a browser app are not the same as when issued by the universal app on the server.
When a browser makes an HTTP request, the server can make assumptions about cookies, XSRF headers, etc.
For example, the browser automatically sends auth cookies for the current user. Angular Universal cannot forward these credentials to a separate data server. If your server handles HTTP requests, you'll have to add your own security plumbing.
The following code filters for request URLs with no extensions and treats them as navigation requests.
// All regular routes use the Universal engine
app.get('*', (req, res) => {
  res.render(join(DIST_FOLDER, 'browser', 'index.html'), { req });
});
A single
app.use() treats all other URLs as requests for static assets such as JavaScript, image, and style files.
To ensure that clients can only download the files that they are permitted to see, you will put all client-facing asset files in the
/dist folder and will only honor requests for files from the
/dist folder.
The following express code routes all remaining requests to
/dist; it returns a
404 - NOT FOUND if the file is not found.
// Serve static files from /browser
app.get('*.*', express.static(join(DIST_FOLDER, 'browser')));
The server application requires its own build configuration.
Create a
tsconfig.server.json file in the project root directory to configure TypeScript and AOT compilation of the universal app.
{
  "extends": "../tsconfig.json",
  "compilerOptions": {
    "outDir": "../out-tsc/app",
    "baseUrl": "./",
    "module": "commonjs",
    "types": []
  },
  "exclude": [
    "test.ts",
    "**/*.spec.ts"
  ],
  "angularCompilerOptions": {
    "entryModule": "app/app.server.module#AppServerModule"
  }
}
This config extends from the root's
tsconfig.json file. Certain settings are noteworthy for their differences.
The
module property must be commonjs which can be require()'d into our server application.
The
angularCompilerOptions section guides the AOT compiler:
entryModule - the root module of the server application, expressed as path/to/file#ClassName.
Universal applications don't need any extra Webpack configuration; the CLI takes care of that for you. But since the server is a TypeScript application, you will use Webpack to transpile it.
Create a
webpack.server.config.js file in the project root directory with the following code.
const path = require('path');
const webpack = require('webpack');

module.exports = {
  entry: { server: './server.ts' },
  resolve: { extensions: ['.js', '.ts'] },
  target: 'node',
  // this makes sure we include node_modules and other 3rd party libraries
  externals: [/(node_modules|main\..*\.js)/],
  output: {
    path: path.join(__dirname, 'dist'),
    filename: '[name].js'
  },
  module: {
    rules: [{ test: /\.ts$/, loader: 'ts-loader' }]
  },
  plugins: [
    // Temporary fix for issue:
    // 'WARNING Critical dependency: the request of a dependency is an expression'
    new webpack.ContextReplacementPlugin(
      /(.+)?angular(\\|\/)core(.+)?/,
      path.join(__dirname, 'src'), // location of your src
      {} // a map of your routes
    ),
    new webpack.ContextReplacementPlugin(
      /(.+)?express(\\|\/)(.+)?/,
      path.join(__dirname, 'src'),
      {}
    )
  ]
};
Webpack configuration is a rich topic beyond the scope of this guide.
Now that you've created the TypeScript and Webpack config files, you can build and run the Universal application.
First add the build and serve commands to the
scripts section of the
package.json:
"scripts": {
  ...
  "build:universal": "npm run build:client-and-server-bundles && npm run webpack:server",
  "serve:universal": "node dist/server.js",
  "build:client-and-server-bundles": "ng build --prod && ng build --prod --app 1 --output-hashing=false",
  "webpack:server": "webpack --config webpack.server.config.js --progress --colors"
  ...
}
From the command prompt, type
npm run build:universal
The Angular CLI compiles and bundles the universal app into two different folders,
browser and
server. Webpack transpiles the
server.ts file into JavaScript.
After building the application, start the server.
npm run serve:universal
The console window should say:

Node server listening on port 4000

Open a browser to the server's address (port 4000 by default). You should see the familiar Tour of Heroes dashboard page.
Navigation via
routerLinks works correctly. You can go from the Dashboard to the Heroes page and back. You can click on a hero on the Dashboard page to display its Details page.
But clicks, mouse-moves, and keyboard entries are inert.
User events other than
routerLink clicks aren't supported. The user must wait for the full client app to arrive.
It will never arrive until you compile the client app and move the output into the
dist/ folder, a step you'll take in just a moment.
The transition from the server-rendered app to the client app happens quickly on a development machine. You can simulate a slower network to see the transition more clearly and better appreciate the launch-speed advantage of a universal app running on a low powered, poorly connected device.
Open the Chrome Dev Tools and go to the Network tab. Find the Network Throttling dropdown on the far right of the menu bar.
Try one of the "3G" speeds. The server-rendered app still launches quickly but the full client app may take seconds to load.
This guide showed you how to take an existing Angular application and make it into a Universal app that does server-side rendering. It also explained some of the key reasons for doing so.
Angular Universal can greatly improve the perceived startup performance of your app. The slower the network, the more advantageous it becomes to have Universal display the first page to the user.
© 2010–2018 Google, Inc.
Licensed under the Creative Commons Attribution License 4.0.
list QML Basic Type
A list of QML objects.
The
list type refers to a list of QML objects.
A list value can be accessed in a similar way to a JavaScript array:
- Values are assigned using the [] square bracket syntax with comma-separated values
- The length property provides the number of items in the list
- Values in the list are accessed using the [index] syntax
A
list can only store QML objects, and cannot contain any basic type values. (To store basic types within a list, use the var type instead.)
When integrating with C++, note that any QQmlListProperty value passed into QML from C++ is automatically converted into a
list value, and vice-versa.
Using the list Type
For example, the Item type has a states list-type property that can be assigned to and used as follows:
import QtQuick 2.0

Item {
    width: 100; height: 100

    states: [
        State { name: "activated" },
        State { name: "deactivated" }
    ]

    Component.onCompleted: {
        console.log("Name of first state:", states[0].name)
        for (var i = 0; i < states.length; i++)
            console.log("state", i, states[i].name)
    }
}
The defined State objects will be added to the
states list in the order in which they are defined.
If the list only contains one object, the square brackets may be omitted:
import QtQuick 2.0

Item {
    width: 100; height: 100
    states: State { name: "activated" }
}
Note that objects cannot be individually added to or removed from the list once created; to modify the contents of a list, it must be reassigned to a new list.
Note: The
list type is not recommended as a type for custom properties. The
var type should be used instead for this purpose as lists stored by the
var type can be manipulated with greater flexibility from within QML.
Ben, I'm still trying to figure out the glibc 2.1.1 stuff. A lot of stuff seems to break if I use your 2.1.1-0.2 package (on a sun4u machine). I'm not sure why (the 2.1.1-0.1.1 package worked fine). I tried a recompile of the 0.2 package and it didn't help (2.2.5+cvs kernel). In an effort to figure out what is going wrong, I grabbed the Red Hat source package from their RawHide distribution (which runs fine with their 2.2.4 kernel on all machines). There were a few differences: they have nothing resembling our sigaction and sigstack patches (are they still needed?) They have an. Ben, I'm currently recompiling without sigstack and sigaction. (So I just have your chown patch.) If it works, we may also want to add the attached patch (it may fix the problems with non-cvs kernels). One other, general note: the Red Hat nscd init script doesn't start it on 2.0 kernels; they say that it won't run on kernels older than 2.2.0 because of threading problems. Steve dunham@cse.msu.edu
1999-03-28  Andreas Jaeger  <aj@arthur.rhein-neckar.de>

	* libio/iopopen.c (_IO_fork): Use fork instead of vfork since
	vfork doesn't allow e.g. closing dup2 and close calls.
	Fixes PR libc/966+967.

--- libio/iopopen.c.~1~	Mon Nov 23 19:58:12 1998
+++ libio/iopopen.c	Sun Mar 28 12:01:11 1999
@@ -1,4 +1,4 @@
-/* Copyright (C) 1993, 1997, 1998 Free Software Foundation, Inc.
+/* Copyright (C) 1993, 1997, 1998, 1999 Free Software Foundation, Inc.
    This file is part of the GNU IO Library.
    Written by Per Bothner <bothner@cygnus.com>.
@@ -42,9 +42,9 @@
 #ifndef _IO_fork
 #ifdef _LIBC
-#define _IO_fork __vfork
+#define _IO_fork __fork
 #else
-#define _IO_fork vfork /* defined in libiberty, if needed */
+#define _IO_fork fork /* defined in libiberty, if needed */
 #endif
 extern _IO_pid_t _IO_fork __P ((void));
 #endif

--
Andreas Jaeger   aj@arthur.rhein-neckar.de    jaeger@informatik.uni-kl.de
  for pgp-key finger ajaeger@aixd1.rhrk.uni-kl.de
middleware to serve npm-css processed files
connect/express middleware for serving
require enabled css files for use with web widgets.
In our app/server/whatever.js
app.use('/css/widgets.css', makeup(__dirname + '/css/widgets.css'));
widgets.css would look similar to:

    /*@require typeahead*/
    /* other css rules can go here */
Typeahead is a widget we installed via npm. It provides some javascript (ala commonjs style) and a base stylesheet. We want to use that base stylesheet and customize it for our needs.
Our widgets.css remains a valid css file. If passed through makeup, the @require statements will load any css which the typeahead module provided (via the package.json:style) field.
When we visit site.com/css/widgets.css, we will be served a single css file with the rules for our typeahead widget.
All of the typeahead widget rules will be namespaced with .typeahead.
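As a rough illustration of what that namespacing does (this toy function is not makeup's actual implementation, which does real CSS parsing; it only sketches the effect), each top-level selector from the module's stylesheet gets scoped under a class named after the module:

```javascript
// Illustrative only: a toy version of the namespacing effect.
// Each top-level selector is prefixed with ".<moduleName>".
function namespace(moduleName, css) {
  return css.replace(/(^|\})\s*([^{}@]+)\s*\{/g, (m, brace, selector) =>
    brace + ' .' + moduleName + ' ' + selector.trim() + ' {');
}

const base = '.hint { color: gray; } input { border: 1px solid; }';
console.log(namespace('typeahead', base));
```

Every rule then only applies inside an element carrying the typeahead class, which is what lets a module's base stylesheet coexist with the rest of the page's CSS.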
npm install makeup
https://www.npmjs.com/package/makeup
Problem with list_dialog in combination with an already presented view
Hi
I want to present a view with a tiny little ui design. I use a dialogs.list_dialog to choose between options at the beginning. Everything works fine. Now I want to put this in a loop. The ui is visible, I choose the option, I do something, and when I finish this part I want to show the list_dialog again. But the second time I get the message: "The view is already being presented." I analyzed some combinations and I see that every time ui.on_screen is true, the dialog comes up with the error message. If it is false, everything is fine. It is false when I press the cross button of the ui. Then I come to the dialog again and the view is not ui.on_screen.
Now the question: how can I close the ui in the script so that ui.on_screen is false? Neither ui.close nor ui.View.close(v) does it. When I close the view, ui.on_screen is still true.
Thanks Jens
@Kluesi Not sure I correctly understand (as usual with my poor English 😢)
This works
import dialogs

def l(items):
    f = dialogs.list_dialog(items=items)
    return f

while True:
    items = ['one', 'two']
    f = l(items)
    print(f)
    if f == None:
        break
Please post your code
That's funny. I worked on this code for many hours, and right after I posted the problem I found the solution. After v.close() I put v.wait_modal() and then it works. It seems that the close process is not completely done when the next command is interpreted. wait_modal waits until the view is completely closed. Maybe a workaround, but for me it is ok.
https://forum.omz-software.com/topic/5772/problem-with-list_dialog-in-combination-with-an-already-presented-view/1
NAME
DEVICE_PROBE - probe for device existence
SYNOPSIS
#include <sys/param.h> #include <sys/bus.h> int DEVICE_PROBE(device_t dev);
DESCRIPTION
The DEVICE_PROBE() method should probe to see if the device is present. It should return 0 if the device exists, ENXIO if it cannot be found. If some other error happens during the probe (such as a memory allocation failure), an appropriate error code should be returned. For cases where more than one driver matches a device, a priority value can be returned. In this case, success codes are values less than or equal to zero with the highest value representing the best match. Failure codes are represented by positive values and the regular UNIX error codes should be used for the purpose. If a driver returns a success code which is less than zero, it must not assume that it will be the same driver which is attached to the device. In particular, it must not assume that any values stored in the softc structure will be available for its attach method and any resources allocated during probe must be released and re-allocated if the attach method is called. In addition it is an absolute requirement that the probe routine have no side effects whatsoever. The probe routine may be called more than once before the attach routine is called. If a success code of zero is returned, the driver can assume that it will be the one attached, but must not hold any resources when the probe routine returns. A driver may assume that the softc is preserved when it returns a success code of zero.
RETURN VALUES
A value equal to or less than zero indicates success; a value greater than zero indicates an error (errno). For values equal to or less than zero: zero indicates highest priority, and no further probing is done; for a value less than zero, the lower the value the lower the priority, e.g. -100 indicates a lower priority than -50. The following values are used by convention to indicate different strengths of matching in a probe routine. Except as noted, these are just suggested values, and there's nothing magical about them.

BUS_PROBE_SPECIFIC
The device cannot be reprobed, and no other possible driver may exist (typically legacy drivers that don't follow all the rules, or special-needs drivers).

BUS_PROBE_VENDOR
The device is supported by a vendor driver. This is for source or binary drivers that are not yet integrated into the FreeBSD tree. Its use in the base OS is prohibited.

BUS_PROBE_DEFAULT
The device is a normal device matching some plug and play ID. This is the normal return value for drivers to use. It is intended that nearly all of the drivers in the tree should return this value.

BUS_PROBE_LOW_PRIORITY
The driver is a legacy driver, or an otherwise less desirable driver for a given plug and play ID. The driver has special requirements, as when there are two drivers that support overlapping series of hardware devices. In this case the one that supports the older part of the line would return this value, while the one that supports the newer ones would return BUS_PROBE_DEFAULT.

BUS_PROBE_GENERIC
The driver matches the type of device generally. This allows drivers to match all serial ports generally, with specialized drivers matching particular types of serial ports that need special treatment for some reason.

BUS_PROBE_HOOVER
The driver matches all unclaimed devices on a bus. The ugen(5) device is one example.

BUS_PROBE_NOWILDCARD
The driver expects its parent to tell it which children to manage and no probing is really done.
The device only matches if its parent bus specifically said to use this driver.
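As a sketch of the shape of a conforming probe routine: the structure below mirrors a typical PCI newbus driver, but the types and the pci_get_vendor/pci_get_device helpers are stubbed so the fragment stands alone, and the 0x1234/0x5678 IDs are made up for illustration.

```c
/* Stubs standing in for <sys/param.h>, <sys/bus.h> and <dev/pci/pcivar.h>,
 * so this sketch compiles outside the kernel. */
#define ENXIO             6
#define BUS_PROBE_DEFAULT (-20)

struct device { unsigned short vendor, devid; };
typedef struct device *device_t;

static unsigned short pci_get_vendor(device_t dev) { return dev->vendor; }
static unsigned short pci_get_device(device_t dev) { return dev->devid; }

/* Hypothetical plug-and-play IDs, for illustration only. */
#define MYDEV_VENDOR 0x1234
#define MYDEV_DEVICE 0x5678

/*
 * A typical probe: match the ID, allocate nothing, keep no state, and
 * return BUS_PROBE_DEFAULT on a match so that a more specific driver
 * can still outbid this one.
 */
static int
mydev_probe(device_t dev)
{
	if (pci_get_vendor(dev) == MYDEV_VENDOR &&
	    pci_get_device(dev) == MYDEV_DEVICE)
		return (BUS_PROBE_DEFAULT);
	return (ENXIO);
}
```

Note that the routine has no side effects and holds no resources on return, as the DESCRIPTION section requires.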
SEE ALSO
device(9), DEVICE_ATTACH(9), DEVICE_DETACH(9), DEVICE_IDENTIFY(9), DEVICE_SHUTDOWN(9)
AUTHORS
This manual page was written by Doug Rabson.
http://manpages.ubuntu.com/manpages/maverick/man9/DEVICE_PROBE.9freebsd.html
Save 37% off Deep Learning and the Game of Go. Just enter fccpumperla into the discount code box at checkout at manning.com.
How can we program a computer to decide what move to make next in a game? To start, we can think about how humans would make the same decision. Let's start with the simplest deterministic perfect information game there is: tic-tac-toe. The technical name for the strategy we'll describe is minimaxing. "Minimaxing" is a contraction of "minimizing and maximizing": you are trying to maximize your score, while your opponent is trying to minimize your score. You can sum up the algorithm in one sentence: assume your opponent is as smart as you are. Let's see how minimaxing works in practice.
Figure 1. What move should X make next? This is an easy one: playing in the lower right corner wins the game.
Take a look at figure 1. What move should X make next? There’s no trick here; taking the lower right corner wins the game. We can make that into a general rule: take any move that immediately wins the game. There’s no way this plan can go wrong. We could implement this rule in code with something like this:
def find_winning_move(game_state, next_player):
    for candidate_move in game_state.legal_moves(next_player):    ❶
        next_state = game_state.apply_move(candidate_move)        ❷
        if next_state.is_over() and next_state.winner == next_player:
            return candidate_move                                 ❸
    return None                                                   ❹
❶ Loop over all legal moves
❷ Calculates what the board would look like if we pick this move
❸ This is a winning move! No need to continue searching
❹ Can’t win on this turn
Figure 2 illustrates the hypothetical board positions this function would examine. This structure, where a board position points to possible follow-ups, is called a game tree.
Figure 2. An illustration of an algorithm to find the winning move. We start with the position at the top. We loop over every possible move and calculate the game state that would result if we played that move. Then we check whether that hypothetical game state is a winning position for X.
Let’s back up a bit. How did we get into this position? Perhaps the previous position looked like figure 3. The O player naively hoped to make three in a row across the bottom. But that assumes that X will cooperate with the plan. This gives a corollary to our previous rule: don’t choose any move that gives our opponent a winning move.
Figure 3. What move should O make next? If O plays in the lower left, we must assume that X will follow up in the lower right to win the game. O must find the only move that prevents this.
def eliminate_losing_moves(game_state, next_player):
    opponent = next_player.other()
    possible_moves = []                                               ❶
    for candidate_move in game_state.legal_moves(next_player):        ❷
        next_state = game_state.apply_move(candidate_move)            ❸
        opponent_winning_move = find_winning_move(next_state, opponent)   ❹
        if opponent_winning_move is None:                             ❹
            possible_moves.append(candidate_move)                     ❹
    return possible_moves
❶ possible_moves will become a list of all moves worth considering
❷ Loops over all legal moves
❸ Calculates what the board would look like if we play this move
❹ Does this give our opponent a winning move? If not, this move is plausible
Figure 4. What move should X make? If X plays in the center, then there will be two different ways to complete three-in-a-row: top middle and lower right. O can only block one of them, so X is guaranteed a win.
Now, we know that we must block our opponent from getting into a winning position. Therefore we should assume that our opponent is going to do the same to us. With that in mind, how can we play to win? Take a look at the board in figure 4. If we play in the center, we have two ways to complete three in a row: top middle or lower right. The opponent can’t block them both. We can describe this general principle as: look for a move where our opponent can’t block from setting up a winning move. Sounds complicated, but it’s actually easy to build this logic on top of the functions we’ve already written:
def find_two_step_win(game_state, next_player):
    opponent = next_player.other()
    for candidate_move in game_state.legal_moves(next_player):        ❶
        next_state = game_state.apply_move(candidate_move)            ❷
        good_responses = eliminate_losing_moves(next_state, opponent)     ❸
        if not good_responses:                                        ❸
            return candidate_move                                     ❸
    return None                                                       ❹
❶ Loop over all legal moves
❷ Calculates what the board would look like if we play this move
❸ Does our opponent have a good defense? If not, pick this move
❹ No matter what move we pick, our opponent can prevent a win
Of course, our opponent will anticipate that we will try to do this, and also try to block such a play. We can start to see a general strategy forming:
- First, see if we can win on the next move. If so, play that move.
- If not, see if our opponent can win on the next move. If so, block that.
- If not, see if we can force a win in two moves. If so, play to set that up.
- If not, see if our opponent could set up a two-move win on their next move…
Notice that all three of our functions have a similar structure. Each function loops over all valid moves and examines the hypothetical board position that we’d get after playing that move. Furthermore, each function builds on the previous function to simulate what our opponent would do in response. If we generalize this concept, we get an algorithm that can always identify the best possible move.
Solving tic-tac-toe: a minimax example
In the previous section, we examined how to anticipate your opponent’s play one or two moves ahead. Here we show how to generalize that strategy to pick perfect moves in tic-tac-toe. The core idea is exactly the same, but we need the flexibility to look an arbitrary number of moves in the future.
First let’s define an enum that represents the three possible outcomes of a game: win, loss, or draw. These possibilities are defined relative to a particular player: a loss for one player is a win for the other.
Listing 1. An enum to represent the outcome of a game.
import enum

class GameResult(enum.Enum):
    loss = 1
    draw = 2
    win = 3
Imagine we had a function best_result that took a game state and told us the best outcome that a player could achieve from that state. If that player could guarantee a win—by any sequence, no matter how complicated—the best_result function would return GameResult.win. If that player could force a draw, it would return GameResult.draw. Otherwise, it would return GameResult.loss. If we assume that function already exists, it's easy to write a function to pick a move: we loop over all possible moves, call best_result, and pick the move that leads to the best result for us. Of course, there may be multiple moves that lead to equal results; we can just pick randomly from them in that case. Listing 2 shows how to implement this.
Listing 2. A game-playing agent that implements minimax search.
class MinimaxAgent(Agent):
    def select_move(self, game_state):
        winning_moves = []
        draw_moves = []
        losing_moves = []
        for possible_move in game_state.legal_moves():               ❶
            next_state = game_state.apply_move(possible_move)        ❷
            opponent_best_outcome = best_result(next_state)          ❸
            our_best_outcome = reverse_game_result(opponent_best_outcome)  ❸
            if our_best_outcome == GameResult.win:                   ❹
                winning_moves.append(possible_move)                  ❹
            elif our_best_outcome == GameResult.draw:                ❹
                draw_moves.append(possible_move)                     ❹
            else:                                                    ❹
                losing_moves.append(possible_move)                   ❹
        if winning_moves:                                            ❺
            return random.choice(winning_moves)                      ❺
        if draw_moves:                                               ❺
            return random.choice(draw_moves)                         ❺
        return random.choice(losing_moves)                           ❺
❶ Loops over all legal moves
❷ Calculates the game state if we select this move
❸ Since our opponent plays next, figure out their best possible outcome from there. Our outcome is the opposite of that
❹ Categorizes this move according to its outcome
❺ Picks a move that leads to our best outcome
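Listing 2 relies on a reverse_game_result helper that isn't shown in this excerpt. A plausible definition (my assumption, based on how the GameResult enum is used; the enum is repeated here so the snippet stands alone) simply swaps wins and losses:

```python
import enum

class GameResult(enum.Enum):
    loss = 1
    draw = 2
    win = 3

def reverse_game_result(game_result):
    # A win for one player is a loss for the other; a draw stays a draw.
    if game_result == GameResult.loss:
        return GameResult.win
    if game_result == GameResult.win:
        return GameResult.loss
    return GameResult.draw
```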
Now the question is how to implement best_result. As in the previous section, we can start from the end of the game and work backward. Listing 3 shows the easy case: if the game is already over, there's only one possible result. We just return it.
Listing 3. First step of the minimax search algorithm. If the game is already over, we already know the result.
def best_result(game_state):
    """Find the best result that next_player can get from this game state.

    Returns:
        GameResult.win if next_player can guarantee a win
        GameResult.draw if next_player can guarantee a draw
        GameResult.loss if, no matter what next_player chooses, the
            opponent can still force a win
    """
    if game_state.is_over():
        if game_state.winner() == game_state.next_player:
            return GameResult.win
        elif game_state.winner() is None:
            return GameResult.draw
        else:
            return GameResult.loss
If we’re somewhere in the middle of the game, we need to search ahead. By now, the pattern should be familiar. We start by looping over all possible moves and calculating the next game state. Then we must assume our opponent will do their best to counter our hypothetical move. To do so, we can just call
best_result from this new position. That tells us the result our opponent can get from the new position; we invert it to find out our result. Out of all the moves we consider, we select the one that leads to the best result for us. Listing 4 shows how to implement this logic, which makes up the second half of
best_result. Figure 5 illustrates the board positions this function will consider for a particular tic-tac-toe board.
Figure 5. A tic-tac-toe game tree. In the top position, it is X’s turn. If X plays in the top center, then O can guarantee a win. If X plays in the left center, X will win. If X plays right center, then O can force a draw. Therefore X will choose to play in the left center.
Listing 4. Implementing minimax search.
    best_result_so_far = GameResult.loss
    opponent = game_state.next_player.other
    for candidate_move in game_state.legal_moves():
        next_state = game_state.apply_move(candidate_move)       ❶
        opponent_best_result = best_result(next_state)           ❷
        our_result = reverse_game_result(opponent_best_result)   ❸
        if our_result.value > best_result_so_far.value:          ❹
            best_result_so_far = our_result
    return best_result_so_far
❶ See what the board would look like if we play this move.
❷ Find out our opponent’s best move.
❸ Whatever our opponent wants, we want the opposite.
❹ See if this result is better than the best we’ve seen so far.
If we apply this algorithm to a simple game such as tic-tac-toe, we get an unbeatable opponent. You can play against it and see for yourself: try the play_ttt.py example on GitHub. In theory, this algorithm would also work for chess, Go, or any other deterministic perfect information game. In reality, it's far too slow for any of those games.
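To see the recursive structure of best_result in a fully runnable form, here is the same idea applied to a much smaller game — simple Nim, where players alternately take 1 to 3 stones and whoever takes the last stone wins. This is my own toy example, not code from the book; the game state is just the number of stones left, so no move/board classes are needed:

```python
import enum

class GameResult(enum.Enum):
    loss = 1
    draw = 2
    win = 3

def best_result(stones):
    # Best outcome for the player to move. There are no draws in this game.
    if stones == 0:
        return GameResult.loss   # the opponent just took the last stone
    for take in (1, 2, 3):
        # A move is good if it leaves the opponent in a losing position.
        if take <= stones and best_result(stones - take) == GameResult.loss:
            return GameResult.win
    return GameResult.loss

# Positions that are multiples of 4 are lost for the player to move.
print([n for n in range(1, 13) if best_result(n) == GameResult.loss])  # → [4, 8, 12]
```

The recursion mirrors Listings 3 and 4 exactly: a terminal case at the bottom, and a loop over moves that inverts the opponent's best result.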
That’s all for now. If you want to learn more about the book, check it out on liveBook here and see this slide deck.
https://freecontent.manning.com/anticipating-your-opponent-with-minimax-search/
Walkthrough: Caching Application Data in ASP.NET
Caching enables you to store data in memory for rapid access. Applications can access the cache and avoid retrieving the data from the original source each time the data is needed.
This walkthrough shows you how to use the caching functionality that is available in the .NET Framework as part of an ASP.NET application. In the walkthrough, you cache the contents of a text file.
Tasks illustrated in this walkthrough include the following:
Creating an ASP.NET Web site.
Adding a reference to the .NET Framework 4.
Adding a cache entry that caches the contents of a file.
Providing an eviction policy for the cache entry.
Monitoring the path of the cached file, and notifying the cache of changes to the monitored items.
You will start by creating an ASP.NET Web site.
To create an ASP.NET Web site
Start Visual Studio 2010 and create a new ASP.NET Web site named AppCaching. When you open Default.aspx, Visual Web Developer displays the page in Source view, where you can see the page's HTML elements.
The next step is to add the text file whose contents you will cache. To use the System.Runtime.Caching namespace in an ASP.NET application, you must add a reference to the namespace.
To add a reference to the Web site
In Solution Explorer, right-click the name of the Web site and then click Add Reference.
Select the .NET tab, select System.Runtime.Caching, and then click OK.
The next step is to add a button and a Label control to the page. You will create an event handler for the button's Click event. Later you will add code so that when you click the button, the cached text is displayed in the Label control.
To add controls to the page
Open or switch to the Default.aspx page.
From the Standard tab of the Toolbox, drag a Button control to the Default.aspx page.
In the Properties window, set the Text property of the Button control to Get From Cache. Accept the default ID property.
From the Standard tab of the Toolbox, drag a Label control to the page. Accept the default ID property.
Next, you will add the code to perform the following tasks:
Create an instance of the cache class—that is, you will instantiate the cache. In Default.aspx, double-click the Button control to create an event handler in the Default.aspx.cs or Default.aspx.vb file.
At the top of the file (before the class declaration), add the following Imports (Visual Basic) or using (C#) statements.
In the event handler, add the following code to instantiate the cache.
The ObjectCache is a base class that provides methods for implementing an in-memory cache object.
Add the following code to read the contents of a cache entry named filecontents.
Add the following code to check whether the cache entry named filecontents exists.
If the specified cache entry does not exist, you must read the text file and add it as a cache entry to the cache.
In the if/then block, add the following code to create a new CacheItemPolicy object that specifies that the cache expires after 10 seconds.
Inside the if/then block and following the code you added in the previous step, add the following code to create a collection for the file paths that you want to monitor and to add the path of the text file to the collection.
The HttpServerUtility.MapPath() method returns the path to the root of the current Web site.
Following the code you added in the previous step, add the following code to add a new HostFileChangeMonitor object to the collection of change monitors for the cache entry.
The HostFileChangeMonitor object monitors the text file's path and notifies the cache if changes occur. In this example, the cache entry will automatically expire if the contents of the file changes.
Following the code you added in the previous step, add the following code to read the contents of the text file.
The date and time timestamp is added to help you determine when the cache entry expires.
Following the code you added in the previous step, add the following code to insert the contents of the file into the cache object as a CacheItem instance.
You specify information about how the cache entry should be evicted by passing the CacheItemPolicy object as a parameter to the Set method.
After the if/then block, add the following code to display the cached file content in a Label control.
You can now test the application.
To test caching in the ASP.NET Web site
Press CTRL+F5 to run the application.
Click Get From Cache.
The cached content in the text file is displayed in the label. Notice the timestamp at the end of the file.
Click Get From Cache again.
The timestamp is unchanged. This indicates the cached content is displayed.
Wait 10 seconds or more and then click Get From Cache again.
This time a new timestamp is displayed. This indicates that the policy let the cache expire after 10 seconds and that new cached content is displayed.
In a text editor, open the text file that you added to the Web site project. Do not make any changes yet.
Click Get From Cache again.
Notice the time stamp again.
Make a change to the text file and then save the file.
Click Get From Cache again.
This time the timestamp changes immediately. This indicates that the host-file change monitor evicted the cache item immediately when you made a change.
After you have completed this walkthrough, the code for the Web site you created will resemble the following example.
using System;
using System.Collections.Generic;
using System.Linq;
using System.Web;
using System.Web.UI;
using System.Web.UI.WebControls;
using System.Runtime.Caching;
using System.IO;

public partial class _Default : System.Web.UI.Page
{
    protected void Button1_Click1(object sender, EventArgs e)
    {
        // Instantiate the cache.
        ObjectCache cache = MemoryCache.Default;

        // Read the cache entry, if it exists.
        string fileContents = cache["filecontents"] as string;

        if (fileContents == null)
        {
            // Expire the entry 10 seconds after it is added.
            CacheItemPolicy policy = new CacheItemPolicy();
            policy.AbsoluteExpiration = DateTimeOffset.Now.AddSeconds(10.0);

            // Monitor the cached file for changes.
            List<string> filePaths = new List<string>();
            string cachedFilePath = Server.MapPath("~") + "\\cacheText.txt";
            filePaths.Add(cachedFilePath);
            policy.ChangeMonitors.Add(new HostFileChangeMonitor(filePaths));

            // Fetch the file contents.
            fileContents = File.ReadAllText(cachedFilePath) + "\n" + DateTime.Now.ToString();
            cache.Set("filecontents", fileContents, policy);
        }
        Label1.Text = fileContents;
    }
}
In ASP.NET, you can use multiple cache implementations to cache data. For more information, see Caching Application Data by Using Multiple Cache Objects in an ASP.NET Application.
http://msdn.microsoft.com/en-us/library/ff477235(v=vs.100).aspx
Ten Little Algorithms, Part 5: Quadratic Extremum Interpolation and Chandrupatla's Method

This article covers quadratic extremum interpolation, a simple technique useful in root-finding and minimization. As a bonus, we'll also look at a nifty root-finding method that uses quadratic interpolation as well.
You’ve probably heard of linear interpolation, where you are trying to find a point along the line containing the points \( (x_1, y_1) \) and \( (x_2, y_2) \). There’s not much to say about it; if you have an x-coordinate \( x \) then the corresponding y-coordinate is \( y = y_1 + \frac{y_2-y_1}{x_2-x_1}(x-x_1) \). The inversion of the problem looks the same: given a y-coordinate \( y \), find the x-coordinate by computing \( x = x_1 + \frac{x_2-x_1}{y_2-y_1}(y-y_1) \). Both are linear in \( x \) and \( y \).
What if we had three points \( p_1 = (x_1, y_1) \), \( p_2 = (x_2, y_2) \), and \( p_3 = (x_3, y_3) \) and we wanted to find a curve with a quadratic equation that passes through all three of them?
This is an example of Lagrange interpolation, and the form of the equation is roughly the same as linear interpolation, only with more stuff:
$$ y = y_1 \left(\frac{x-x_2}{x_1-x_2}\cdot\frac{x-x_3}{x_1-x_3}\right) + y_2 \left(\frac{x-x_1}{x_2-x_1}\cdot\frac{x-x_3}{x_2-x_3}\right) + y_3 \left(\frac{x-x_1}{x_3-x_1}\cdot\frac{x-x_2}{x_3-x_2}\right) $$
When you evaluate it at one of the three points, each of the parenthesized fractions here has a value of 1 or 0; the first term, for example, has a value of zero if \( x = x_2 \) or \( x = x_3 \), and a value of \( y_1 \) if \( x = x_1 \).
Rather than getting stuck in another game of Grungy Algebra again, let’s use an example:
- \( p_1 = (1,3) \)
- \( p_2 = (5,2) \)
- \( p_3 = (7,5) \)
If we run through Lagrange’s formula, we get
$$\begin{eqnarray} y &=& 3 \frac{x-5}{1-5}\cdot\frac{x-7}{1-7} + 2\frac{x-1}{5-1}\cdot\frac{x-7}{5-7} + 5\frac{x-1}{7-1}\cdot\frac{x-5}{7-5} \cr &=& \tfrac{3}{24}(x^2 - 12x + 35) - \tfrac{6}{24}(x^2 - 8x + 7) + \tfrac{10}{24}(x^2 - 6x + 5) \cr &=& \tfrac{1}{24}(7x^2 - 48x + 113) \end{eqnarray}$$
Now that we have this equation, we could find the minimum of the parabola; for a quadratic equation \( ax^2 + bx + c \), the minimum occurs at \( x = -\frac{b}{2a} \), so if we plug in \( x = \frac{24}{7} \) and do all the arithmetic, we get \( y = \frac{215}{168} \), so the point shown is \( p_0 = (\frac{24}{7}, \frac{215}{168}) \).
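We can check the worked example numerically. The sketch below (my own verification, stdlib only, using exact fractions so nothing is lost to rounding) evaluates the Lagrange form at the three points and at the claimed vertex:

```python
from fractions import Fraction as F

def lagrange(x, pts):
    # Direct evaluation of the Lagrange interpolation formula.
    total = F(0)
    for i, (xi, yi) in enumerate(pts):
        term = F(yi)
        for j, (xj, _) in enumerate(pts):
            if i != j:
                term = term * (x - xj) / (xi - xj)
        total += term
    return total

pts = [(1, 3), (5, 2), (7, 5)]
for xi, yi in pts:
    assert lagrange(xi, pts) == yi   # the curve passes through all three points

# y = (7x^2 - 48x + 113)/24, so the vertex is at x = -b/(2a) = 48/14 = 24/7
print(lagrange(F(24, 7), pts))  # → 215/168
```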
OK, great — so what?
Finding a maximum between samples
Let’s say we have some regularly-sampled data and we want to find the maximum value of a waveform. Here’s the step response of a second-order system with natural frequency 405 rad/s and damping factor \( \zeta=0.52 \):
import numpy as np
import matplotlib.pyplot as plt
import scipy.signal

# step response of 1/(s^2/w0^2 + 2*zeta*s/w0 + 1)
w0 = 405
zeta = 0.52
H = scipy.signal.lti([1], [1.0/w0/w0, 2*zeta/w0, 1])
t, y = scipy.signal.step(H, T=np.arange(0, 0.01401, 0.0002))
plt.plot(t, y)
t1, y1 = scipy.signal.step(H, T=np.arange(0, 0.01401, 0.002))
plt.plot(t1, y1, '.')
for tj, yj in zip(t1[4:7], y1[4:7]):
    plt.text(tj, yj, '\n%.5f' % yj, verticalalignment='top')
If we wanted to find the maximum value of this waveform, we could try to find a closed-form equation and then find its maximum. But that’s not an easy problem in general; it can be done in limited cases (like step responses) but not for the response of general input waveforms.
Alternatively, we could try to get the maximum value numerically. We will get a more accurate answer the closer these sampling points are. But that comes with a cost. We’d have to simulate more samples, and our simulation might be expensive.
One approach we could use is to find the maximum value of the sampled points, and look at its two adjacent neighbors, and then fit a parabola to these points. Here we have \( t_0 = 0.008 \), \( t_1 = 0.010 \), and \( t_2 = 0.012 \).
This is a much easier interpolation effort. We can rescale the x-axis as \( u = \frac{t-t_1}{\Delta t} \), where \( \Delta t \) is the sampling interval; then this equation becomes
$$ \begin{eqnarray} y _ 0 &=& \left.au^2 + bu + c\right| _ {u=-1} &= a-b+c \cr y _ 1 &=& \left.au^2 + bu + c\right| _ {u=0} &= c\cr y _ 2 &=& \left.au^2 + bu + c\right| _ {u=1} &= a+b+c \end{eqnarray} $$
and then we have \( c = y_1 \), \( b = \frac{1}{2}(y_2 - y_0) \), and \( a = \frac{1}{2}(y_2 + y_0) - c \). With our example system, that means \( c = 1.13878 \), \( b = -0.023854 \), and \( a = -0.031247 \).
The maximum value of this polynomial occurs at \( u=-\frac{b}{2a} \). This is true for all quadratic equations (remember completing the square from high school math?). At this value of \( u \), \( y = a\frac{b^2}{4a^2} - b\frac{b}{2a} + c = c - \frac{b^2}{4a} \).
plt.plot(t, y, 'b')
plt.xlim(0.0079, 0.0121)
plt.ylim(1.05, 1.2)
tpts = t1[4:7]
ypts = y1[4:7]
plt.plot(tpts, ypts, 'b.', markersize=8)
c = ypts[1]
b = (ypts[2] - ypts[0]) / 2.0
a = (ypts[2] + ypts[0]) / 2.0 - c
print a, b, c
t2 = np.arange(0.008, 0.012, 0.0001)
u2 = (t2 - tpts[1]) / (tpts[1] - tpts[0])
plt.plot(t2, a*u2*u2 + b*u2 + c, 'r--')
t3 = (tpts[1] - tpts[0]) * (-b/2.0/a) + tpts[1]
y3 = c - b*b/4.0/a
plt.plot(t3, y3, 'r.', markersize=8)
plt.annotate('(%.5f,%.5f)' % (t3, y3), xy=(t3, y3),
             xytext=(0.009, 1.12), horizontalalignment='center',
             arrowprops=dict(arrowstyle="->"))
-0.0312470769869 -0.0238536034466 1.13878353174
We predict an overshoot of 0.14334. The actual overshoot here is \( e^{-\pi \frac{\zeta}{\sqrt{1-\zeta^2}}} \); for \( \zeta=0.52 \) we get 0.14770 instead. Not bad. Here’s the whole algorithm:
def qinterp_max(y, extras=False):
    '''
    Find the maximum point in an array, and if it's an interior point,
    interpolate among it and its two nearest neighbors to predict the
    interpolated maximum. Returns that interpolated maximum; if
    extras=True, also returns the coefficients and the index and value
    of the sample maximum.
    '''
    imax = 0
    ymax = -float('inf')
    # run through the points to find the maximum
    for i, y_i in enumerate(y):
        if y_i > ymax:
            imax = i
            ymax = y_i
    # no interpolation if at the ends
    if imax == 0 or imax == len(y) - 1:
        return ymax
    # otherwise, y[imax] >= either of its neighbors,
    # and we use quadratic interpolation:
    c = y[imax]
    b = (y[imax+1] - y[imax-1]) / 2.0
    a = (y[imax+1] + y[imax-1]) / 2.0 - c
    yinterp = c - b*b/4.0/a
    if extras:
        return yinterp, (a, b, c), imax, ymax
    else:
        return yinterp
OK, now let’s use it and see how its accuracy varies with step size:
ymax_actual = 1 + np.exp(-np.pi*zeta/np.sqrt(1-zeta**2))
print 'actual peak: ', ymax_actual

def fit_power(x, y):
    lnx = np.log(x)
    lny = np.log(np.abs(y))
    A = np.vstack((np.ones(lnx.shape), lnx)).T
    c, m = np.linalg.lstsq(A, lny)[0]
    C = np.exp(c)
    return C, m

dt_array = []
errq_array = []
err_array = []
for k in xrange(10):
    delta_t = 0.002 / (2**k)
    dt_array.append(delta_t)
    t, y = scipy.signal.step(H, T=np.arange(0, 0.015, delta_t))
    ymax_interp, coeffs, imax, ymax = qinterp_max(y, extras=True)
    err = ymax - ymax_actual
    err_array.append(err)
    errq = ymax_interp - ymax_actual
    errq_array.append(errq)
    print '%d %.6f %.8f %g' % (k, delta_t, ymax_interp, errq)

C, m = fit_power(dt_array, err_array)
print 'sampled error ~= C * dt^m; C=%f, m=%f' % (C, m)
C, m = fit_power(dt_array, errq_array)
print 'interpolated error ~= C * dt^m; C=%f, m=%f' % (C, m)

plt.loglog(dt_array, np.abs(err_array), '+',
           dt_array, np.abs(errq_array), '.')
dt_bestfit = np.array([2e-6, 5e-3])
plt.plot(dt_bestfit, C*dt_bestfit**m, '--')
plt.grid('on')
plt.xlabel('$\Delta t$', fontsize=15)
plt.ylabel('error')
plt.legend(('sampled', 'interpolated'), loc='best')
actual peak:  1.14770455973
0 0.002000 1.14333591 -0.00436865
1 0.001000 1.14789591 0.000191353
2 0.000500 1.14774129 3.67308e-05
3 0.000250 1.14771241 7.84859e-06
4 0.000125 1.14770355 -1.01057e-06
5 0.000063 1.14770467 1.14566e-07
6 0.000031 1.14770454 -1.72937e-08
7 0.000016 1.14770456 1.30011e-09
8 0.000008 1.14770456 2.79934e-10
9 0.000004 1.14770456 -1.62288e-11
sampled error ~= C * dt^m; C=382.019171, m=1.923258
interpolated error ~= C * dt^m; C=326991.894388, m=2.977363
Our interpolated error has a very clear cubic dependency on the timestep Δt. This makes sense: pretty much whenever you are using a polynomial method of degree \( n \), you will get an error that is a polynomial of degree \( n+1 \). So if I divide the timestep by 10, my error will decrease by a factor of around 1000.
The uninterpolated error has a quadratic dependency on the timestep Δt. So quadratic interpolation in this case is a clear winner for small timesteps. What constitutes a small timestep? Well, in order for interpolation to be accurate, the data needs to be smooth, and "quadratic-looking". Perhaps if you use Chebyshev approximation to fit a quadratic to a series of 5-10 successive points, and you compare the residuals to the magnitude of the Chebyshev quadratic component, you'd get an idea. Let's just try that for two timesteps. Looking at the graph, it seems like a timestep of 10^-3 is not very quadratic (interpolation actually gives us a slightly higher error than taking the raw samples), but a timestep of 10^-4 is reasonably quadratic.
for delta_t in [1e-3, 1e-4]:
    t,y = scipy.signal.step(H, T=np.arange(0,0.015,delta_t))
    ymax_interp, coeffs, imax, ymax = qinterp_max(y, extras=True)
    tt = t[imax-5:imax+6]
    yy = y[imax-5:imax+6]
    fig = plt.figure()
    ax = fig.add_subplot(1,1,1)
    ax.plot(tt,yy, '.')
    a,b,c = coeffs
    tinterp = np.linspace(tt[0],tt[-1],100)
    uinterp = (tinterp - t[imax])/delta_t
    plt.plot(tinterp, a*uinterp**2 + b*uinterp + c, '--')
    plt.plot(t[imax] - b/2.0/a*delta_t, ymax_interp, 'x')
    plt.legend(('sample points','quadratic eqn containing\nmax + 2 neighbors',
                'quad interpolated max'), loc='best')
    # find u linearly related to tt such that u spans the interval -1 to +1
    u = (tt*2 - (tt[-1] + tt[0])) / (tt[-1] - tt[0])
    coeffs, extra = np.polynomial.chebyshev.chebfit(u,yy,2,full=True)
    rms_residuals = np.sqrt(extra[0][0])
    print 'coeffs=',coeffs, 'RMS residual=',rms_residuals
coeffs= [ 0.99384519  0.13676706 -0.15229459] RMS residual= 0.10100632695
coeffs= [  1.14619769e+00  -7.34868794e-05  -1.50279049e-03] RMS residual= 0.000133276857498
In the first case, with timestep = \( 10^{-3} \), the RMS of the residuals is almost as large as the magnitude of the quadratic component; with timestep = \( 10^{-4} \), the RMS of the residuals is only about a tenth as large as the magnitude of the quadratic component.
Yes, we could use more points and/or fit a cubic or a quartic equation to them, but quadratic interpolation based on three sample points is fast and simple, and does the job.
Root-finding and Chandrupatla’s Method
Today you’re going to get two algorithms for the price of one. This article was originally going to be about Brent’s method for finding the root of an equation numerically. It is an improvement developed by Richard Brent in 1973, on an earlier algorithm developed by T.J. Dekker in 1969. Brent’s method is awesome! It’s the de facto standard for root-finding of a single-variable function, used in
scipy.optimize.brentq in Python,
org.apache.commons.math3.analysis.solvers.BrentSolver in Java, and
fzero in MATLAB.
Let’s say you want to find some value of x such that \( \tan x - x = 0.1 \). This is a nonlinear equation with a non-algebraic solution: we’re not going to find a closed-form solution in terms of elementary functions. All we have to do is define a function \( f(x) = \tan x - x - 0.1 \) and bound some interval such that the function is continuous within that interval, and when \( f(x) \) is evaluated at the endpoints of that interval, they have opposite signs. This is called bracketing the root. That’s pretty easy: \( f(0) = -0.1 \) and \( f(\pi/4) = 0.9 - \pi/4 \approx 0.115 \).
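Before handing the interval to a solver, it's worth verifying the bracketing condition numerically. This is a trivial sanity check of my own, not from the article:

```python
import numpy as np

def f(x):
    return np.tan(x) - x - 0.1

# opposite signs at the endpoints: the interval [0, pi/4] brackets a root
assert f(0) < 0 < f(np.pi / 4)
```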
Then we run Brent’s method with the input of \( f(x) \) and the interval \( x \in [0, \pi/4] \):
import scipy.optimize

def f(x):
    return np.tan(x) - x - 0.1

x1 = scipy.optimize.brentq(f, 0, np.pi/4)
print 'x=',x1, 'f(x1)=',f(x1)

x = np.arange(0, 0.78, 0.001)
plt.plot(x, np.tan(x) - x)
plt.plot(x1, np.tan(x1) - x1, '.', markersize=8)
plt.grid('on')
x= 0.631659472661 f(x1)= 1.94289029309e-16
There’s only one problem: Brent’s method is… um… well I’d call it unsatisfying. It works very well by alternating between three different kinds of iteration steps to narrow the bracket around the root, but it’s very heuristic in doing this.
General root-finding methods are very tricky, because functions themselves can be full of crazy nasty behaviors. (Just look at the Weierstrass function.) I’m not going to cover the whole topic of root-finding: it’s long and full of pitfalls, and you want to use a library function when you can. So here’s the 3-minute version, applicable to continuous functions of one variable within a given interval bracketing a root:
The simple robust algorithm is bisection, where we evaluate the function at the interval midpoint, guaranteeing that one of the half-intervals also brackets a root. This converges linearly, taking roughly 53 steps for 64-bit IEEE-754 double-precision calculations since they contain a 53-bit mantissa, and bisection gains one bit of precision at each iteration.
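As a point of reference, plain bisection is only a few lines. This is a minimal sketch of my own, not the article's code; the iteration count and the test function are arbitrary choices here:

```python
import numpy as np

def bisect(f, a, b, iters=60):
    # classic bisection: evaluate at the midpoint and keep whichever
    # half-interval still has opposite signs of f at its endpoints
    fa = f(a)
    for _ in range(iters):
        m = 0.5 * (a + b)
        fm = f(m)
        if fm == 0:
            return m
        if np.sign(fm) == np.sign(fa):
            a, fa = m, fm   # root is in [m, b]
        else:
            b = m           # root is in [a, m]
    return 0.5 * (a + b)

root = bisect(lambda x: np.tan(x) - x - 0.1, 0, np.pi/4)
```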
The faster algorithms include things like Newton’s method, the secant method, and inverse quadratic interpolation, which converge much more quickly… except sometimes they don’t converge at all. Newton’s method requires either a closed-form expression for a function’s derivative, or extra evaluations to calculate that derivative; the secant method is similar but doesn’t require derivatives. These methods take advantage of the fact that most functions become smoother when you narrow the interval of evaluation, so they look more linear, and the higher-degree components of their polynomial approximation become smaller.
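For comparison, here is a minimal sketch of the secant method (my own illustration, not from the article; the starting points are arbitrary, and convergence is not guaranteed in general):

```python
import numpy as np

def secant(f, x0, x1, iters=20):
    # secant method: Newton's update with the derivative replaced by
    # the slope of the line through the two most recent points
    f0, f1 = f(x0), f(x1)
    for _ in range(iters):
        if f1 == f0:
            break   # flat secant line; cannot make further progress
        x0, f0, x1 = x1, f1, x1 - f1 * (x1 - x0) / (f1 - f0)
        f1 = f(x1)
    return x1

root = secant(lambda x: np.tan(x) - x - 0.1, 0.5, 0.7)
```

Note that no bracket is maintained here; a bad pair of starting points can send the iterates far from the root, which is exactly why hybrid methods fall back on bisection.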
Brent’s method switches back and forth between these methods based on some rules he proved would guarantee worst-case convergence.
While looking for a good explanation of Brent’s method, I stumbled across a reference to a newer algorithm developed by Tirupathi Chandrupatla in 1997. Chandrupatla is a professor of mechanical engineering at Rowan University in southern New Jersey; this algorithm appears to be mentioned only in his original article published in 1997 in an academic journal, and described second-hand in a book on computational physics by Philipp Scherer. Scherer’s description is very usable, and includes comparisons between Chandrupatla’s method and several other root-finding methods including Brent’s method, but it doesn’t give much background on why Chandrupatla’s method works. I haven’t been able to access Chandrupatla’s original article, but the description in Scherer’s book is easy enough to implement in Python and compare to other methods.
Chandrupatla’s method is both simpler than Brent’s method, and converges faster for functions that are flat around their roots (which means they have multiple roots or closely-located roots). Basically it uses either bisection or inverse quadratic interpolation, based on a relatively simple criterion. Inverse quadratic interpolation is just quadratic interpolation using the y-values as inputs and the x-value as output.
Here’s an example of inverse quadratic interpolation. Let’s say we have our function \( f(x) = \tan x - x - 0.1 \) and we have three points \( (x_1, y_1) \), \( (x_2, y_2) \), and \( (x_3, y_3) \), with \( x_1 = 0.5, x_2 = 0.65, x_3 = 0.8 \). We use Lagrange’s quadratic interpolation formula mentioned earlier, but swapping the x and y values:
$$ x = x_1 \left(\frac{y-y_2}{y_1-y_2}\cdot\frac{y-y_3}{y_1-y_3}\right) + x_2 \left(\frac{y-y_1}{y_2-y_1}\cdot\frac{y-y_3}{y_2-y_3}\right) + x_3 \left(\frac{y-y_1}{y_3-y_1}\cdot\frac{y-y_2}{y_3-y_2}\right) $$
And we just plug in \( y=0 \) to find an estimate of the corresponding value for x:
def iqi(f, x1, x2, x3):
    y1 = f(x1)
    y2 = f(x2)
    y3 = f(x3)
    def g(y):
        return (x1*(y-y2)*(y-y3)/(y1-y2)/(y1-y3)
              + x2*(y-y1)*(y-y3)/(y2-y1)/(y2-y3)
              + x3*(y-y1)*(y-y2)/(y3-y1)/(y3-y2))
    return g

def f(x):
    return np.tan(x) - x - 0.1

g = iqi(f, 0.5, 0.65, 0.8)
x3 = g(0)
print 'x3=',x3, 'f(x3)=',f(x3)

x = np.arange(0.5,0.8,0.001)
y = f(x)
plt.plot(x,y,'-',g(y),y,'--')
plt.plot([0.5, 0.65, 0.8], f(np.array([0.5,0.65,0.8])), '.', markersize=8)
plt.xlim(0.5,0.8)
plt.grid('on')
plt.plot(x3,f(x3),'g+',markersize=8)
plt.legend(('f(x)','inv quad interp'), loc='best')
x3= 0.629308756053 f(x3)= -0.00125220861941
We get a much more accurate estimate. (The reason we use inverse quadratic interpolation rather than quadratic interpolation is that the latter requires solving a quadratic equation numerically, which requires a square root operation.) It works very well for smooth functions that are mostly linear and a little bit quadratic. But sometimes it fails and bisection is a reliable backup. So here’s Chandrupatla’s algorithm; I’ve added a few comments to help explain some of it.
def chandrupatla(f, x0, x1, verbose=False, eps_m=None, eps_a=None):
    # adapted from Chandrupatla's algorithm as described in Scherer
    #
    # Initialization
    b = x0
    a = x1
    c = x1
    fa = f(a)
    fb = f(b)
    fc = fa
    # Make sure we are bracketing a root
    assert np.sign(fa) * np.sign(fb) <= 0
    t = 0.5
    do_iqi = False
    # jms: some guesses for default values of the eps_m and eps_a settings
    # based on machine precision... not sure exactly what to do here
    eps = np.finfo(float).eps
    if eps_m is None:
        eps_m = eps
    if eps_a is None:
        eps_a = 2*eps
    while True:
        # use t to linearly interpolate between a and b,
        # and evaluate this function as our newest estimate xt
        xt = a + t*(b-a)
        ft = f(xt)
        if verbose:
            print '%s t=%f xt=%f ft=%g a=%f b=%f c=%f' % (
                'IQI' if do_iqi else 'BIS', t, xt, ft, a, b, c)
        # update our history of the last few points so that
        # - a is the newest estimate (we're going to update it from xt)
        # - c and b get the preceding two estimates
        # - a and b maintain opposite signs for f(a) and f(b)
        if np.sign(ft) == np.sign(fa):
            c = a
            fc = fa
        else:
            c = b
            b = a
            fc = fb
            fb = fa
        a = xt
        fa = ft
        # set xm so that f(xm) is the minimum magnitude of f(a) and f(b)
        if np.abs(fa) < np.abs(fb):
            xm = a
            fm = fa
        else:
            xm = b
            fm = fb
        if fm == 0:
            return xm
        # Figure out values xi and phi
        # to determine which method we should use next
        tol = 2*eps_m*np.abs(xm) + eps_a
        tlim = tol/np.abs(b-c)
        if tlim > 0.5:
            return xm
        xi = (a-b)/(c-b)
        phi = (fa-fb)/(fc-fb)
        do_iqi = phi**2 < xi and (1-phi)**2 < 1-xi
        if do_iqi:
            # inverse quadratic interpolation
            t = fa / (fb-fa) * fc / (fb-fc) + (c-a)/(b-a)*fa/(fc-fa)*fb/(fc-fb)
        else:
            # bisection
            t = 0.5
        # limit to the range (tlim, 1-tlim)
        t = np.minimum(1-tlim, np.maximum(tlim, t))
Let’s give it a spin to find \( \sqrt{2} \):
def tracker(f, a=0):
    ''' decorates calls to f(x) to track each call '''
    i = [0]
    def g(x):
        i[0] += 1
        y = f(x)
        print "i=%2d, x=%f, y=%f, err=%g" % (i[0], x, y, y-a)
        return y-a
    return g

chandrupatla(tracker(lambda x: x**2, 2), 1, 2, verbose=True)
i= 1, x=2.000000, y=4.000000, err=2
i= 2, x=1.000000, y=1.000000, err=-1
i= 3, x=1.500000, y=2.250000, err=0.25
BIS t=0.500000 xt=1.500000 ft=0.25 a=2.000000 b=1.000000 c=2.000000
i= 4, x=1.409524, y=1.986757, err=-0.0132426
IQI t=0.180952 xt=1.409524 ft=-0.0132426 a=1.500000 b=1.000000 c=2.000000
i= 5, x=1.414264, y=2.000143, err=0.000143176
IQI t=0.052394 xt=1.414264 ft=0.000143176 a=1.409524 b=1.500000 c=1.000000
i= 6, x=1.414214, y=2.000000, err=-1.38041e-08
IQI t=0.010679 xt=1.414214 ft=-1.38041e-08 a=1.414264 b=1.409524 c=1.500000
i= 7, x=1.414214, y=2.000000, err=8.88178e-16
IQI t=0.000096 xt=1.414214 ft=8.88178e-16 a=1.414214 b=1.414264 c=1.409524
i= 8, x=1.414214, y=2.000000, err=4.44089e-16
IQI t=0.000000 xt=1.414214 ft=4.44089e-16 a=1.414214 b=1.414214 c=1.414264
i= 9, x=1.414214, y=2.000000, err=-2.88658e-15
IQI t=0.000000 xt=1.414214 ft=-2.88658e-15 a=1.414214 b=1.414214 c=1.414214
i=10, x=1.414214, y=2.000000, err=-4.44089e-16
IQI t=0.866667 xt=1.414214 ft=-4.44089e-16 a=1.414214 b=1.414214 c=1.414214
It worked! The first iteration was handled by bisection, but after that we ended up using inverse quadratic interpolation to converge quickly. It looks like I gave it poor termination criteria for the eps_m and eps_a numbers; it gets to nearly machine precision at iteration 7, and then mumbles around before giving up. And here’s Brent’s method:
scipy.optimize.brentq(tracker(lambda x: x**2, 2),1,2)
i= 1, x=1.000000, y=1.000000, err=-1
i= 2, x=2.000000, y=4.000000, err=2
i= 3, x=1.333333, y=1.777778, err=-0.222222
i= 4, x=1.419048, y=2.013696, err=0.0136961
i= 5, x=1.414072, y=1.999598, err=-0.000401762
i= 6, x=1.414213, y=1.999999, err=-6.85547e-07
i= 7, x=1.414214, y=2.000000, err=1.1724e-13
i= 8, x=1.414214, y=2.000000, err=-2.71294e-12
Let’s look at a function that’s nasty to solve because of its flatness, and requires bisection for the first few steps:
chandrupatla(tracker(lambda x: np.cos(x) - 0.999),-0.01,0.8,verbose=True)
i= 1, x=0.800000, y=-0.302293, err=-0.302293
i= 2, x=-0.010000, y=0.000950, err=0.00095
i= 3, x=0.395000, y=-0.076003, err=-0.0760034
BIS t=0.500000 xt=0.395000 ft=-0.0760034 a=0.800000 b=-0.010000 c=0.800000
i= 4, x=0.192500, y=-0.017471, err=-0.017471
BIS t=0.500000 xt=0.192500 ft=-0.017471 a=0.395000 b=-0.010000 c=0.800000
i= 5, x=0.091250, y=-0.003160, err=-0.00316039
BIS t=0.500000 xt=0.091250 ft=-0.00316039 a=0.192500 b=-0.010000 c=0.395000
i= 6, x=0.040625, y=0.000175, err=0.000174918
BIS t=0.500000 xt=0.040625 ft=0.000174918 a=0.091250 b=-0.010000 c=0.192500
i= 7, x=0.065937, y=-0.001173, err=-0.00117309
BIS t=0.500000 xt=0.065937 ft=-0.00117309 a=0.040625 b=0.091250 c=-0.010000
i= 8, x=0.044281, y=0.000020, err=1.97482e-05
IQI t=0.855558 xt=0.044281 ft=1.97482e-05 a=0.065937 b=0.040625 c=0.091250
i= 9, x=0.044733, y=-0.000000, err=-3.38295e-07
IQI t=0.020847 xt=0.044733 ft=-3.38295e-07 a=0.044281 b=0.065937 c=0.040625
i=10, x=0.044725, y=0.000000, err=5.85319e-10
IQI t=0.016787 xt=0.044725 ft=5.85319e-10 a=0.044733 b=0.044281 c=0.065937
i=11, x=0.044725, y=-0.000000, err=-4.44089e-16
IQI t=0.001727 xt=0.044725 ft=-4.44089e-16 a=0.044725 b=0.044733 c=0.044281
i=12, x=0.044725, y=0.000000, err=0
IQI t=0.000001 xt=0.044725 ft=0 a=0.044725 b=0.044725 c=0.044733
scipy.optimize.brentq(tracker(lambda x: np.cos(x) - 0.999),-0.01,0.8)
i= 1, x=-0.010000, y=0.000950, err=0.00095
i= 2, x=0.800000, y=-0.302293, err=-0.302293
i= 3, x=-0.007462, y=0.000972, err=0.000972156
i= 4, x=0.396269, y=-0.076492, err=-0.0764924
i= 5, x=-0.002396, y=0.000997, err=0.00099713
i= 6, x=0.196937, y=-0.018329, err=-0.0183294
i= 7, x=0.007889, y=0.000969, err=0.000968885
i= 8, x=0.102413, y=-0.004240, err=-0.00423958
i= 9, x=0.025472, y=0.000676, err=0.000675605
i=10, x=0.060410, y=-0.000824, err=-0.000824129
i=11, x=0.041211, y=0.000151, err=0.000150947
i=12, x=0.045038, y=-0.000014, err=-1.40481e-05
i=13, x=0.044712, y=0.000001, err=5.69923e-07
i=14, x=0.044725, y=0.000000, err=1.98721e-09
i=15, x=0.044725, y=-0.000000, err=-1.9984e-15
i=16, x=0.044725, y=0.000000, err=4.27436e-14
We can even write a function to investigate the rate of convergence using graphs like those given in Scherer’s book:
def show_converge_brent_chandrupatla(f, a, b, title=None):
    def track_history(f):
        xh = []
        def g(x):
            xh.append(x)
            return f(x)
        return g, xh
    fbrent, xbrent = track_history(f)
    fchand, xchand = track_history(f)
    scipy.optimize.brentq(fbrent, a, b)
    chandrupatla(fchand, a, b)
    xbrent = np.array(xbrent)
    xchand = np.array(xchand)
    fig = plt.figure()
    ax = fig.add_subplot(1,1,1)
    for x,s in ((xbrent,'.'),(xchand,'+')):
        ax.semilogy(np.arange(len(x)), np.abs(f(x)), s)
    ax.legend(('Brent','Chandrupatla'), loc='best')
    ax.set_xlabel('iteration')
    ax.set_ylabel('error')
    if title is not None:
        fig.suptitle(title, fontsize=18)

show_converge_brent_chandrupatla(lambda x: x*x-2, 0, 2, '$x^2-2$')
show_converge_brent_chandrupatla(lambda x: np.cos(x)-0.999, -0.01, 0.8, '$\\cos\ x - 0.999$')
show_converge_brent_chandrupatla(lambda x: (x-1.7)**17, 0, 2, '$(x-1.7)^{17}$')
Chandrupatla’s method is either comparable to Brent’s method, or (in the case of multiple roots or local flatness) converges faster.
In fact, I would have focused this article completely on Chandrupatla’s algorithm… except that I don’t understand the part about \( \xi \) and \( \Phi \) which picks whether bisection or inverse quadratic interpolation is used. Boo. Not completely satisfying.
So you get my maximum-interpolation algorithm instead.
Summary
Today we looked at two algorithms that use Lagrange’s formula for interpolating a quadratic polynomial to three given points:
an algorithm that uses quadratic interpolation to give a better estimate of the maximum of regularly-sampled data (for good results, the data must be smooth and have enough samples so that near the maximum, it has low content of degree 3 or higher — do not use digitized samples from an ADC for this!)
Chandrupatla’s method, a relatively new algorithm for finding roots of functions of one variable. It is underdocumented and underappreciated, and appears to be simpler than Brent’s method, with equal or better rate of convergence.
For more information on root finding algorithms, here are some references:
Numerical Recipes, by Press, Teukolsky, Vetterling and Flannery. You should have one of these books (pick your favorite implementation language) on your bookshelf. The algorithm descriptions are great. The code is very difficult to read, using lots of one- or two-letter variable names with cryptic Fortran-style designs, but it’s a start.
Cleve Moler’s Cleve’s Corner series on the “Zeroin” algorithm designed by T. J. Dekker, improved by Brent, and incorporated into MATLAB’s fzero
Richard Brent’s posting of the old out-of-print edition of his book Algorithms for Minimization Without Derivatives in PDF format. Brent’s method for finding roots starts on p. 47. Warning: this is a rigorous math text, so it has lots of theory, and it’s old enough that the sample algorithm is implemented in ALGOL 60.
Previous post by Jason Sachs:
The Dilemma of Unwritten Requirements
Next post by Jason Sachs:
Margin Call: Fermi Problems, Highway Horrors, Black Swans, and Why You Should Worry About When You Should Worry
We begin with an update of the Java licensing situation brought up last month. Then we'll dive into reader questions.
Java licensing update
The Java licensing hullabaloo that we discussed
drags on, but the major question of Sun's intentions about Java's openness has now been answered.
Here's a quick recap of the situation: Due to mounting concerns over Sun's licensing terms, the folks doing the non-commercial ports of Java to "fringe" platforms like Linux pulled the plug on the port of the Sun JDK v. 1.0.2.
Since last month, the Sun folks have described this occurrence as a case of confusion between the non-commercial and commercial licenses. The Sun folks then explained their position, saying:
"...the bottom line is that JavaSoft welcomes and encourages the distribution of non-commercial ports, and we are sorry that any confusion existed on this issue. It seems our fault existed in not responding quickly enough to your diligent inquiries for further information."
The above statement does not put the matter to rest entirely; the porters are waiting for Sun's response on a few issues. As of this writing, Sun still has not given explanations. Because it appears that these relatively minor issues will be resolved reasonably, work on the port itself has been resumed. The port will not be released until the issues are finally resolved.
The bottom line is that it looks like Sun is relatively serious about seeing that the "openness" of Java is preserved. Yea!!!
List broken?
Question: The following code works under the Appletviewer but draws nothing under Netscape. What's up?
import java.applet.Applet;
import java.awt.List;

public class simple extends Applet {
    public void init() {
        List foo = new List(4, true);
        this.add(foo);
    }
}
Well, I would say that it is a question of interpretation of what the abstract windowing toolkit (AWT) is supposed to do in this situation (where the created list is empty).
The applet is there and is running. Under Netscape 3.0, if the list does not have any entries then it will not build-out/draw the list.
One trick could have been to put something like " " as the only item in the list and then delete it after the list has been built. Unfortunately, in this case and any of the simple variations thereof, it will only build out a list that has one visible element.
Also note that it only builds out min(the number of visible rows [e.g., 4], the number of items actually added), so in this case you will have to call foo.addItem(" ") four times! But note that the list will not have a scrollbar; if you want it to build out with a scrollbar initially, you should use n+1 (e.g., 5) items instead.
At this point, if you really want to have an empty list, you will have to do something more complex.
Serial port access
Jim Field asks: How do you get a Java application to send data out the serial port (COM1 or 2) of the PC?
Since this sort of low-level system access is not portable, there is no portable Java way to do it. You will need to resort to writing Java native methods -- that is, C code which does the actual work. You write Java "wrappers" for the C code to make it available to your Java code.
A tutorial on writing native methods is available from the online version of Sun's The Java Tutorial by Mary Campione and Kathy Walrath.
Page-hit counters
Mark Roth asks: I've been hunting high and low for a Java script or Java page-based hit counter. My system administrator doesn't want anything of this in his cgi-bin directory.
I guess I'm confused by the situation that you are in. If your system administrator will not allow you to run programs on the server side, then how would you run a Java application to update the hit-counter file?
A Java applet would have to connect to some running program on the server that would do the actual page-hit counter update and reporting. You can find some examples of this via the Gamelan Java Directory under the Networks resources in the Networks and Communications section.
Page-hit counters and file locking
Jeffery Anderson asks: How would you implement file locking in Java? For instance, I want to create an access counter for my Web site using Java, and I want to lock the file containing the number of hits to prevent it from being over-written.
Well, I can think of at least three methods. If you want to work just at the Java level then you could create a regular file using java.io.File.File() as a file system-level semaphore that your hit-counter file is in use. That is, if your hit-counter file is "hits.txt" then before opening it see if the lockfile (say, "hits.lck") exists using java.io.File.exists(). If the lockfile exists then one of your other threads is currently accessing it. If it does not exist then the hit-counter is available; you then create the lockfile and open the hit-counter file itself. When you're done updating the hit-counter file, close it and then delete the lockfile. While this is relatively simple to implement, you have to worry about things like race conditions, and you may not have any control over other programs that will access the hit-counter file. Basically, this would be a reasonable solution if your application is the only thing that touches the file, and only one copy of it is running at a time.
The other two options use the same underlying method to lock the file. The idea here is to write C code as either a stand-alone program or as a Java native method, which does the updating of the hit-counter file. The C code would then use system-level file locking calls such as lockf() or flock(). You would use the stand-alone program via a call to java.lang.Runtime.exec(). You would just invoke the Java native method. [See above for information about how to write native methods.]
Well, heretically speaking, in the particular case for which you want to use Java, I would just use some existing, well-tested, freely available Perl code to maintain the access counter. For existing Java solutions, see the previous question.
Asynchronous applet class loading?
Billy Quinn asks: I was wondering if you know of any way of preventing Java from loading all of the classes while an applet is being loaded, as opposed to loading the classes as you need them dynamically.
Well, I do not know of any tricks to do this for normal situations. You could probably write your own class loader but that would not be a good general solution since you can't control the class loader used by the browsers of folks using your applet.
Socket security exception
Tom O'Shea asks: When an applet on my server tries to connect to my mail host to send a message under Netscape 2.0, I keep getting this message:
#Security Exception: Socket.connect: "server name" -> "mail host name"
This is most likely due to the security restrictions in the Netscape browser. The security manager does not allow the applet to connect to any host other than the one from which it was retrieved. So, you need to make your applet use the mail system on your Web server instead.
Tom also notes: It works fine when running under Symantec Cafe Applet Browser.
That is not surprising. The various applet viewers/browsers are basically developer tools and therefore tend to be the most lenient on security issues. For information on the security policy of a specific browser, you should also check out the browser documentation. Alas, that documentation is not always all that helpful. The comp.lang.java.security newsgroup is a good resource on Java security.
Inter-applet communication update
A quick update on the inter-applet communication discussion from last month. The static data member "trick" that was used to communicate between applets is valid. The issue for Netscape revolved around some security problems with the old, relatively loose definition of how the different classes would be able to interact. Netscape in particular will be making this "trick" work again but with some constraints to make things reasonably secure.
Learn more about this topic
- Netscape.
- Gamelan.
- Sun's Java Native Method Tutorial
- Sun's Java Source Release.
- Linux Java
- JOLT Project
Freedom or Power Redux 309
Ed. note - a brief response to Tim. A) my name isn't Timothy. (I know, I know, we all look alike. :) And B) I was trying to say pretty much what O'Reilly is saying - that all licensing, including the GPL, is an expression of power over what other people can do with the software. Hence the term "all licensing". If there were no copyright whatsoever on computer code, no intellectual property considerations at all, perhaps we could approach the state of true freedom. In the meantime, the GPL is a good way to place code firmly into a state where it is mostly free - you are free to do anything with GPL code except take it out of its free state. As far as restrictions go, this one is infinitely more palatable than most of the powers that software licensing seeks to exercise over software users.
As a more general point, I take issue with O'Reilly's description of copyright law as a compromise between creators and users. There's absolutely no evidence that the rights of users are considered when copyright laws are made. All copyright law changes made in my lifetime, nearly all copyright law changes ever, have been expansions of copyright law - if it's a compromise, it's an extraordinarily one-sided one. (I suppose you could describe a mugging as a compromise between the mugger and the little old lady over rights to her purse.) Copyright law is more accurately described as a compromise between copyright holders and copyright holders. Other descriptions are both inaccurate and do a disservice to efforts to reform the laws.
O'Reilley : RMS :: Libertarianism : Socialism (Score:3, Insightful)
Unfortunately, the two viewpoints are irreconcilable. One values the rights of the individual over the needs of the Free Software world, and one values the needs of the Free Software world over the rights of the individual. RMS promises that everyone will have the right to see the code they're running, and that right will be enforced by a society who accepts the GPL. O'Reilley promises that everyone will have the right of self-determination as an author, as long as the GPL is not mainstream. The problem here is that the realization of both visions is mutually exclusive.
So, to these men, I say: drop it. Let the chips fall where they may. Let the people decide which license should govern them. It's nothing short of a vi vs. emacs or Christianity vs. Islam battle, and neither side stands a chance at winning. Let the users decide.
~wally
Re:O'Reilley : RMS :: Libertarianism : Socialism (Score:2)
Re:O'Reilley : RMS :: Libertarianism : Socialism (Score:5, Insightful)
Here is a symbolized version of this debate and why it is pointless. Tim says: x=1, y=2, therefore x+y = 3. FSF says: x=5, y=5, therefore x+y = 10. Instead of discussing the original assumptions about the values of x and y, this debate is over the value of x+y where each side has chosen its own values for x and y.
Tim says in this log: "My goal is to see as much good software created as possible."
RMS/FSF says: ." Note: they do not say "Software deserves to be good no matter what."
These assumptions may spring a priori from the moral and aesthetic convictions held by each side in this debate, but until they agree on assumptions, arguing the consequents is fruitless.
Re:O'Reilley : RMS :: Libertarianism : Socialism (Score:3)
That's an interesting perspective, but it's wrong. What is this "Free Software world" you're talking about?
The goal of the FSF is very much to increase individual rights, by calling into question the validity of a system that allows a few individuals to limit the rights of many individuals.
Sometimes, instead of saying 'many individuals', one might say 'society'. From this, the word association football rampant in this forum jumps to 'socialism', and from there to 'RMS is a communist'. This doesn't even make a lick of sense. Remember, it's the beneficiaries of copyright and patent law who are asking for state-sponsored support, not the other way around.
Re:O'Reilley : RMS :: Libertarianism : Socialism (Score:3, Insightful)
So why do you support capitalism again? Or do you?
And before I get modded down as a troll (whoopty), I do mean this as a serious question. You are using the same rhetoric that communists have used against capitalism since communism was born.
As far as it goes, anything short of a fully participatory democracy is a case of a few individuals limiting the rights of many individuals (because, despite the ideal of my representative being beholden to me the constituent, s/he isn't really). So why are you wasting your time in the small backwater of software development and licensing, when you could be out advocating revolution to TRULY free us all?
Re:O'Reilley : RMS :: Libertarianism : Socialism (Score:4, Insightful)
The goal of the FSF is very much to increase individual rights, by calling into question the validity of a system that allows a few individuals to limit the rights of many individuals.
And from the above poster:
You are using the same rhetoric that communists have used against capitalism since communism was born.
Surely the staunch republicans of the USA would say that a reduction in government and a promotion of individualism is exactly the same goal as the FSF in this respect, namely the promotion of the individual over that of some limited set that govern.
This communist-capitalist debate strikes me as being rather meaningless because each camp claims the other is some extreme - the Rand followers would say 'the FSF is communism, we should be allowed to do whatever we want as individuals', and the FSF followers would reply 'the FSF is republicanism because we are promoting the needs of everyone against some governing body [meaning large monopolistic software corporations that reduce freedom]'.
The truth is, Richard Stallman doesn't want to be hindered by not being able to fix his programs when they go wrong, and he hates it. He hates it so much that he doesn't want anyone else to have this problem. This is not the same goal as no license, which is the maximum freedom possible. Stallman doesn't want true freedom, because true freedom could take away from his goal of ALWAYS being able to get inside and sort a program out if he wanted to do something that wasn't anticipated by the developers. True freedom on the part of the software company permits one to reduce people's freedom with regard to WHAT THEY HAVE DONE. This may be morally wrong to some people, because they don't have freedom with the creations of others. This is what Stallman wants.
Bit of a ramble.
thenerd.
Re:O'Reilley : RMS :: Libertarianism : Socialism (Score:2)
I should start by saying I don't (or shouldn't
In a few short posts, we've created a conversation that encompasses democracy, socialism, communism, capitalism, and copyright and patent law. I'm not going to even attempt to tie all of this together.
But to answer your question: personally, I don't see any contradiction between the goals of the FSF and those of the free market. Do you? Copyright and patent protections might benefit a particular manifestation of a market for software, but I really can't see that it's the only way, or the best way, to promote either social progress, or progress in the art of writing good software.
In a different slashdot discussion, someone made a comparison between math theorems and software. I don't know that there are any mathematicians hawking theorems for cash, but they seem to be produced in great quantity nonetheless. Like math theorems, if we are to make any progress in the field of software, we must build on the work of others. It seems to me that a system that values financial reward above all else can only get so far. In a system where everyone hoards their knowledge, everyone must always be reinventing the wheel.
People may not get paid for selling math theorems directly, but they can be compensated in other ways. Tenure comes to mind. Likewise, there are reasons people write software other than because they want to be in the business of selling it directly to consumers for profit. Donald Becker is a salaried employee of NASA and wants to network his Linux PCs, so he writes NIC drivers. And shares them. I'm sure NASA is delighted to have such a resource at their disposal. How many copies of the Linux kernel has Linus sold now?...
In short, we don't need our government to create an artificial shortage of programming knowledge in order to advance the market for good software.
Re:O'Reilley : RMS :: Libertarianism : Socialism (Score:2)
The fact remains that things I've created are the things I've created, and I can give them away or sell them under whatever terms I like. And if I don't like the terms of the GPL, but it's all that's available, I won't release them. How does that benefit anyone?
The comparison to theorems in math is specious. It might have been nice if things had turned out that software was treated like theorems, but the fact is that's not the world we live in. You can't change that by decreeing it to be so, and attempts in spite of that will fail.
Re:O'Reilley : RMS :: Libertarianism : Socialism (Score:2)
Why is this? I can understand that many people would (and do) choose a different licence when given the choice. But why would you keep something to yourself rather than release it under the GPL? You wouldn't make money either way, and the GPL doesn't impose any future restrictions on you as the copyright holder.
Re:O'Reilley : RMS :: Libertarianism : Socialism (Score:2)
I said release software, not code.
Re:O'Reilley : RMS :: Libertarianism : Socialism (Score:2)
Ah. That makes more sense to me. It was I who misunderstood you.
Re:O'Reilley : RMS :: Libertarianism : Socialism (Score:3, Informative)
In a few short posts, we've created a conversation that encompasses democracy, socialism, communism, capitalism, and copyright and patent law. I'm not going to even attempt to tie all of this together.
It's ok, I will.
Democracy is a pipe dream, just like communism and socialism. Communism is an ideal where all goods and services produced in an economy are communal, or shared; democracy intends to share the responsibility of governing a nation but most people just don't want it or are too stupid to be trusted with it. Socialism is more of a philosophy of the government taking care of its people and due to far-right rhetoric in the USA, has become synonymous with communism in our vernacular and doesn't apply here.
Capitalism begets copyright and patent law, to ensure that ideas are worth as much as finished product. In a communist state, nobody's work would need any protection because all work is for everyone, not just the guy that made it.
Limiting the duration of copyrights for software is a wise move. The ideas in a book or piece of music are worth something 40 years later - software isn't.
In a different slashdot discussion, someone made a comparison between math theorems and software.
That was me, hello there. I've got no problem with people wanting to give their work away. I've got a problem with people being FORCED to give their work away, which is what the GPL says - if one piece of this software is touched by the GPL, it's all touched by the GPL and must be free. It's like the brown acid of licenses, you take it once and you're screwed.
Re:O'Reilley : RMS :: Libertarianism : Socialism (Score:2)
If you don't want to abide by the license terms, don't use the software. Maybe you could get one of the Libertarians to sell you an alternative. Heck, they might even donate a copy to you if they're in a good mood that day.
Re:O'Reilley : RMS :: Libertarianism : Socialism (Score:2)
No, the GPL says nothing about what you might charge for your work. It simply abridges a developer's right to dictate the terms of use.
Re:O'Reilley : RMS :: Libertarianism : Socialism (Score:2)
Now, realistically speaking, after a few years only the most rabid control freaks continue to care about how their old code is used, so it's really all a moot point anyway except in the short term (1-5 years). Who wants 5 year old code bad enough to infringe a copyright to use it? By then whatever technique it used that makes it so special is probably common knowledge; if it's a device driver then better hardware probably exists. In the rare cases where the 5 year old code is the best solution, then why would the programmer/corp care? 5 years is the lifetime of 3 product lines in the tech business.
Thus, I think this whole thing is just a colossal waste of energy. Either give it away or don't, and stop trying to nitpick the world to pieces, life's too short for that shit.
Re:O'Reilley : RMS :: Libertarianism : Socialism (Score:2)
Off topic
To supplement this line: a democracy is not about following the wishes of the majority, but about compromising in favour of a majority while protecting the rights of all minorities.
This is the thin dividing line between democracy and (political) communism.
Re:O'Reilley : RMS :: Libertarianism : Socialism (Score:3, Insightful)
Remember, it's the beneficiaries of copyright and patent law who are asking for state-sponsored support
And while the current law, thanks to a corrupt congress, equates "beneficiaries of copyright law" with "corporate interests", the fact is that EVERY INDIVIDUAL is intended to be a potential beneficiary of copyright law. If you are a creator of potentially copyrighted material, you are one of these beneficiaries.
Again the comparison to capitalism vs. communism--each of us is a potential entrepreneur (which of course I can't spell off the cuff). At which point the protections of business are suddenly the protections of the individual too.
Certainly there are avenues for abuse, and the way our system lets money unduly influence it today is a really big problem. But the solution isn't to ban money, nor to take all protections (including the reasonable ones) away from business. It's to fix the system so money can't be the corrupting influence.
To mandate GPL as the only valid license would take away my individual rights as an author of software. And this is exactly the same place that communism has largely failed in any major attempt to implement it--attempts to dictate the good of the many at the expense of the few are doomed to failure on the rocks of human nature. You cannot legislate or impose by any power (including the power to force me to use GPL for my work) individual good behavior before the fact.
Re:O'Reilley : RMS :: Libertarianism : Socialism (Score:2)
Please check if you are using any software to which I have contributed (my time and effort, at no charge). If you are doing so, I would appreciate you coming over to do some of my gardening some time.
You see, I'm not so hot on the getting down and dirty side of life, but I'm a decent hand at coding. Either of which represents an amount of time and effort.
So if I choose to code up something (using my time and effort) and give it away at no charge, I won't appreciate being told under what rights I have to give it away.
In the same way as you doing your gardening (using your time and effort) should in no way imply that you have a requirement to do my gardening, just because you did yours for free.
What I'm trying to say is: I have rights over my creations. I have the right to make it closed source and sell it. I have the right to destroy it and never let anyone know about it. I have the right to make it freely available to anyone, for any purpose. And I have the right to put it under the GPL.
And unless I give someone the right, they have no right to tell me what to do with my property. Whether you consider IP property or not is actually irrelevant - my creation is the product of my time and effort.
So if you don't like my right (when applied to my "property") to restrict your rights
... then kindly remember that you don't have the right to restrict my rights when your capabilities are concerned.
See you in the garden on Saturday
...
Code. (Score:3, Insightful)
Attempts to paint the FSF as communist fail to address that they are talking about intellectual property, not real goods; additionally, they fail to realize that the FSF focuses its efforts on motivating developers to release code under the GPL, rather than coercing them to do so. To describe them as communist would be akin to describing the United Way as Stalinist.
Re:Code. (Score:2)
Contract law won't help you, either, since code "in the wild" can be used by individuals who haven't explicitly entered into a contract.
Check out "Intellectual Property Rights Viewed As Contracts [freenation.org]", an excellent discussion of IP in an anarcho-capitalist society. It addresses your "wild" code.
Re:Code. (Score:2)
If those works have value, then the people who derive value from those works will support it, with or without copyright law. Much software is already written by commission or by bid, some of which is even released into the GPL later. Absent the artificial constraints created by trying to package software as if it were a shipped good, the service of writing software will be funded the way other services are: if there's a need, someone will pay for the service in advance. If no one does, then there's no need. If an industry needs something, industry consortiums can (and already do) come together to fund the development of a package. Academia can be (and already is) a viable source for supporting people to write software.
Re:Code. (Score:2)
Go take any recipe or joke book on the shelves of B&N, scan it, and post it on the web. Lawyers will be talking to you within 24 hours.
if there's a need, someone will pay for the service in advance. If no one does, then there's no need.
True enough, but it doesn't tell the whole story. Your typical service is not unlimited or indefinite. Take your bookkeeper as an example. Let's say you pay him $1,000 for his services. That $1,000 covers *you* for a *limited* amount of time. No one else is able to use that same bookkeeper without also paying. You can't use that same bookkeeper in the future without paying again.
Software doesn't work that way. You pay a consultant $1,000 for a program and without some notion of software ownership, everyone in the world can get that program for free. I won't argue whether that's good or bad, just that you can't compare development to other kinds of services.
Re:O'Reilley : RMS :: Libertarianism : Socialism (Score:2, Insightful)
Re:O'Reilley : RMS :: Libertarianism : Socialism (Score:2)
Well said!
It is ultimately up to the programmer to decide how (s)he wants to see his/her code used. Fortunately for those of us who make a living from this sort of thing, RMS can't force us to give away everything we do for free
;)
Re:O'Reilley : RMS :: Libertarianism : Socialism (Score:3, Interesting)
O'Reilley : RMS
O'Reilley supports the rule of copyright law over software. This is not libertarianism. RMS argues against copyright law covering software [gnu.org], this is a much more libertarian viewpoint than O'Reilly's. Socialism recommends government ownership and control of key means of production. This has nothing to do with what RMS is arguing for.
I would redo the analogy as
O'Reilly : RMS
Unfortunately, the two viewpoints are irreconcilable. One values the rights of the individual over the needs of the Free Software world, and one values the needs of the Free Software world over the rights of the individual.
Not quite right. Both of them feel they have the best interests of the Free Software world in mind.
The irreconcilable difference in viewpoints is simple:
* Tim O'Reilly values the rights of the developer over the rights of the user.
* RMS values the rights of the user over the rights of the developer.
I, as a developer, feel that RMS's viewpoint is the healthier one in the long run. Many developers understandably disagree. What baffles me is how many non-developers seem to prefer the rights of developers over the rights of users.
So, to these men, I say: drop it. Let the chips fall where they may.
It is unlikely that either will drop it. RMS advocates Free Software both as a living and as an ethical calling. Tim O'Reilly has fears for his personal livelihood and those of the people whose books he publishes.
Re:O'Reilley : RMS :: Libertarianism : Socialism (Score:2)
Socialism does NOT recommend government control over the means of production. It recommends that each individual has as much right to the means of production, and the goods produced, as every other individual. Governmental control is merely a means to that end, and a historically demonstrable bad one. In a true socialist society, there would be no need for government to have any control over the economy whatsoever, including taxation.
Re:O'Reilley : RMS :: Libertarianism : Socialism (Score:2)
What? Most Libertarians quite firmly believe that it is appropriate to exercise property rights over one's intellectual output, and that people are free to contract their rights away to others in any way they choose, which necessarily (assuming the precept that intellectual property is property) includes things like software licensing.
If you don't think Libertarians believe in IP rights, you need to spend a little more time reading [lp.org] about the things they believe.
You'll note that everything on the web page is copyrighted, not copylefted.
The Libertarian Party doesn't have an official position on the GPL, but I can assure you that if they did, it wouldn't be in favor of mandating its use.
Re:O'Reilley : RMS :: Libertarianism : Socialism (Score:2)
Actually, Mr. O'Reilley's position is very compatible with libertarianism.
The libertarian view is that people are free to make whatever contracts they choose; Mr. O'Reilley is in favor of software developers using whatever license they choose.
RMS argues against copyright law covering software, this is a much more libertarian viewpoint than O'Reilly's.
First of all, you are wrong: RMS likes copyright law covering software. He hates the BSD license, or public domain software, or any other license that does not prevent someone from taking source code private again. He has decided that the GPL is the only acceptable license, and GPL depends in turn upon copyright. If there were no copyright, then effectively all software would be public domain, and nothing would stop anyone from making a few tweaks and releasing a product while keeping the source code a secret.
Also, RMS has stated publicly that he is not in favor of letting a software developer choose which license to use; use of the GPL should be mandated. This is far from a libertarian position!
RMS isn't opposed to developers being paid, but he wants to take away their ability to maximize their earnings with an appropriate choice of licenses. He once seriously suggested that government should collect a tax, and use the tax revenue to pay developers, to compensate developers for being forced to write only GPL code. This tax idea is a very socialist idea.
* Tim O'Reilly values the rights of the developer over the rights of the user.
* RMS values the rights of the user over the rights of the developer.
Almost correct. Mr. O'Reilly doesn't want to take away any rights from the developer. RMS has framed the terms of the debate as developer vs. user, but it really isn't that simple. More rights for the developer do not mean less rights for the user. The developer and the user aren't enemies!
Users have a large body of software to choose from. Some is free software, some is open-source, some is shared source, some is proprietary. People should be free to choose whatever software they like. Note that GPL software is doing very well, competing against proprietary software. We don't need centralized government control of software, and I for one don't want it.
I, as a developer, feel that RMS's viewpoint is the healthier one in the long run.
I, as a developer and as a libertarian, feel that RMS's desire for control over developers is not healthy. It's one thing to promote free software. I am all in favor of Linux continuing under the same license it has now, for example, and I would rather use Linux than a non-free OS. But I am not willing to use government to force all software to be released under the GPL.
RMS has said that the developer's ability to choose any license he or she wishes is actually an exercise of power over the users. This is a bizarre concept of power, and it is not a libertarian idea.
steveha
Re:O'Reilley : RMS :: Libertarianism : Socialism (Score:5, Insightful)
"Intellectual property" is a government granted monopoly. It's not compatible with the libertarian edict that that government which governs least, governs best. Property is defined by scarcity. Information itself is not scarce, it is the ability to create information that is scarce. Hence, in a truly free market, information would cost nearly nothing but the scarce commodity, the ability to create useful software, would be highly prized and sought after, and coders would refuse to deliver the goods unless they were paid in advance. But this is not a free market, and corporations (immortal, non-human, property-holding entities) can own information and keep humans from using it to make society better, or profit, or whatever.
Socialists would want to allow for communal ownership of everything. That means you can't sell information. That's not what the GPL says- it just says you can't sell it exclusively, just like you can't sell sunlight exclusively. The GPL is most definitely a libertarian document. The GPL attempts to correct the Government interference into the marketplace represented by copyright and patent law by accepting copyright and refusing the freedom-reducing privileges that go along with a government granted monopoly.
In a free market, all this would be unnecessary.
For extra credit, what inefficiencies are introduced into a market when the free flow of information is hindered?
Bryguy
Re:O'Reilley : RMS :: Libertarianism : Socialism (Score:3, Insightful)
Just to elaborate on your thoughtful words with a bit of incoherent rambling...
Interesting to note that RMS is careful to mention that he is speaking about software, not books, not music, etc. As a programmer, of course, I'm sure software freedom has more direct personal relevance to him. It's also an issue he can speak to with the confidence of someone who's "been there", so to speak.
Perhaps there is a real qualitative difference between software and music that warrants maintaining a distinction, however. Just to take a stab at it
Of course the same might be said of music (when is a sample more than a sample?), and to a lesser degree, literature. In literature, we have the practice of using other people's words, but in quotes, and giving attribution. I don't know that I feel as comfortable saying that "all music should be free" as I do saying "all software should be free", though. Kind of hard to put my finger on it. I just really haven't given it the same amount of thought.
Re:O'Reilley : RMS :: Libertarianism : Socialism (Score:2)
I believe letting the users decide is exactly what Tim O'Reilly is advocating. RMS does not believe that licenses other than GPL (and maybe a choice few others, but I'm not sure) are right. He does not believe you or I or anyone should have the right, or power as he puts it, to choose how to license our software.
Re:O'Reilley : RMS :: Libertarianism : Socialism (Score:2)
I believe he says this directly in his essay.
Not much wrong with the GPL in and of itself. (Score:3, Interesting)
The FSF also recommends that developers give the original copyrights to the FSF. You don't have to do that either. Basically, using the GPL does not morph a developer into a slack jawed Stallmanite.
It sounds like the GPL as used on the Linux kernel may be what you are looking for. The kernel developers also permit proprietary kernel modules but feel no obligation to maintain module compatibility across kernel releases. It is up to the proprietary vendor to track the kernel in that case. So you may or may not want to remove that addition as well.
Re:Not much wrong with the GPL in and of itself. (Score:2, Informative)
Can you please stop? (Score:4, Insightful)
Copyright is a brilliant compromise. It encourages people to make things available that they wouldn't otherwise, knowing that they still have some control over these things. Now, I grant freely that the huge extension of copyright duration works solidly against users - but other aspects of the law have done a very good job of balancing these things, such as the fair use rules.
In the absence of copyright, how exactly do you think games would get written? How would John Carmack earn a living?
Re:Can you please stop? (Score:2)
We're only talking about copyright for computer programming code here. Most games have significant non-code parts, i.e. artwork and so on. Games like Myst (to use an extreme example) would barely notice if their code was no longer copyrighted, as all the copyrighted artwork would still be there.
Now, that's a valid point. Carmack's games are an exception to the rule above. However, I've thought of this too. Read my argument here. [slashdot.org].
Finally, I object very strongly to the idea that some kinds of creative work should be unable to get the same protections as others. Programming is creative work, which, once done, is physically easy to reproduce the results of; we should protect it the same way we protect writing and music.
Wrong. Half-Life's engine is a licensed and modified Quake II engine. Unreal's engine is a built-from-scratch engine. Licensing the engine makes patent infringement a moot point; building your own engine doesn't do anything that Id could patent in the first place.
Re:Can you please stop? (Score:2)
Patents 101 -- A patent covers an idea, in any implementation. Take for example Amazon's 'one-click' patent -- B&N could write their ecommerce software any way they wanted to, but simply implementing it to support one-click confirmation places them in violation of Amazon's patent. Remember that -- concept, not code, is what is protected by patents. Actual code is what is protected by copyrights.
So to bring this back to the Quake/Half-Life/Unreal example, if id gained a patent on, say, 3d game architectures, then Unreal would run afoul of that patent -- even though they used no id code.
See the problem??
Re:Can you please stop? (Score:2)
Patents don't protect ideas. No intellectual property can be held in a simple idea. Patents do, however, come the closest.
Patents protect implementations of ideas. There are several kinds of patents; the ones that apply most commonly in computer software are patents on techniques, where the patented invention usually consists of a series of steps that the user has to take in order to perform the actions.
Also, let's be clear about what happens in the world of patents that engineers live in. If id gained a patent on, for example, true 3D game architectures (unlikely but possible, given their achievements in the past), and Unreal managed to develop an advance that was really important in the 3D world, they could patent the advance - without consent from id, and even without a license for the underlying patent of id's. id would then be forced to negotiate with Unreal, or be blocked from using the technique that Unreal developed.
You appear to be overly obsessed with code. I think algorithms are way more important. Monkeys can make code once the algorithm is worked out; perhaps this is where we disagree, and if so, prolly this argument is over.
:)
Re:Can you please stop? (Score:2)
Re:Can you please stop? (Score:2)
This is a problem of intellectual property law in general. We've all seen how copyright law gets manipulated to serve the interests of large corporations, patent law's no different. However, while I acknowledge this problem with both structures, I also think that the real problem is more of a structural problem with the legal system: the little guy can't get effective legal help.
As you point out, in the real world, none (or very little). In the copyright world, however, the main type of copyrighted code that's commercially available is binary executables. It's the binary executables that I'm against copyright protection for; I think that if a corporation decides to release its computer programs in binary form alone, it should be forced to seek patent protection for its programs. Releasing programs in binary form alone allows the copyright holder to claim intellectual property protection without making any reasonable disclosure of the article. This hits at the foundation of intellectual property; every other form of intellectual property requires some kind of reasonable disclosure in order to get protection. Why are binary programs different?
Re:Can you please stop? (Score:2)
Re:Can you please stop? (Score:2)
Okay. A couple things.
One: Patent law makes nobody a criminal. Copyright is the only form of intellectual property protection that intersects with the criminal law. Patent and trademark are purely civil matters. This is one of the big reasons why I'm against copyright on binaries; copyright in some respects is stronger than patent.
Two: I'm not sure what you mean by "overly broad." Patent law in some respects is narrower than copyright; the main one is the term.
Re:Can you please stop? (Score:2)
So I'd like to fix the laws. You might want to help.
Re:Can you please stop? (Score:2)
I hate to sound so reactionary, but I believe it's only a matter of time, especially with Ashcroft in power. So why protest weak laws and go to prison when I can pretend to be a good little sheep and then come out swinging when it's time? At least if I die then, it will be for a real cause, instead of massive rectal hemorrhage from being somebody's bitch.
Re:Can you please stop? (Score:3, Insightful)
When you are down in the dirt stuffing shells into cartridges and digging your best friend's teeth out of your thigh with a knife, please remember that there is a great deal less suffering brought about by getting involved in a political process rather than waiting for some kind of apocalyptic dream war fed by fantasy novels and jingoistic nationalist retroactive "history".
Get the hell out of your trench and get involved. A fatalistic refusal to participate in your own democracy is the most lazy and weak political stance anyone can take.
"[Guns] will secure more freedom than any laws." -- really? After your 'revolution', what are you and your buddies going to do for social structure? No laws, just shoot every third person who annoys you? I encourage you to go to Sierra Leone and research this idea -- but exercise first, cannibalism is back in style over there, and everyone needs to eat better.
Re:Can you please stop? (Score:2)
Correct on Half-Life (and the legion of other games that licensed one or another of the Quake engines).
Possibly incorrect on Unreal. Patents protect against reverse engineering. If you rebuild the program, using the same techniques, then you infringe on the patent.
Patents mostly protect the techniques used by the invention. If Unreal used a patented technique that Carmack or id software held the patent on, then they'd be hit with patent licensing fees.
The big difference here of course is the timeframe: 20 years, plus there's a requirement for public disclosure of the patent.
Re:Can you please stop? (Score:2)
What about the concept of the automobile? Each individual car is patented, but "vehicle with four wheels, auto-mobile" isn't.
Re:Can you please stop? (Score:2)
Once again, completely incorrect.
Cars are not patented. Individual technological advances that are used in cars are patented. For example, suppose somebody comes up with a technique that allows an automobile to be manufactured with a more responsive steering wheel ("power steering"). This technique can be patented.
Also, intent is totally irrelevant to patent law (unlike copyright, where intent is actually fundamental). If you use a patented technique, you're infringing the patent, regardless of whether you meant to or not, or even if you'd never heard of the patent.
An automobile is a sum of a large number of parts. I believe the internal combustion engine was the main patented article in the early days of the automobile, although I haven't done a patent search so don't quote me.
:) The ICE was the main advance that made the automobile possible; other parts of it, like wheels, chairs and steering, had been around for a long time.
In order for somebody to patent The Car, they'd have to come up with something where all the parts were new. This is pretty unlikely.
Re:Can you please stop? (Score:2)
You've been reading Slashdot too long.
What if I render a 3-d view of video game code? Is it art or code?
The other poster in this thread is right, the DURATION of copyright is inappropriate for many mediums, and ultimately leads to the death of the "art" that was created. Videogames & applications written in the 80's are largely extinct, thanks to unreasonable copyrights that prevent others from distributing old works.
Re:Can you please stop? (Score:2)
The rendering code is code.
The visual image (say you save it as a JPEG) is art.
If you don't save the visual image, it's not fixed & there's no copyright.
Pop a Myst disc in your micro sometime. All the .mov files that pepper the thing are art. The tiny little shreds of binary are code.
This isn't rocket science.
Re:Can you please stop? (Score:2)
Computer programs cannot be thought of without considering "similar" works like books. A program represents an investment of time, effort and (usually) money. The final product is of interest or value to a user. This makes a program commercially viable as a trade product.
Given that programs are necessary, to force all of them to be free and/or open source would infringe on the rights of a (large) number of individuals to earn a living. Basically: no commercial viability, not a career option, no programs. Bummer.
Contrary to (unpopular) opinion, source code is not like a book. Binary code is like a book: it's a finished, usable product. Should authors be forced to disclose their notes, the design of their plot, their character sketches? No. How about their "formula" for writing a captivating novel?
Points to ponder
...
Re:Can you please stop? (Score:2)
Source code is like a book.
I can read a book. I can page through it, and think about how the author uses various techniques to achieve the ends that are sought.
I might, on occasion, wonder about the author's thought process, but I know that there are probably notes that I'm not reading that the author made, but decided not to include directly in the book because they wouldn't help achieve the ends that the author wanted to achieve.
I can read source code. I can page through it, and think about how the author uses various techniques to achieve the ends that are sought.
I might, on occasion, wonder about the author's thought process, but I know that there are almost certainly bits of source that I'm not reading that the author made, but decided not to include directly in the source because they wouldn't help achieve the ends that the author wanted to achieve.
Okay, I admit, there are programmers (I'm infamous for this) who sometimes sit down and write a program straight up, all the way through, no edits. However, these are invariably very short programs.
I also write short stories the same way sometimes. No notes: the whole story is what I thought of when I typed it up.
Anything big, though, either in writing or in programming, and there are bits that don't go in.
Re:Can you please stop? (Score:2)
The developer releases to the escrow agent. It checks to see that it meets the criteria of the order; perhaps some reviewers would have to find it to be appropriately cool, or having certain features, or it would have to make a certain delivery date to avoid lower payment, etc.
If it all checks out, the escrow agent releases the game (which, because there's no copyright, is public domain) and pays the developer from the money it was holding. The agent itself might be paid from the interest on the money it held.
Personally, I'm not upset about copyright in general, but the present implementation is hardly balanced. It's great that games get written and developers earn a living, but these are probably less important goals than progress in the development of gaming, and the preservation of games for the future. (by letting people patch and port them independently, e.g. arcade ROMS)
You're hardly balancing if the only interests you're worried about are those of authors.
Re:Can you please stop? (Score:2)
Please go and get yourself a clue.
Re:Can you please stop? (Score:2)
Imagine an age of infinite reproducibility -- it shouldn't be too difficult to conceive. Would you pay $30 to download and skim a book from someone you have never heard of? I wouldn't. I'd just take it. Would you pay $30 to your favorite author when [s]he says [s]he is working on something new, and won't release it until [s]he makes $X amount of money? I would. You probably would, too. The concept is similar to that of paying a street performer, wherein you cannot actively monitor each consumer (each person on the street), but willing patrons can contribute to their own desire. The full write-up for the street performer protocol is available at [counterpane.com].
There are some valid attacks on this protocol, but yours is not one of them.
Re:Can you please stop? (Score:2)
Again, who would put money up? Why would they do such a thing??? There would be no game shops and no pretty boxes, because the product is in the public domain.
The Soviet Union tried a similar experiment with farming. Instead of peasants working as defacto slaves for some Nobleman, they would work in collective farms owned by the people.
This worked OK for a while, since larger farms combined with mechanization made better use of the land. But after a few years, things began to fall apart.
Your proposal makes a few assumptions:
- Developers are not motivated by self-interest.
- Organizations that produce software are happy with a small, fixed amount of compensation for their services (assuming anyone would pay) versus virtually unlimited compensation today.
- Developers are willing to provide a warranty for software that they do not own; guaranteeing that it will perform as specified, since the escrow system will withhold payment until everyone is happy.
- Consumers plan software purchases well in advance.
- Consumers are willing to sponsor developers for the "public good"
Your argument is a complete fantasy. As a whole, human beings act in their own self-interest. Society functions around commerce, not charity.
Re:Can you please stop? (Score:2)
The current system serves my needs quite well for video games. No "escrow system" will ever serve the same purpose. As is, each customer can individually say whether the game meets his needs, and only pays when the game is done. The escrow agent would be horribly overpowered, and could easily screw either or both sides.
Re:Can you please stop? (Score:2)
So the publisher can release a free demo, like id did with Doom and Quake.
The escrow agent would be horribly overpowered, and could easily screw either or both sides.
And your bank could decide to confiscate your account and keep your money. They don't, because it would be illegal, they'd get sued, and their reputation would suffer. Ditto for the escrow agents.
The current system serves my needs quite well for video games.
The current system has also resulted in draconian user-hostile laws like the DMCA. If all you're worried about is being able to buy games, then the Street Performer Protocol is not the best system. Looking at the larger picture, there are benefits you're not considering.
Re:Can you please stop? (Score:2)
Finally, there are really 3 parties to copyrights. There is the creator, the user/viewer/reader/listener, and the publisher. The publisher creates nothing, but does perform certain services in return for 90% of the money. This split seems rather unfair, especially as it has stayed constant even in the CD age when the production cost approaches zero. Copyright laws do include some balancing of the rights of the creators and publishers. For instance, unless it is a "work for hire" (which is a true corporate creation), the publisher doesn't own the copyright, but can only rent it for a limited time, eventually it goes back to the creator. But as corporations become more and more blatant at buying what they want from Congress, even this little bit of balance is in danger. Take what happened to musicians as a warning.
(For those born yesterday: some congressional staffer, just before leaving for a new career in the record industry, snuck a clause changing all music to "work for hire" into a bill at the end of session, and it was voted in without being read. Eventually the musicians got Congress to pass another bill giving them their rights back, since no corporation was willing to come out and defend their IP-grab in public. Yet. But AFAIK, every new song on any major-label CD published in one year belongs to the record companies for 95 years, never to the creators.)
US is not the only country... (Score:3, Interesting)
On the point that "all copyright changes" don't take into account the user this isn't the case in the EU where some changes have been done for that reason.
One issue that isn't often addressed is the cultural differences between countries that lead to different approaches being appropriate in different countries. The same is true within different parts of an organisation ("If I can't pay it ain't worth it" to "If it's free then it fits in my budget"). Licensing is about the _writer_ of the software or work, which may make sense in their environment but not in that of another. Thus a proprietary license and ownership but free distribution (eg Java) may make a lot of sense if it ties in with the aims of the program.
IMO Writers of a work have a right to say how it should be used; it is not for _users_ to say how it should be used, as it is not their effort that created it. That said, the Writer's right does not extend once the user's effort has been expended, whether that be by paying cash or by building upon the artefact.
If I buy a brick, I do not expect to pay a regular license for the house.
Cultural differences are just as important. If a certain practice seems strange or odd to you, it probably means that your approach seems odd to them. Basically tolerance is the important deal; being dictatorial makes you as much a fool as the guy you are arguing against.
Re:US is not the only country... (Score:2)
This issue of culture is (IMHO) an important factor working AGAINST the GPL in the open source arena.
I know several companies (I work for one) that either explicitly or implicitly will not use GPL or LGPL software in their development (think libraries here, not tools). Not using GPL is obvious, but LGPL is more subtle: you don't have the option to integrate the functionality directly into your application, even if you want to.
Many licenses also force you to distribute part of your application in source form (the open sourced libraries, not your own code) or to prominently mention the use of the library, which tramples on poor little corporate egos.
While I believe in giving credit where it is due (especially when you are leveraging effort done by others at no charge to you), most companies can't take the "bad press" or general perceived humiliation of admitting to using free code.
So some think this is good, because they aren't scabbing off your efforts. True. And you're entitled to that opinion.
But it is also bad. Open source is about getting more eyes seeing the code, and more hands working on it. Companies that I have had experience with have no qualms committing changes back to open source, as long as they can avoid the marketing problems it may create. So if your library is 90% good for a particular use, corporate time and money may just go to filling in their 10%, which they give back.
Even if it's only one in a hundred companies, it's worth it. And judging by the projects out there working on this basis... it's not that rare.
I liked this better the first time (Score:5, Insightful)
Re:I liked this better the first time (Score:2, Insightful)
*sigh*
Re:I liked this better the first time (Score:3, Insightful)
Re:what do you mean? (Score:3, Insightful)
Actually, one more factual error... (Score:5, Insightful)
That's a far cry from "the only thing you can't do is take away the freedom". It is a lie, and a willful one, to claim that you can take away the freedom of *ANY* free code. If I put code in the public domain, no one can ever make it unfree. They can make their own versions with whatever restrictions they want, but *MY* code remains free, forever. No other license can say as much.
Re:Actually, one more factual error... (Score:2)
A bunch of fanatics beating up an innocent bystander...
Sorry, but that wasn't a violation of the GPL...
You can be harassed for violating the GPL because you MIGHT have compiled a Linux kernel the same day you compiled commercial code..
Doesn't mean a thing...
If harassment were the law I'd be legally bound to buy the local newspaper
[Stupid phone sales]
If I put code in the public domain, no one can ever make it unfree
Yes they can and do...
The reason there are so many accidental GPL violations is that code theft is so natural in software development nowadays that many developers steal free software out of habit.
They can make their own versions with whatever restrictions they want, but *MY* code remains free, forever.
There is the rub....
It's now up to you to prove that you're the original author if somebody chooses to claim ownership of your code.
I made ZenToe public domain because I saw no benefit in preventing commercial versions of the code.
But some of the projects I am working on would suffer from a Microsoft embrace and extend... unless I had rights to the source code of ALL versions.
Re:Actually, one more factual error... (Score:4, Interesting)
The GPL is only a good thing if power over other peoples' code is more important to you than their freedom to use your code however they want. If that's the case, it's a great license. I'd rather make my moral decisions for myself, and let them make theirs.
Re:Actually, one more factual error... (Score:2)
This is precisely my choice. And an important one.
I can make a library, or implement (say) a protocol I have designed. I know full well that many commercial enterprises will not use even LGPL in their applications (they will want to integrate the protocol fully), and are unlikely to support some arbitrary unknown protocol from Joe Q Public.
So instead I put the code under BSD license, so any individual or organisation can choose to use the code if they want.
Yes, I am at the mercy of MS & partners doing the embrace and extend thing, but MS has shown itself (in cases) to be incapable of that tactic if the market is already sufficiently standardized (they can't change HTTP because of Apache, and they haven't managed to corrupt XML yet).
disservice? (Score:2, Interesting)
1. Stallman's ideas are communist - no relation to the way the USSR implemented its ideas of communism. Rather an original idea of communism.
2. Other open source licences are 'socialism' with fragments of communism here and there. See Finland and other European countries.
Taken that, I think Tim is doing the public a disservice, trying to confuse them and make the public analyse each of the licences. Why? Because most of the public is not able, interested, or has time to pick apart lawyer-made contraptions. Now if he was to say that the BSD licence is good, here's why, that would let common programmers understand the advantages of either and pick one.
Business being a thing that will consume anything to grow, open source licences are usable and possibly exploitable under some circumstances, while the GPL is least exploitable - AFAIK.
They're both right. (Score:3, Insightful)
Re:They're both right. (Score:2)
"users" are irrelevant to licensing issues (Score:4, Interesting)
(pure) users can't program; thus their "freedom" is a 1:1 coupling to the freedom of the programmer that is their "supplier".
The only freedoms that thus matter are those of programmers (and "users that can program", if you must). But an easier metric to compare licenses would be "Nth level recipient", i.e.:
zero level: the original programmer and licensor
1st level: the programmer that builds on the original code
2nd level and onward: programmer that wants to build on the N-1 level base.
The GPL gives "most freedom" to levels 0 and 2 onwards (the more "selfish" license), whereas the BSD license gives "most freedom" to level 1 (a license giving "most freedom" to all of them can't exist, it will always be a fundamental choice). As soon as a level is occupied by a "user", there won't be any N+1 levels after it, so "freedom" becomes irrelevant.
Re:"users" are irrelevant to licensing issues (Score:2)
Stuff like that, really. Software freedom means being able to use software how you like, whether that means installing it where, how, and as many times as you want, as well as meaning being able to change the code itself to better suit your purpose.
Some software has to be non-free (Score:2, Interesting)
In Stallman's universe, software companies just wouldn't exist. It would be impossible for a bunch of programmers to get together and support themselves by developing great software. They'd have to find some other thing they could sell along with it. But suppose they didn't want to do that. Suppose they just wanted to write software - they're screwed. Those people are no longer free to just write software!
The freedom to decide to charge for some of your software is a freedom, because it allows you to choose your career. Without the ability for anyone anywhere to ever charge for any software, the freedom for programmers to just be programmers disappears.
I'm not saying that Free software is a bad thing. But it has to co-exist with proprietary software for software development as a whole to remain viable.
Re:Some software has to be non-free (Score:4, Interesting)
Since his job & livelihood is funded by gov't grants, charity and tuition, he does not have to worry about actually producing profit.
It can be both (was Re:Some software ...) (Score:3, Interesting)
The copying issue is the problem and what I would love to see is a free license with the following restrictions:
I believe something like this would go a long way to making sure that developers get their due, and can earn a living by charging for software but other developers/users can make copies, share with friends, or learn from the code.
Re:It can be both (was Re:Some software ...) (Score:3, Interesting)
Re:It can be both (was Re:Some software ...) (Score:2)
Re:Some software has to be non-free (Score:2)
Free software - no problem. Probably a great idea for somethings. All software being free - not so great. The problem is that it means you can't have a software business.
You fail to understand the software development business. The vast majority of professional programmers (most estimates I've seen have put it at between 90% and 95%) are employed developing custom software that is seldom distributed beyond the company that employs them (or the company that hired them as consultants). Most web programmers would also fall into this category, their work is visible, but not distributed.
For most programmers, copyright protection is meaningless. It doesn't affect whether or not they make a living. It has no bearing on the software industry as a whole. On the flip side, wide availability of high quality Free software makes our jobs easier, and improves our profit margin (if a client refrains from spending $50,000 on Oracle licenses, but instead spends $20,000 on implementing a custom feature in PostgreSQL, both the client and the developer are better off).
There is only one segment of the industry (small in number of programmers, but highly visible) that counts on copyright for survival: The off-the-shelf software producers. Most of these companies are producing bad code at the expense of users. I won't shed a tear if the Microsofts or the Adobes of the world fail to make a profit.
Re:Some software has to be non-free (Score:2)
The vast majority of professional programmers (most estimates I've seen have put it at between 90% and 95%) are employed developing custom software
If this is true, then how can proprietary software be a burden to anybody? Why don't companies just write their own OS for internal use?
At any rate, these are figures that people pull out of their posteriors. Even if accurate, they ignore the issue of the relative importance of the work being performed. Also, being a minority doesn't make something less important. Far less than 1% of the people in your state are elected officials, but what if we decided to restrict their civil liberties?
These statistics are most often bandied about by people trying to get others to ignore the rights of proprietary software developers by placing them in a minority. If this kind of logic were applied to racial minorities, the person who did so would be justifiably blasted. Yet applying this logic to a professional minority seems to be socially acceptable. Why?
Re:Some software has to be non-free (Score:2)
In fact, in a natural market, you'd be laughed at for trying to sell software, since, in the absence of copyright, it's non-scarce. The only thing you could sell would be the time and effort you expend _writing_ the software in the first place.
Copyright is not a law of nature! It's a way to sell the same thing to suckers over and over again. Some of the greatest works of art and inventions of all time were produced in eras without copyright. My personal opinion is that if copyright were done away with, there'd be far less rubbish produced, but the best stuff would remain, since the best stuff tends to be done regardless of the profit motive.
Michael writes:
I disagree. Copyright Law is an expression of power of the copyright holder over the users of the media. Many licenses (eg, the typical Microsoft EULA) make use of the power of Copyright and Contract Law to claim even more power over the users. It makes sense to say that these licenses are an expression of power.
The GPL, and other Free Software licenses take no additional power over users beyond those already exerted by copyright laws. In fact, they give users additional freedoms that they would not otherwise have. I would call these licenses expressions of freedom, not power.
I take issue with O'Reilly's description of copyright law as a compromise between creators and users. There's absolutely no evidence that the rights of users are considered when copyright laws are made.
Historically, yes, copyright law has had much more to do with balancing the rights of creators with the rights of publishers. In the US, the rights of users are brought into the equation by the doctrine of Fair Use, which is a matter of legal precedent in the court system, not by creation of laws.
Fundamentally, however, any law is an agreement between "The People" (being those who permit the government to exist by following the rules and refraining from revolting), and those particular people governed by the law. Copyright law is no exception.
The basis of Copyright Law in the US is in the US Constitution [loc.gov], Article I, Section 8: "The Congress shall have Power... To promote the Progress of Science and useful Arts, by securing for limited Times to Authors and Inventors the exclusive Right to their respective Writings and Discoveries."
So basically, Copyright Law as it now stands in the US is a compromise between the users (via their representatives in Congress), and the creators (via their lobbyists in Washington). Yes, it is a one-sided compromise (observe that the lengths of copyright have always been set so that Mickey Mouse stays out of the public domain [asu.edu]).
If you want a different balance, make sure your will is known to your representatives. If your representatives ignore your will, vote for someone else. If enough people get involved, our government still won't be perfect, but at least it will better represent the will of people, and look less like the will of lobbyists.
For those of you in other nations, the basic theory is the same, the mechanisms are different.
a reply to michael... (Score:2, Insightful)
Either I have misunderstood what you have said (most likely) or you have little understanding of the idea behind copyright law.
Copyright law is (in most, some would argue all, cases) the only thing which
prevents you from making a copy of another person's intellectual property.
It presupposes that you accept the concept of "intellectual property" as valid.
Why would you want to accept the concept of intellectual property; the concept that someone else "owns" an idea, and has property rights to it?
You accept it because of the benefit it brings to you to do so. Or at least you do if you're smart.
The idea behind copyright law is that we agree as a society that
the benefit we derive from having Authors and Inventors share their ideas
is worth more than the cost of granting to them a limited
monopoly of control over the use of those works.
If you feel that this deal is no longer working to your benefit, you can agitate for a renegotiation. If we as a society
feel the same way, then we should re-write the terms of that deal.
We should all understand that whenever the terms of this deal
are changed, either to the benefit of the Authors and Inventors, or to the
benefit of the public, these changes will have repercussions.
I agree with you; since the establishment of copyright law in the United States, the terms of this
agreement have consistently been re-adjusted in favor of the Authors and Inventory.
(Or rather, in favor of the publishers. Was that intentional?)
Perhaps there is a need to re-evaluate the terms of this agreement once more.
Perhaps we need a Federal oversight committee to manage the
national Intellectual Property and Copyright issues for the benefit
of the society in the same manner that the Federal Reserve
system manages the money supply for the general benefit of the society?
Re:a reply to michael... (Score:2)
"All licensing is power" (Score:2)
While strictly true, this is a blatantly unfair claim. If we accept that actions are expressions of either freedom or power (as per Kuhn and Stallman's definition [oreillynet.com]), we must also accept that expressions of power either limit others' freedom, or limit others' power. Using power to limit freedom, we can all agree is evil. Using power to limit power, however, must be allowed in some form, unless you feel that no-one may stop thieves and murderers.
If you acknowledge that software licensing is a form of power (and it is RMS's primary contention that proprietary licensing is an exercise of power that deprives users of essential freedoms), then it follows that GPL licensing uses power to limit power. It becomes a question of whether it's acceptable for individuals to limit others' power in this way. But you can't simply vilify all forms of power.
Believe only in Real Things (Score:2)
So to avoid this in a talk about software licenses, I ask you to believe only in real things. The words Power and Freedom don't decide arguments. We're talking about the words "software" and "can do" which are real. What can you do with your software and why can or can't you? And are the reasons just?
Yes... I suppose "just" is another trap in unreality. But it's an opinion that I can't say how you answer. I guess Stallman has been asking you to ask yourself a question for a long time now. Is it okay to be fined for pirating software? Or should I say sharing software? Uh! The English language is such a mess. Don't trust your language to win arguments. You must depend on the reasoning of the reader to know what it is you are really talking about and not simply respond to rhetoric on vague words. The readers who do this are most probably not the same people who win arguments or who become President.
George Orwell warned us about this as well. He said "Political language [...] is designed to make lies sound truthful and murder respectable, and to give an appearance of solidity to pure wind." This is about political language but it applies equally here.
However you believe, believe only in real things. Honesty is greater than wit.
Liked the line... (Score:4, Flamebait)
Hello, RMS.
I like the project I'm working on. I want to share the source code, because I think a lot of other people might apply it in groovy ways that don't suggest themselves to me.
But YOUR viewpoint is brick for brick the same prison as the Redmond Institute for the Monopolistically Inclined.
Mr. O'Reilly, your moderate view is a breath of fresh air.
Re:Liked the line... (Score:2)
"Hello, RMS.
I like the project I'm working on. I want to share the source code, because I think a lot of other people might apply it in groovy ways that don't suggest themselves to me."
</snip>
While you're at it, do you think RMS will let me GPL this?
#include <iostream>
int main()
{
    std::cout << "Hello World";
    std::cout << std::endl;
    return 0;
}
Can't see eye to eye (Score:5, Insightful)
O'Reilly says
But that's not the same goal as RMS. RMS has repeatedly stated that he'd accept an inferior piece of software, if the superior product was non-free. RMS expects the right to copy the software, read the software, learn how it works, and make modifications to it. RMS wants the software to be unencumbered as to how you use it, where you use it, why you use it, who uses it, when you use it, EXCEPT for the tiny encumbrance that you don't deny anybody else the same freedoms.
Until O'Reilly argues on the same wavelength as RMS - which means either attacking the stated goals of RMS, or attacking the means RMS uses to achieve those goals - then O'Reilly won't have an essay worth reading. When you watch a debate you expect PRO and CON for the SAME argument, not PRO and PRO for DIFFERENT arguments.
On GPL and "less free" (Score:2)
Some people argue that GPL is "less free" because it cannot be turned proprietary by a third party, as with the BSD license. However, this argument stems from the belief that it's OK for some software to be proprietary--and proprietary software is clearly less free.
In some sort of ideal utopian society without copyright, these issues would be moot because software would be incapable of being sold and thus no economic advantage would be had from closed source. The only way software could be commercially produced in such a society would be by paying programmers / software companies for their focused labor instead of their end product. And in fact, this is the ultimate goal of true proponents of Open Source software. Though copyright may be with us for a while, the GPL is a huge step towards reducing its power in the software industry.
Sick of political bickering in software... (Score:2)
Both should shut the fuck up and let developers release software with whatever license they choose and let the developers (and by extension, users) decide which method wins out...or, more realistically, allow both methods to exist in parallel.
Re: Sick of political bickering in software... (Score:3, Insightful)
Many people claim that true freedom is the right to impose whatever restrictions you want on something that belongs to you, a part of property law, and since software is intellectual property this naturally applies to software too. Unfortunately this does not hold water.
If I own a piece of land then I am well within my rights to restrict your access to it. If I sell you a piece of land my right to restrict your use of it goes away. Why is this not so with software? Furthermore, if I buy a Honda, why should I expect the Honda corporation to restrict my rights to open the hood of the car, fix the engine if it breaks, etc.?
What RMS is doing is challenging our notion of "intellectual property". Intellectual property rights are not natural rights. Congress was given the power to grant limited monopolies to creators and inventors, to encourage them to develop the arts and sciences in this country. What is happening, particularly in the proprietary software field, is exactly the opposite. Companies are using these rights to stifle innovation and competition in the field, to ensure that customers must purchase whatever software they sell no matter how bloated/buggy/outdated it may be (*cough*Windows*cough*). This has the potential effect of creating a sort of "software illuminati". Much as the guilds of the past kept information about their trade secret, both to remain in business and as a form of protection from a Church that banned the acquisition of certain types of scientific knowledge, the proprietary software companies of today keep their source code, which is nothing but information about their craft, secret. The Renaissance and the Industrial Revolution both came about during periods of relative intellectual freedom, when anyone could acquire the scientific knowledge necessary to invent and develop new technologies, art, etc.
Free software promises to do the same thing in the software world. RMS realizes this, and so does Bill Gates which is why he's so afraid of it.
RMS has not called for a law banning any form of proprietary software (that I can tell), and we all know that he has very strong opinions on what kinds of policy should be implemented in his version of a free society. I can't speak for him but it looks like he's taking the correct approach in a free society, putting his money where his mouth is. He believes that free software is so damn good from a freedom standpoint that it will eventually win out over proprietary software in the end, and that makes the GNU project a sort of social experiment to determine if this is the case. So far the outcome seems hopeful despite the landscape being littered with the decaying husks of open source dot-coms: Linux, Mozilla, Apache, etc. usage is still growing.
This is why I use free software: because proprietary software is a car with the hood welded shut.
Freedom without power. (Score:2)
I don't agree with RMS on this. I think he's off base. He advocates freedom--to a point, which isn't freedom at all. I understand where he's coming from, the freedom to license software has become abused, but is this reason to remove it? I don't think so.
With any freedom there are responsibilities. When people abuse the freedom of speech, I can't advocate removing that freedom because they are using it to their own advantage. I can use my own freedoms to combat their misuse, however. That is the challenge, to combat misuse of freedoms. This is a duty of the masses, and not the elite. We have a responsibility to use our freedom of choice to combat what we see as misuse.
Limiting freedom of any sort isn't the answer. Freedom without power really isn't freedom.
http://slashdot.org/story/01/11/27/1537219/freedom-or-power-redux?sdsrc=next
KILLPG(3) Linux Programmer's Manual KILLPG(3)
NAME
       killpg - send signal to a process group

SYNOPSIS
       #include <signal.h>

       int killpg(int pgrp, int sig);

   Feature Test Macro Requirements for glibc (see feature_test_macros(7)):

       killpg():
           _XOPEN_SOURCE >= 500
               || /* Since glibc 2.19: */ _DEFAULT_SOURCE
               || /* Glibc <= 2.19: */ _BSD_SOURCE

RETURN VALUE
       On success, zero is returned.  On error, -1 is returned, and errno
       is set to indicate the error.

CONFORMING TO
       POSIX.1-2001, POSIX.1-2008, SVr4, 4.4BSD (killpg() first appeared
       in 4BSD).

NOTES
       There are various differences between the permission checking in
       BSD-type systems and System V-type systems.  See the POSIX
       rationale for kill(3p).

   C library/kernel differences
       On Linux, killpg() is implemented as a library function that makes
       the call kill(-pgrp, sig).

SEE ALSO
       getpgrp(2), kill(2), signal(2), capabilities(7), credentials(7)

COLOPHON
       This page is part of release 5.13 of the Linux man-pages project.
       A description of the project, information about reporting bugs,
       and the latest version of this page, can be found at.

Linux                           2021-03-22                       KILLPG(3)
Pages that refer to this page: kill(2), sigaction(2), signal(2), credentials(7), signal(7)
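For reference, Python exposes the same call as os.killpg. A small POSIX-only sketch (not part of the man page itself) that starts a child in its own process group and then signals the whole group:

```python
import os
import signal
import subprocess
import sys

# Start a child in its own session/process group so killpg
# targets the child's group rather than our own.
child = subprocess.Popen(
    [sys.executable, "-c", "import time; time.sleep(30)"],
    start_new_session=True,
)
pgid = os.getpgid(child.pid)

# os.killpg(pgrp, sig) is Python's wrapper around killpg(3):
# sig is delivered to every process in process group pgrp.
os.killpg(pgid, signal.SIGTERM)

child.wait()
print(child.returncode)  # negative value: terminated by that signal
```

A negative return code from subprocess means the child was killed by the corresponding signal.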
https://man7.org/linux/man-pages/man3/killpg.3.html
The following code should build just fine on modern versions of swift-corelibs-xctest:
import XCTest

class PassingTestCase: XCTestCase {
    static var allTests: [(String, PassingTestCase -> () throws -> Void)] {
        return [
            ("test_passes", test_passes),
        ]
    }

    func test_passes() {
        XCTAssert(true)
    }
}

XCTMain([
    testCase(PassingTestCase.allTests),
])
However, it currently fails with:
/swift-execution/code-tmp.swift:3:7: error: type 'PassingTestCase' does not conform to protocol 'XCTestCaseProvider'
class PassingTestCase: XCTestCase {
      ^
XCTestCaseProvider hasn't been included in XCTest in any snapshot since March 1, so I assume the Sandbox is running a fairly old version of XCTest.

This seems odd, though: the Sandbox uses Swift 3, but swift-corelibs-xctest wasn't migrated to Swift 3 until *after* the commit that removed `XCTestCaseProvider`. So what's the deal? Which version of swift-corelibs-xctest is being used on the Sandbox? Is it a fork?
Answer by Karl_Weinmeister (1) | May 18, 2016 at 01:54 PM
Hi @modocache, with the latest Swift snapshot available on the Swift Sandbox, you will now see:
Test Case 'PassingTestCase.test_passes' started.
Test Case 'PassingTestCase.test_passes' passed (0.0 seconds).
Executed 1 test, with 0 failures (0 unexpected) in 0.0 (0.0) seconds
Total executed 1 test, with 0 failures (0 unexpected) in 0.0 (0.0) seconds
https://developer.ibm.com/answers/questions/261810/$%7BprofileUser.profileUrl%7D/
Running and Scheduling QGIS Processing Jobs¶
You can automate a lot of tasks in QGIS using Python scripting (PyQGIS) and the Processing Framework. Most of the time, you would run these scripts manually while QGIS is open. While that is helpful, many times you need a way to run these jobs via the command-line, without needing to open QGIS. Fortunately, you can write standalone python scripts that use QGIS libraries and can be run via the command-line. In this tutorial, we will learn how to write and schedule a job that uses the QGIS Processing framework.
Exercise Description¶
Let’s say we are working on some analysis using shapefiles of a region. The shapefiles are updated on a daily basis and we always need the latest file. But before we can use these files, we need to cleanup the data. We can setup a QGIS job that automates this process and runs it daily so you have the latest cleaned up shapefiles for your work. We will write a standalone Python script that downloads a shapefile and run topological cleaning operations on a daily basis.
Get the Data¶
Geofabrik provides daily updated shapefiles of OpenStreetMap datasets.
We will use shapefiles for Fiji for this exercise. Download the fiji-latest.shp.zip and unzip it to a folder on your disk.
Data Source [GEOFABRIK]
Procedure¶
We will first run through the process of cleaning the shapefile manually to note the commands that we will use in the python script. Launch QGIS and go to.
Browse to the folder containing the unzipped shapefiles, select the roads.shp file and click Open.
First we must re-project the roads layer to a Projected CRS. This will allow us to use meters as units when performing analysis instead of degrees. Open.
Search for the Reproject layer tool. Double-click it to launch the dialog.
In the Reproject layer dialog, select the roads layer as the Input layer. We will use EPSG:3460 Fiji 1986 / Fiji Map Grid as the Target CRS. Click Run.
Once the process finishes, you will see the reprojected layer loaded in QGIS. Go to.
In the History and Log dialog, expand the Algorithm folder and select the latest entry. You will see the full processing command shown in the bottom panel. Note this command for use in our script.
Back in the main QGIS window, click the CRS button in the bottom-right corner.

In the Project Properties | CRS dialog, check Enable on-the-fly CRS transformation and select EPSG:3460 Fiji 1986 / Fiji Map Grid as the CRS. This will ensure that our original and reprojected layers line up correctly.
Now we will run the cleaning operation. GRASS has a very powerful set of topological cleaning tools. These are available in QGIS via the v.clean algorithm. Search for this algorithm in the Processing Toolbox and double-click it to launch the dialog.
You can read more about the various tools and options in the Help tab. For this tutorial, we will be using the snap tool to remove duplicate vertices that are within 1 meter of each other. Select Reprojected layer as the Layer to clean. Choose snap as the Cleaning tool. Enter 1.00 as the Threshold. Leave the other fields blank and click Run.
Once the processing finishes, you will see 2 new layers added to QGIS. The Cleaned vector layer is the layer with topological errors corrected. You will also have an Errors layer which highlights the features that were repaired. You can use the errors layer as a guide and zoom in to see vertices that were removed.
Go to the History and Log dialog and note the full processing command for later use.
We are ready to start coding now. See the A Text Editor or a Python IDE section in the Building a Python Plugin tutorial for instructions to set up your text editor or IDE. For running standalone python scripts that use QGIS, we must set various configuration options. A good way to run standalone scripts is to launch them via a .bat file. This file will first set the correct configuration options and then call the python script. Create a new file named launch.bat and enter the following text. Change the values according to your QGIS configuration. Don't forget to replace the username with your own username in the path to the python script. The paths in this file will be the same on your system if you installed QGIS via the OSGeo4W Installer. Save the file on your Desktop.
Note
Linux and Mac users will need to create a shell script to set the paths and environment variables.
REM Change OSGEO4W_ROOT to point to the base install folder
SET OSGEO4W_ROOT=C:\OSGeo4W64
SET QGISNAME=qgis
SET QGIS=%OSGEO4W_ROOT%\apps\%QGISNAME%
set QGIS_PREFIX_PATH=%QGIS%

REM Gdal Setup
set GDAL_DATA=%OSGEO4W_ROOT%\share\gdal\

REM Python Setup
set PATH=%OSGEO4W_ROOT%\bin;%QGIS%\bin;%PATH%
SET PYTHONHOME=%OSGEO4W_ROOT%\apps\Python27
set PYTHONPATH=%QGIS%\python;%PYTHONPATH%

REM Launch python job
python c:\Users\Ujaval\Desktop\download_and_clean.py
pause
Create a new python file and enter the following code. Name the file download_and_clean.py and save it on your Desktop.
from qgis.core import *

print 'Hello QGIS!'
Switch to your Desktop and locate the launch.bat icon. Double-click it to launch a new command window and run the script. If you see Hello QGIS! printed in the command window, your configuration and setup worked fine. If you see errors or do not see the text, check your launch.bat file and make sure all the paths match the locations on your system.
Back in your text editor, modify the download_and_clean.py script to add the following code. This is the bootstrap code to initialize QGIS. It is unnecessary if you are running the script within QGIS, but since we are running it outside QGIS, we need to add it at the beginning. Make sure you replace the username with your username. After making these changes, save the file and run launch.bat again. If you see Hello QGIS! printed, you are all set to add the processing logic to the script.
import sys
from qgis.core import *

# print 'Hello QGIS!'
Recall the first processing command that we had saved from the log. This was the command to re-project a layer. Paste the command into your script and add the surrounding code as follows. Note that processing commands return the paths to the output layers as a dictionary. We are storing this as the ret value and printing the path to the reprojected layer.
roads_shp_path = "C:\\Users\\Ujaval\\Downloads\\fiji-latest.shp\\roads.shp"
ret = processing.runalg('qgis:reprojectlayer', roads_shp_path, 'EPSG:3460', None)
output = ret['OUTPUT']
print output
Run the script via launch.bat and you will see the path to the newly created reprojected layer.
Now add the code for cleaning the topology. Since this is our final output, we will pass the output file paths as the last 2 arguments to the grass:v.clean algorithm. If you leave these blank, the output will be created in a temporary directory.
processing.runalg("grass:v.clean", output, 1, 1, None, -1, 0.0001,
                  'C:\\Users\\Ujaval\\Desktop\\clean.shp',
                  'C:\\Users\\Ujaval\\Desktop\\errors.shp')
Run the script and you will see 2 new shapefiles created on your Desktop. This completes the processing part of the script. Let’s add the code to download the data from the original website and unzip it automatically. We will also store the path to the unzipped file in a variable that we can pass to the processing algorithm later. We will need to import some additional modules for doing this. (See the end of the tutorial for the full script with all the changes)
import os
import urllib
import zipfile
import tempfile
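As a sketch of what the download-and-unzip step can look like (written here in Python 3 standard library for illustration, whereas the tutorial itself targets Python 2; the exact Geofabrik URL is an assumption, check their site for the current layout):

```python
import os
import tempfile
import urllib.request
import zipfile

def fetch_and_unzip(url, dest_dir):
    """Download a zip archive into dest_dir and extract it there.

    Returns the path to roads.shp inside the extracted data.
    """
    zip_path = os.path.join(dest_dir, os.path.basename(url))
    urllib.request.urlretrieve(url, zip_path)
    with zipfile.ZipFile(zip_path) as z:
        z.extractall(dest_dir)
    return os.path.join(dest_dir, 'roads.shp')

# Hypothetical usage; the URL is an assumption:
# roads_shp_path = fetch_and_unzip(
#     'http://download.geofabrik.de/fiji-latest.shp.zip',
#     tempfile.mkdtemp())
```

Storing the returned path in roads_shp_path is what lets the later processing calls find the fresh data.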
Run the completed script. Every time you run the script, a fresh copy of the data will be downloaded and processed.
To automate running this script on a daily basis, we can use the Task Scheduler in Windows. Launch the Task Scheduler and click Create Basic Task.
Note
Linux and Mac users can use cron jobs to schedule tasks.
Name the task Daily Download and Cleanup and click Next.

Select Daily as the Trigger and click Next.
Select a time as per your liking and click Next.
Choose Start a program as the Action and click Next.

Click Browse and locate the launch.bat script. Click Next.
Click Finish at the last screen to schedule the task. Now the script will automatically launch at the specified time to give you a fresh copy of cleaned data everyday.
Below is the full download_and_clean.py script for your reference.
import sys
from qgis.core import *
import os
import urllib
import zipfile
import tempfile

# Download and unzip the latest shapefile
print 'Downloaded file to %s' % roads_shp_path

# Reproject the Roads layer
print 'Reprojecting the roads layer'
ret = processing.runalg('qgis:reprojectlayer', roads_shp_path, 'EPSG:3460', None)
output = ret['OUTPUT']

# Clean the Roads layer
print 'Cleaning the roads layer'
processing.runalg("grass:v.clean", output, 1, 1, None, -1, 0.0001,
                  'C:\\Users\\Ujaval\\Desktop\\clean.shp',
                  'C:\\Users\\Ujaval\\Desktop\\errors.shp')
print 'Success'
http://www.qgistutorials.com/fr/docs/running_qgis_jobs.html
1. Roll

Present 12/9
BEA Systems, Mark Nottingham
Canon, Herve Ruellan
IBM, David Fallside
IBM, John Ibbotson
IONA Technologies, Suresh Kodichath (scribe)
Canon, Jean-Jacques Moreau
IBM, Noah Mendelsohn
Oracle, Jeff Mischkinsky
SAP AG, Volker Wiechers

Regrets
Microsoft Corporation, Martin Gudgin

Absent
Microsoft Corporation, Jeff Schlimmer

2. Agenda review
[Chair] other items of business, i18n policies, Yves?
[yves_lafon] yes
[chair] will be covered in item 5

3. Minutes of 29 September 2004 approved without any objection

4. Review action items
[davidF] 1 modification, MarkN with rec. issue 25 is done
[yves_lafon] rec. item media type recommendation is done
[davidF] Noah draft. Item 6 will take care. Didn't generate boiler plate
[davidF] 501 & Gudge other one is pending

5. Status reports and misc
[scribe] XOP media type registration
[yves_lafon] No feedback. I will ask for approval. Plan to do it this or next week
[davidF] Will that take care of mediatype registration?
[yves_lafon] yes
ACTION: Yves to seek IESG review (via Dan Connolly and Martin Duerst) for XOP media type
[davidF] change MarkN name on existing action item to Yves
[scribe] XMLP/WSD Task Force and WSDL Media Type document
[AnishK] discussions in TF, not a lot of activity, current doc. reflects resolution of all issues, WSG/Jonathan to ask WSG for Last Call
[davidF] who volunteers to review this doc. with Anish and MarkN?
[marcH] volunteers to review
[AnishK] Name of doc. is misleading. It is schema based, content-type wording is not correct
[MarkN] Add a line or 2 to the document
[Davidf] Marc Hadley, when will you be able to review it?
[MarcH] end of next week (10/15/2004)
ACTION: Marc to review Media Type document by Oct 15
[davidF] new item (SOAP/HTTP with IETF)
[yves_lafon] input is needed from WG members who worked on SOAP/HTTP
[scribe] Future of XMLP WG after MTOM etc are published as Recommendations
[davidF] one possibility is to stay and meet occasionally, meeting every month or two.
[davidF] Per its charter, the WG exists until May 2005. One proposal is to "meet occasionally" until then, are there any other ideas?
[davidF] Yves will look into extension of charter
[yves_lafon] should it be a new one or edit the existing one?
[yves_lafon] I will look into it next week
ACTION: Yves to figure out how to have a charter extension until June
[davidF] will put the question through WG and put a request

6. Candidate Recommendation
[scribe] Test status report, implementation page
[davidF] there are now sufficient interop traces. Thanks to implementers for making it happen. The traces need sanity checking, is there a volunteer?
[JohnI] volunteers
ACTION: JohnI to run a sanity check on the implementation trace by next week
[davidF] 501, where are we with this? Noah drafted a text, inputs from members
[davidf] no objection from members, much agreement to send Noah's draft text to Andrea with 501 response
ACTION: Suresh to send Noah's draft email to xmlp-comment and originator to re-close issue 501 (point 1)
[davidF] please send it to Andrea, i18n and xmlp-comments
[davidF] regarding point 2, the precedence rules in
[yves_lafon] don't deal with it, just give information precedence
[davidF] Gudge had an action to seek clarification on this question, but it is not yet done, so we cannot be sure about yves' assumption
[davidF] second choice is for the application to decide
Anish comments that the HTTP and Mime information do not conflict
[yves_lafon] asking I18n what they want to see
[davidF] can you formulate a question
[yves_lafon] yes
ACTION: yves to ask Vine for clarification on precedence comment on 501 reply (and 502 about IRI)
[davidF] 505 is now done. Nilo had made the edits
[scribe] Issue 506
[davidF] anish suggests we need to correct the namespace.
[scribe] no objections from WG members
[davidF] anish to update the editor copy
ACTION: Anish to send email to xmlp-comments to close the issue 506 with the proposed resolution (and edit the edcopy)
[scribe] Review comments
[scribe] 1. Shall we reference XML 3rd edition?
[davidF] 3 of them are fairly minor
[HerveR] what is the difference, doesn't it impact us
[davidF] in principle, changes are mostly editorial, so we can
[MarcH] third edition semantically equivalent
[davidF] lack of energy in WG, so going to demand less AIs
[davidF] does anyone object to referencing the third edition?
[scribe] no objections from members
[scribe] XOP comments
[marcH] comments are editorial, only non-editorial is final one
[marcH] should be able to reconstruct, one that is included by us and one done by the user
[marcH] will make the message not able to reconstruct
[anishK] cause no harm to clarify
ACTION: Marc to write a clarification sentence for Section 2 by next week
[scribe] WG agrees that other comments in Marc's email are editorial
ACTION: XOP Editors Incorporate the first 3 comments on
[MarcH] take 3: During infoset reconstruction a processor is unable to differentiate between xop:Include elements inserted during XOP package construction and those that were part of the original infoset.
ACTION: MTOM editors to insert the above "take 3" text
[AnishK] does the statement to precede
[MarcH] that is the intent
[scribe] MTOM comment
[marcH] editorial
[davidF] WG agrees this is editorial, and with MarcH text
ACTION: MTOM Editors Incorporate proposal
[davidf] 501 and 502 to be closed down (?)
http://www.w3.org/2000/xp/Group/4/10/06-minutes.html
Opened 8 years ago
Closed 3 years ago
#11448 closed Bug (fixed)
Defining relationships after querying a model does not add a reverse lookup to the referenced model
Description (last modified by )
Ok, that sounds vague but I don't know how to better describe it. Basically it boils down to having this in models.py:
from django.db import models
import os

class c1(models.Model):
    name = models.CharField("Name", max_length=30)

default_c1 = c1.objects.get(name="non_existant")

class c2(models.Model):
    other = models.ForeignKey(c1, default=default_c1)
Querying c1 later with c1.objects.filter(c2__pk=0) will fail:
FieldError: Cannot resolve keyword 'c2' into field. Choices are: id, name
Minimal testcase project attached (models.py is slightly bigger than above). You can reproduce the problem with:
# Create the database (clobbers test.db in the current dir)
./manage.py syncdb
# See that without querying in between it works
echo -e "from proj1.app1.models import c1\nc1.objects.filter(c2__pk=0)" | ./manage.py shell
# See that with querying in between it fails
echo -e "from proj1.app1.models import c1\nc1.objects.filter(c2__pk=0)" | BREAK_ME=1 ./manage.py shell
Found on 1.0.2, confirmed with trunk (fresh checkout, less than 30 minutes ago)
Attachments (4)
Change History (23)
Changed 8 years ago by
comment:1 Changed 8 years ago by
Manually deleting the _related_objects_cache (and for good measure the _related_many_to_many_cache) works around the problem. I'd rather see django do that when defining a new model. Patch to follow soon.
Changed 8 years ago by
comment:2 Changed 8 years ago by
Attached patch clears the relevant _related_*_cache in contribute_to_related_class of OneToOneField, ManyToManyField and ForeignKey. This makes new relationships visible even if a query that triggers filling this cache has been executed beforehand. Tested against the test project (and another, proprietary, one).
comment:3 Changed 8 years ago by
Couple things:
- del is a statement, so no need for the parentheses
- I think the patch reads a little better as a hasattr() test instead of catching the exception. Also put a comment next to each of these saying if the cache is populated we clear it out because it needs to be repopulated to include the attr we're about to assign.
- Can you put a testcase in the Django tests that demonstrates that this has been fixed.
Otherwise the patch looks good to me.
comment:4 Changed 8 years ago by
- That's my coding style slipping through, will fix
- I followed the style in get_all_related_objects_with_model, but agree that a hasattr() reads better.
- I'm very unfamiliar with django's test setup. Can you point me to some documentation?
comment:5 Changed 8 years ago by
1) Yeah, I understand, but we try to follow PEP8 where possible.
3) Here are the docs on Django's test framework:, here's some info on getting setup to run the test suite:, lastly take a look at the tests/regressiontests directory of the source. Each directory in there is a set of self contained tests. However, the nature of this problem makes me think it will be very difficult to tests, so if it seems impossible I wouldn't waste a ton of time on it.
Changed 8 years ago by
updated patch, including testcase
comment:6 Changed 8 years ago by
Updated patch: fixing del(), adding comments and adding a test case. ./runtests.py -v2 query_between_definitions fails without the (rest of the) patch applied and succeeds otherwise. I cheated a little bit by not actually running a query, but the calling init_name_map(). This is the bit that actually causes the problem and running a query would call this function too.
comment:7 Changed 8 years ago by
comment:8 Changed 8 years ago by
comment:9 Changed 7 years ago by
I retract this ticket, I just found that this creates issues elsewhere as well, which cannot be solved this easily. A warning in the docs about not doing models.py-level queries or mixing forms and models would be appreciated though.
Changed 7 years ago by
Patch to rebuild cache if key error
comment:10 Changed 7 years ago by
I've upload a patch that rebuilds the cache if there is a key error in the options class. This should fix any instances where bad cache values cause a key error; also, there is minimal performance hit, as once a good cache is built, it is used.
comment:11 Changed 7 years ago by
comment:12 Changed 6 years ago by
comment:13 Changed 6 years ago by
comment:14 Changed 6 years ago by
The tests would need to be rewritten using unittests since this is now Django's preferred way. Also there seems to be multiple directions suggested for fixing this issue. One needs to clarify which approach suits best.
comment:15 Changed 6 years ago by
Edit: didn't touch anything, yet apparently somehow unset the "easy" flag. Turning it back on, sorry for the trouble.
comment:16 Changed 6 years ago by
comment:17 Changed 6 years ago by
comment:18 Changed 6 years ago by
comment:19 Changed 3 years ago by
This is "fixed" by the app-loading refactor, in the sense that this definition of models now raises RuntimeError: App registry isn't ready yet. Making SQL queries at import time has never worked correctly.
Minimal project to demonstrate the bug
https://code.djangoproject.com/ticket/11448
Creating components is a great way to remove redundancy in Ember.js apps. For example, you might have a custom button that is used over and over in many different views but is defined only once. This is great, but what if you want to reuse an entire nested page layout instead? It’s easy to do with yields and some Ember magic.
The first step is to create a new component for the page.
Your .js file should look something like the code below, and you should have a property for each dynamic section.
import Ember from 'ember';

export default Ember.Component.extend({
  title: {isTitle: true},
  formGroups: {isFormGroups: true},
  error: {isError: true},
  footer: {isFooter: true},
  pageFooter: {isPageFooter: true}
});
Your .hbs file should look something like this:
{{title-bar}}
<div class="page-wrapper">
  <div class="content">
    <div class="custom-header horizontal-box">
      <div class="large-title">
        {{yield title}}
      </div>
    </div>
    <form>
      <div class="vertical-box horizontal-box">
        <div class="custom-body">
          <div class="custom-form">
            {{yield formGroups}}
            {{yield error}}
          </div>
        </div>
      </div>
      <div class="form-footer">
        {{yield footer}}
      </div>
    </form>
  </div>
  {{yield pageFooter}}
</div>
Using this page layout is easy. Inside your view's .hbs template file, just add the following code:
{{#page-layout as |section|}}
  {{#if section.isTitle}}
    "Add New Comment"
  {{else if section.isFormGroups}}
    <div class="form-group">
      <label for="comment-name">Name</label>
      {{input value=model.name id="comment-name" class="form-control" autofocus="autofocus"}}
      <label for="comment-body">Comment</label>
      {{textarea value=model.body id="comment-body" class="form-control"}}
    </div>
  {{else if section.isError}}
    {{#if hasSwearWords}}
      {{validation-error error="Comments must not contain swear words."}}
    {{/if}}
  {{else if section.isFooter}}
    <button {{action "submit"}}>Submit</button>
  {{/if}}
{{/page-layout}}
You can use this page layout in many different view templates. The benefit here is that the page layout guarantees the exact same HTML structure, and any tweaks will be applied to all views.
The page layout is intentionally pretty dumb: it only cares about the nested structure of the elements. It leaves the view templates in charge of all business logic and data management.
Hope this approach is helpful for you.
Nice! I was thinking about how to do “multiple named yields” in the layout components for some time now. This is kinda hacky (with the {{#if}}s) but it solves the problem. Thanks for this, I’m gonna try it right away :)
I’d recommend taking a look at the ember-block-slots addon, which solves this issue with less boilerplate, a clean syntax, and support for yielding back different values per targeted block
Hey Steven,
That’s pretty sweet – thanks for linking to it.
Jeff
BTW, you can use the `hash` helper, so you don't have to implement any `title: {isTitle: true}` stuff in your `component.js`.

Example:

{{yield (hash isTitle=true)}}
Hey Ivan,
Thanks for checking out my post. Also, thanks for the suggestion – that’s pretty sweet!
Jeff
Minor thing: you could also yield `contextual` components or "sub-components" to remove the need for the `if`s.
https://spin.atomicobject.com/2016/05/03/reusable-page-layouts-ember/
Here is my problem: in a text variable that contains commas, I try to delete only the commas located between two delimiters (in fact [ and ]). For example, using the following string:
input = "The sun shines, that's fine [not, for, everyone] and if it rains, it Will Be better."
output = "The sun shines, that's fine [not for everyone] and if it rains, it Will Be better."
I know how to use .replace for the whole variable, but I can not do it for only a part of it.
There are some related topics on this site, but I did not manage to adapt them to my own question, e.g.:
import re

Variable = "The sun shines, that's fine [not, for, everyone] and if it rains, it Will Be better."
Variable1 = re.sub("\[[^]]*\]", lambda x: x.group(0).replace(',', ''), Variable)
First you need to find the parts of the string that need to be rewritten (you do this with re.sub). Then you rewrite those parts.
The function call var1 = re.sub("re", fun, var) means: find all substrings in the variable var that conform to the regular expression re; process them with the function fun; return the result; the result will be saved to the var1 variable.
The regular expression "\[[^]]*\]" means: find substrings that start with [ (\[ in the regex), contain anything except ] ([^]]* in the regex) and end with ] (\] in the regex).
For every occurrence found, run a function that converts this occurrence to something new. The function is lambda x: x.group(0).replace(',', ''). That means: take the string that was found (x.group(0)), replace ',' with '' (in other words, remove the commas) and return the result.
You can use an expression like this to match them (if the brackets are balanced):

,(?=[^][]*\])

Use it something like:

re.sub(r",(?=[^][]*\])", "", s)
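A quick check of this lookahead approach against the question's example string (variable names here are just for illustration):

```python
import re

text = ("The sun shines, that's fine [not, for, everyone] "
        "and if it rains, it Will Be better.")

# Remove any comma that is followed by a closing ] with no
# intervening bracket characters, i.e. commas inside [...].
cleaned = re.sub(r",(?=[^][]*\])", "", text)
print(cleaned)
# → The sun shines, that's fine [not for everyone] and if it rains, it Will Be better.
```

The commas outside the brackets survive because the lookahead hits a [ (or the end of the string) before it can reach a ].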
Here is a non-regex method. You can replace your [] delimiters with say [/ and /], and then split on the / delimiter. Then every odd string in the split list needs to be processed for comma removal, which can be done while rebuilding the string in a list comprehension:

>>> chunks = Variable.replace('[','[/').replace(']','/]').split('/')
>>> ''.join(sen.replace(',','') if i%2 else sen for i, sen in enumerate(chunks))
"The sun shines, that's fine [not for everyone] and if it rains, it Will Be better."
If you don't fancy learning regular expressions (see other responses on this page), you can use the partition command.
sentence = "the quick, brown [fox, jumped , over] the lazy dog"
left, bracket, rest = sentence.partition("[")
block, bracket, right = rest.partition("]")
"block" is now the part of the string in between the brackets, "left" is what was to the left of the opening bracket and "right" is what was to the right of the opening bracket.
You can then recover the full sentence with:
new_sentence = left + "[" + block.replace(",","") + "]" + right
print new_sentence
# the quick, brown [fox jumped over] the lazy dog
If you have more than one block, you can put this all in a for loop, applying the partition command to "right" at every step.
Or you could learn regular expressions! It will be worth it in the long run.
http://m.dlxedu.com/m/askdetail/3/540d7e34e8e0fbd38d867b6e77dbe278.html
Created on 2009-12-15 07:31 by kcwu, last changed 2012-10-16 20:00 by pitrou. This issue is now closed.
"flags" is only supported on certain OS. FreeBSD is one of them.
FreeBSD itself support chflags but not all of its file systems do.
On FreeBSD, copystat() will fail on zfs. The exception is OSError and
errno is EOPNOTSUPP. According to manpage chflags(2), the errno means
"The underlying file system does not support file flags"
If the file system doesn't support flags, we should not call
os.chflags() at first or should not raise exception.
In my patch, I just ignore EOPNOTSUPP exception.
This patch looks like the right thing to do. I'm +1 but I don't have a BSD box to test it on.
For the record, this seems to break Mercurial on NFS-mounted repositories:
Barry, I suppose this doesn't warrant being a release blocker for 2.6.5, but in any case you're welcome to advise.
A better approach might be to change the function to:
def copystat(src, dst):
    st = os.stat(src)
    st_dst = os.stat(dst)
    mode = stat.S_IMODE(st.st_mode)
    mode_dst = stat.S_IMODE(st_dst.st_mode)
    if hasattr(os, 'utime'):
        if st.st_atime != st_dst.st_atime or st.st_mtime != st_dst.st_mtime:
            os.utime(dst, (st.st_atime, st.st_mtime))
    if hasattr(os, 'chmod'):
        if mode != mode_dst:
            os.chmod(dst, mode)
    if hasattr(os, 'chflags') and hasattr(st, 'st_flags'):
        if st.st_flags != st_dst.st_flags:
            os.chflags(dst, st.st_flags)

This avoids the system calls for the (common) case of not having to change anything at all. Given that the flags are normally not set, it also avoids the problem with NFS.
I committed the simple patch in r79299 (trunk), r79300 (2.6), r79301 (py3k), r79302 (3.1). Tarek suggested a test could be added for this, assigning the issue to him.
Tests added in issue14662. This issue can be closed.
Ok, thanks.
http://bugs.python.org/issue7512
blist 1.1.1 is now available:
The blist is a drop-in replacement for the Python list that provides better performance when modifying large lists.

blist 1.1 introduces other data structures based on the blist:
from blist import blist, btuple

x = blist([0])  # x is a blist with one element
x *= 2**29      # x is a blist with > 500 million elements
y = btuple(x)   # y is a btuple with > 500 million elements
We're eager to hear about your experiences with the blist. You can email me at daniel@stutzbachenterprises.com. Alternately, bug reports and feature requests may be reported on our bug tracker at:
-- Daniel Stutzbach, Ph.D. President, Stutzbach Enterprises, LLC
python-announce-list@python.org
https://mail.python.org/archives/list/python-announce-list@python.org/thread/PZ3K63GHNDTDPVVCVJ6JE4UMW37BNWLV/
In the previous post I wrote an introduction to Category Theory, talking about composition. In this post I am going to talk about types and functions in Category Theory.
Types and Functions
You can compose arrows, but not any two arrows: the target object of one arrow must match the source object of the next. In terms of programming languages, a function's output type must match the input type of the next function.
What are Types?
You can think of types as sets; they can be finite (Boolean, Char) or infinite (String, Integer). In Category Theory there is a category of sets, called Set. In this category, objects are sets, and arrows are functions from one set to another.
The above is defined in the mathematical world; in the real world you can think of sets as types in a programming language, and of functions between sets as functions in a programming language. The problem is that a mathematical function just knows the answer, whereas in a programming language you must write the code of that function, and that function may never return. To deal with this, many programming languages extend every type with a special value called bottom. Haskell's bottom is denoted by _|_; in Scala the bottom type is Nothing (see the Nothing API documentation). A function that may return bottom is called a partial function.
The Mathematical Model
If you are a developer, I am sure you have found yourself running an interpreter in your mind while debugging. We humans aren't very good at this, since it is difficult to keep track of all the variables. There is an alternative way to know whether a program is correct; it's called Denotational Semantics. In short, Denotational Semantics is an approach to formalizing the meaning of a programming language: it is concerned with finding mathematical objects, called domains, that represent what programs do.
Opposed to Denotational Semantics is Operational Semantics. With Operational Semantics you try to prove certain properties of a program (such as correctness) by constructing logical proofs about its execution; this is often too complex.
By having a mathematical model (Denotational semantics) you can write formal proofs proving your software correctness.
Pure & Impure functions
Pure functions are those that always return the same result for the same input and have no side effects. Mathematical functions, for example, are always pure. On the contrary, impure functions have side effects.
Examples of types
Let's now see a few types, starting from the empty set.
Which type corresponds to the empty set? Think about it for a moment; I've mentioned it above. In Haskell this type is
Void, in Scala
Nothing. This set has no elements. Previously I said there is a category called Set, in which objects are sets and arrows are functions. In this context, for any set
A there exists exactly one function
f from the empty set
{} to
A: the empty function.
Could you ever define a function that takes a parameter of type
Void (an empty set)? Yes, you can, but you won't be able to call it, since you can't produce an argument of type
Void. However, the return type of such a function could be anything. These functions (those that can return any type) are called polymorphic in the return type; here is an example:
cantCallMe :: Void -> a
A lower-case letter in a function's declaration in Haskell means
a can be of any type. Here are examples in Scala:

    def cantCallMe(a: Nothing) = 1
    def cantCallMe(a: Nothing) = "str"
Moving on, what type corresponds to the singleton set, that is, a type with only one element (one possible value)? In C++ this type is
void, not to be confused with Haskell's
Void:
Void is the empty set, whereas
void in C++ is a singleton set, because it's a set with only one element. In fact, you can call functions receiving
void arguments. An example of such a function is
int f314() { return 314; }; you can call this function, and it will always return 314.
Although it may seem this function takes no arguments, it does, because if you couldn't pass it an argument, you could not call it. It is taking a dummy value of a type with only one possible instance (a singleton set). Let's see the same example in Haskell and Scala:
    f314 :: () -> Integer   -- from Unit to Integer
    f314 () = 314
Here it becomes clearer that
f314 takes a parameter: the
Unit type (which allows only one value). You call this function with
f314 (), which denotes more explicitly that the function takes one parameter.
In Scala this type is also called Unit, and its unique value is also denoted by
(), as in Haskell:
def f314() = 314 /* from () => Int */
All this may seem like nonsense, but we are building the concepts bottom-up; as you delve more deeply into Category Theory, it will make more and more sense. For example, with this knowledge you can avoid mentioning explicitly the elements of a set: you can now reference them with arrows (functions in this case, since we are in the category of sets). Functions going from Unit to any type A are in one-to-one correspondence with the elements of that set A.
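That one-to-one correspondence is easy to illustrate concretely; here is a small Python sketch (the set `A` and the helper name are made up for illustration):

```python
# Each element a of A determines one function from the one-element set to A
# (the function that constantly returns a), and every such function picks out
# exactly one element: a bijection between elements and "arrows from Unit".
A = {"red", "green", "blue"}

def arrow_from_unit(a):
    # the function () -> A that constantly returns a
    return lambda unit=(): a

arrows = {a: arrow_from_unit(a) for a in A}
assert {f() for f in arrows.values()} == A   # the arrows cover all elements
assert len(arrows) == len(A)                 # one arrow per element
```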
What about functions returning
void (C++) or
Unit (Haskell, Scala)? Usually these functions have side effects, but if they are pure, what they are doing is mapping the elements of a set A to a singleton; so all elements of A are mapped to the same value. Let's see a few examples:
    fInt :: Integer -> ()
    fInt x = ()
The declaration using
_ means it does not matter what argument you pass in. Since the argument type doesn't matter either, you can define the function above in a more generic way:
    unit :: a -> ()
    unit _ = ()
It won't matter what type you pass to this function; it will always be mapped to
Unit. Here is the Scala equivalent:
def unit[T](a:T):Unit = ()
The next logical type to see is a set with two elements, which corresponds to
bool in C++,
Bool in Haskell and
Boolean in Scala. Functions into booleans are called predicates; examples of such functions are
isDigit, isLower, isLetter and so on.
Challenges
Now I want to share with you two of the challenges Bartosz proposes on his site that I solved. Please consider that they might be wrong or could be improved; I would like to hear your take on these challenges, so please comment below. You can see the complete list of challenges on Bartosz's website (linked in the references); I've only solved #1 and #6.
- Challenge #1
Here is what I've done; I tried to do it with an immutable Map, but couldn't get it to work:

    import scala.collection.mutable

    case class Memoize[A, B](f: A => B) {
      private[this] val values: mutable.Map[A, B] = mutable.Map.empty
      def apply(x: A) = values.getOrElseUpdate(x, f(x))
    }
you can test it with:
    def f(a: Int) = {
      Thread.sleep(5000)
      a * a
    }

    val b = Memoize(f)
    b(10) // Takes 5 secs
    b(10) // immediate
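For comparison, here is the same memoisation idea as a Python sketch (function names are mine; it assumes, like the Scala version, that the wrapped function is pure):

```python
def memoize(f):
    """Return a memoised wrapper around a pure single-argument function."""
    cache = {}
    def wrapped(x):
        if x not in cache:
            cache[x] = f(x)   # compute once; replay from the cache afterwards
        return cache[x]
    return wrapped

calls = []

def slow_square(n):
    calls.append(n)           # record each real invocation
    return n * n

fast_square = memoize(slow_square)
assert fast_square(10) == 100
assert fast_square(10) == 100
assert calls == [10]          # the underlying function ran only once
```

Memoisation is only safe because the function is pure: for the same input it always returns the same result, so replaying the cached value is indistinguishable from recomputing.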
- Challenge #6
References
Spot a typo?: Help me fix it by contacting me or commenting below!
https://elbauldelprogramador.com/en/scala-category-theory-types/
Styling React Using Sass
What is Sass
Sass is a CSS pre-processor.
Sass files are compiled on the server, and plain CSS is sent to the browser.
You can learn more about Sass in our Sass Tutorial.
Can I use Sass?
If you use
create-react-app in your project, you can easily
install and use Sass in your React projects.
Install Sass by running this command in your terminal:
npm i sass
Now you are ready to include Sass files in your project!
Create a Sass file
Create a Sass file the same way as you create CSS files, but Sass files have the file extension .scss.
In Sass files you can use variables and other Sass functions:
my-sass.scss:
Create a variable to define the color of the text:
$myColor: red;

h1 {
  color: $myColor;
}
Import the Sass file the same way as you imported a CSS file:
index.js:
import React from 'react';
import ReactDOM from 'react-dom/client';
import './my-sass.scss';

const Header = () => {
  return (
    <>
      <h1>Hello Style!</h1>
      <p>Add a little style!</p>
    </>
  );
}

const root = ReactDOM.createRoot(document.getElementById('root'));
root.render(<Header />);
https://www.w3schools.com/react/react_sass_styling.asp
I am using a Rasp Pi 3 Model B, and I want to add a simple analog photoresistor, but I can't choose a GPIO.
Thanks for help guys
Hi @tomislavpisk02 and welcome to the Cayenne Community.
This is because the Photoresistor is an analog sensor, and the Raspberry Pi does not have any Analog pins by default. If you add one of the supported Analog to Digital converter extensions from Add New > Device / Widget > Extensions to your project and wire the photoresistor through that, then you'll see that extension show up in the 'Connectivity' menu and you'll be able to select one of its analog pins.
If you look at our tutorial for wiring a Photoresistor to the Raspberry Pi, we show an example where it is wired to the MCP3008 Analog to Digital convertor.
Thanks for the fast answer, but I can use it with my Raspberry when I simply start a Python program. So data from it can be read without extensions.
I'm kind of curious how you're doing this if you don't mind sharing the code. Is it a simple photoresistor or some sort of digital light sensor? Maybe something like this?
Regardless, if you can access the data on your Pi's command line, then you can definitely pass it into Cayenne if you remove it from your dashboard as a 'Raspberry Pi' device and re-connect it to Cayenne as an MQTT device using our Python MQTT client.
This way, you can pass any generic sensor data into Cayenne and a widget will automatically be created for it - (rather than the process of adding and configuring the Photoresistor widget to your Cayenne dashboard).
For example, you can use the Python MQTT client mentioned above, and so on.
I will try your way when I come home, but this is the code; it is really simple, as I said:
import RPi.GPIO as GPIO, time, os

DEBUG = 1
GPIO.setmode(GPIO.BCM)

def RCtime(RCpin):
    reading = 0
    # Discharge the capacitor by driving the pin low
    GPIO.setup(RCpin, GPIO.OUT)
    GPIO.output(RCpin, GPIO.LOW)
    time.sleep(0.1)
    # Switch to input and count loop iterations until the pin reads HIGH
    GPIO.setup(RCpin, GPIO.IN)
    while GPIO.input(RCpin) == GPIO.LOW:
        reading += 1
    return reading

while True:
    print(RCtime(18))  # Read RC timing using pin #18
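That loop is the classic RC-timing trick: the photoresistor charges a capacitor, and counting iterations until the pin reads HIGH yields a number proportional to the resistance (and hence to darkness). Here is a pure-Python simulation of the idea, with no GPIO required (the component values are made up for illustration):

```python
def simulated_rc_count(resistance_ohms, capacitance_f=1e-6,
                       threshold_v=1.3, supply_v=3.3, dt=1e-5):
    """Count fixed time steps until the capacitor voltage crosses the
    input pin's logic threshold. Bigger R -> slower charge -> bigger count."""
    v = 0.0
    count = 0
    while v < threshold_v:
        # Euler step of RC charging: dv/dt = (V_supply - v) / (R * C)
        v += (supply_v - v) / (resistance_ohms * capacitance_f) * dt
        count += 1
    return count

bright = simulated_rc_count(1_000)     # low resistance in bright light
dark = simulated_rc_count(100_000)     # high resistance in the dark
assert dark > bright
```

So the raw count is not a calibrated light level, just a monotonic proxy for resistance, which is why the real code simply prints the number.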
It is possible to wire a luminosity sensor to the raspberry pi. I have done it a couple of times I was following this publication:
http://community.mydevices.com/t/i-cant-add-simple-photoresistor/3625
National Pension Scheme (NPS) as a retirement planning tool has been gaining traction since it was thrown open to all classes of investors in July 2009.
National Pension Scheme (NPS) as a retirement planning tool has been gaining traction since it was thrown open to all classes of investors in July 2009. In the previous article on this subject, we looked at the returns generated by the NPS in Tier I option since July 2009 till June 2018.
We had considered the case of a 40-year-old investor in 2009 (current age 49), who invests Rs 25,000 every July and January (starting July 2009), in NPS Tier 1 option, managed by one of the biggest pension fund managers in India with moderate (auto) asset allocation mode. And this was compared with the investments of similar nature in large-cap, mid-cap and an equity-linked savings scheme (ELSS) for the same period.
It was observed that in the period 2009-18, a total investment of Rs 4.5 lakh in NPS had grown to Rs 7.14 lakh (touching a high of Rs 7.23 lakh), registering an internal rate of return (IRR) of 9.6%. The overall volatility, however, remained quite benign: a total of four drawdowns greater than 5% and only one instance of a drawdown greater than 10%.
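As a rough cross-check of that IRR figure, the internal rate of return of a semi-annual SIP can be computed numerically. This sketch approximates the cash-flow dates (18 instalments of Rs 25,000 from July 2009, redeemed at Rs 7.14 lakh in June 2018), so it will land in the right ballpark rather than reproduce the article's 9.6% exactly:

```python
def npv(rate, flows):
    """Net present value of (time_in_years, amount) flows; negatives are investments."""
    return sum(amount / (1.0 + rate) ** t for t, amount in flows)

def irr(flows, lo=0.0, hi=1.0, tol=1e-7):
    """Bisection search for the rate where NPV crosses zero
    (NPV is decreasing in the rate for this flow pattern)."""
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if npv(mid, flows) > 0:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

flows = [(i * 0.5, -25_000) for i in range(18)]   # Jul 2009 ... Jan 2018
flows.append((8.96, 714_000))                     # redemption value, late Jun 2018
rate = irr(flows)
assert 0.05 < rate < 0.15                         # in the ballpark of the quoted 9.6%
```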
Analysis from 2014 to 2018
Further to the above analysis, we also studied the same approach to investments but in a different period: January 1, 2014 to June 25, 2018. In this study, we again take an investor of age 45 (as of January 1, 2014), investing Rs 25,000 every July and January (starting January 2014) in NPS Tier 1 option with moderate (auto) asset allocation mode. The total investment of Rs 2.25 lakh had the value of Rs 2.78 lakh as of June 25, 2018, with IRR at 8.66%. The interesting observation in this case was the absence of volatility.
The portfolio never dipped below 5% in this period, with the maximum drawdown limited to 4.56%. The story is a bit different for a mutual fund allocation portfolio: the total investment of Rs 2.25 lakh grew at the rate of 10.33% to Rs 2.89 lakh, while hitting a peak of Rs 2.93 lakh.
NPS less volatile
There have been four instances of drawdowns greater than 5%, but not once did the portfolio cross the 10% drawdown mark. Maximum drawdown registered was 6.97%. The volatility performance was comparable to that of an NPS asset allocation (only slightly volatile than NPS), but the returns delivered were superior.
In case of SIP investments in large-cap, mid-cap and ELSS, return expectations continue to remain high, but with added volatility. While the IRR for large-cap and mid-cap were 12.37% and 15.82%, respectively, ELSS investment registered 9.92%, not even at par with the MF asset allocation, which delivered 10.33% IRR in the period considered. In terms of maximum drawdowns as well, large-cap and mid-cap were at similar levels, dropping in value by a maximum of 13.6%.
It is observed that NPS continues to remain least volatile, but nonetheless, wealth creation is also subdued. The MF asset allocation strategy has increased volatility as compared to the NPS, but only marginally (NPS has highest drawdown of 4.56% whereas MF allocation has 6.97%). The IRR for the MF allocation is 10.33%, as compared to 8.66% for that of NPS.
Higher returns from pure equity
Pure equity allocations have better returns, in the range of 12.37-15.82% (excluding ELSS, which returned only 9.92%), but also increased volatility: the maximum drawdown was 13.6%, compared to 4.56% for NPS. So, as an investor, if you are looking for lower volatility and standard deviation in your investment portfolio, NPS drives home the point.
https://www.financialexpress.com/money/sip-investments-in-elss-more-volatile-than-nps-in-long-term/1273004/
How to Make Your Fortune at Cards
Introduction: How to Make Your Fortune at Cards
So how does an honest guy or gal, who hasn't found an heiress or a sugar daddy and can't face doing five to ten in the joint, make their fortune? Well, this instructable shows how, with a little bit of ingenuity and a tiny amount of work, you can make up your own card game and sell lots of packs of it to make big money. It's not that difficult; sell a million packs and BINGO, before you know it you'll be a multimillionaire, and despite the odds of that being slim, they are millions of times better than the odds of getting hitched to the ex-Mrs McCartney or pulling off a successful bank heist.
And if you doubt me, well, I've done it, so I know what to do and what not to do, and you can see the result of my labours (and what could be similar to yours) at dadcando - Plop Trumps. This instructable might just be the thing that sets you off down the path of a successful entrepreneur... I think there's room for a few more millionaires, don't you?
All you will need is:
- An idea
- Some start up cash, can be as little at $40, but more is better, up to $20,000
- Good blagging skills
- Digital camera
- Some design skills (but not much)
- Computer drawing package and photograph manipulation applications
- About 3 to 6 months
- Courage
Step 1: Have the Idea
Three little words... have the idea... probably the biggest stumbling block that you're going to have to overcome.
But it's not as hard as you might think, given that one of the world's greatest inventors, Edison said: "Genius is one percent inspiration and ninety-nine percent perspiration", and he was the guy that came up with the light bulb, which as we all know has been very useful as the universal signal for "having an idea" amongst other things, so it's simple maths, if the idea is only 1 percent of the problem, then coming up with the idea shouldn't take long. Ideas are all around you, it's picking the right one that counts.
If you want to make a new card game, play loads of different card games. Ask people that you play with what they like about the particular game you are playing, and what they don't like about it. Ask your friends, ask kids, ask your kids (if you have them). Listen to what people say and think about it. When I was trying to come up with a new game to sell on dadcando, I originally had the idea to make a new type of Trumps game, and because Top Trumps is famous in the UK, I was going to call mine Pop Trumps and make it about famous dads. I was talking to my kids about it and they said it was a bit lame. The 11-year-old said,
"Why don't you call it Plop Trumps and make it about poo."
And there it was, a brilliant idea thrown out by the creative mind of a child.
Plop Trumps is perfect as an idea because it has what experts call the Anchor and the Twist. Niche, funky products (and in fact most new products) work when they are recognisable but at the same time do something clever or new. People like what they know, but they need a new spin on it to be really excited. In the UK, and perhaps a few other countries, everyone knows Trump card games and the leading brand is Top Trumps, so a parody of that brand and the whole genre is bound to be fun and interesting, especially if it tackles a fascinating subject that is vaguely taboo.
So there we have it, simple, recognisable but different... recognising the idea was the key, it will be the same for you.
Ideas can come at any time and in all shapes and sizes and can strike you at any time, here's a picture of me having an idea for a new board game called "Capsize" where you have to get round the course capsizing as many times as possible in what looks like perfect conditions. (All new ideas should be ecologically friendly ones).
It pays to have a small notebook at hand to write ideas down, so that you can capture them. The act of writing them down makes it easier to process and move on to the next idea, or build on that one, rather than having to keep remembering the first idea.
NOTE of CAUTION:
Be careful though of ideas that only you think are brilliant but nobody else does.
You'd be surprised how many people design a new product that only they and a handful of others actually want or need. Sometimes budding entrepreneurs spend thousands and end up selling only one or two products. It is difficult, because truly revolutionary products and games have no real precedent and so it can prove very difficult to accurately gauge the potential success of a product. One way to do this is to consider other similar things that people like and buy and think of the reasons why they buy them. If your idea fits with these then it has a better chance than one that doesn't.
Step 2: Make Sure You Can Do What Is Needed to Make the Idea a Reality
For me and the creation of Plop Trumps, there was one major potential block to it all coming out... photographing the subject matter... poo.
Our equipment might let us down, the subject matter might not look any good, or we might not be able to find enough examples of poo as we needed.
For starters the picture on a typical trump card deck is quite small so almost any digital camera these days should be able to take an acceptable picture. I have a compact 8 Mpix one, so the pictures looked like they would be more than good enough for what we needed.
Could we take enough photos of poo and make them look good enough to work as a set of cards? We did a little bit of research and found that unlike standard playing cards, Trump card games have between 36 to 48 cards per pack. We reckoned about 40 pictures would do it, so the first test was to take a picture and see if the results were good enough to publish.
Luckily we have a pet leopard gecko that produces a rather benign, dry and inoffensive poo. We got a piece of this, put it on a little sand from its cage and did a trial run.
Here we are taking the first picture, and the result (this is a big file). Eewww! You can see bits of his locust lunch in there... eh up, that's another poo we could photograph... locust poo!
Part of testing the idea is telling a few friends. Be careful: if you have an idea that you think can be copied, or should be patented, then telling a whole load of people opens you up to being copied and makes it impossible to patent (one of the criteria necessary to meet when making a patent application is that the idea must not have been told to anyone, i.e. NOT MADE PUBLIC). Still, it is a good idea to test out what you are thinking a bit, just in case you have missed some vital thing that stops you doing what you want to.
As part of the testing I phoned up the company who make the leading brand of Trump card games, Top Trumps, and asked their marketing director a few questions.
I told him that I had an idea for a new trump card game and asked was he interested. He said no, they weren't interested, and it was not possible that I had a new idea that they hadn't thought of already. I then asked if he minded other people doing trump card games (I personally believed that it was difficult, if not impossible, for him to stop people making trump card games, as many types exist in the world and such games were in existence before his company made Top Trumps the leading brand; I just wanted to know what his policy might be). Of course, at this point I did not reveal the exact idea. I quoted a couple of competitive examples. Again he replied in the negative. They understood that other trump card games existed and seemed OK with it.
Great, now all I had to do was take another 39 pictures of poo and I would be ready to clean up.
Step 3: Start With the Easy Things
Now begins the 99 percent of the effort, so you may as well get going on the easy bits first.
With my kids and any of my friends who weren't too grossed out to talk about it we brainstormed a list of animals to act as a guide for all the poos we would need.
At the same time we started to discuss the criteria: what sort of things would be easy to categorise poos under, and what sort of things would make the game fun to play. With trump-type games you compare category statistics of cards one at a time and see who wins. No card should be able to be beaten on everything, because whoever holds that card will always lose; by the same token, no card must be able to win on everything, because the person who holds that card will always win.
A good game has millions of permutations so that each time it is played it can be played differently. As our list of possible poos grew, I started to think about the challenge of photographing poos of wild or exotic animals, either found a long way away from where I lived or otherwise in a zoo. Given that a zoo for humans is the reverse of a prison, and the inmates are either rare or dangerous, I knew that trying to blag my way into one of them was going to be hard. I started with the easy poos.
Locust (the live gecko food) and then cricket (a bit small but still gecko food, so we had plenty of them), dog (loads of that about), horse (my partner has one), worm (loads of those all over the lawn and the flower bed), rabbit, sheep, cow, hamster and rat quickly followed, and before I knew it I was one quarter of the way towards my goal.
As I started taking pictures, I also started researching some factoids. Great trump card games not only give you the criteria to judge each card, but also give you a few snippets of information about the subject, so I started looking things up that related to the animals in question and where possible their poos. I bought a few books on the subject to see if I could get any facts and figures from them, but discovered that despite living in a world almost covered in poo, there is very little written about it.
I also drew up a list of definitive criteria so that I could measure each poo against them when I was taking the photographs. Some trump games seem to assume that more criteria are better, but for a good game about six criteria seems to be the right amount. The criteria we all agreed on were:
Frequency, length, width, smelliness, hardness and yuk factor.
For your game you will think of others no doubt.
Step 4: Work Out Your Manufacturing Route
This is where the whole project gets interesting. A very important part of any new product is the market, and establishing the market need is vital to the success of any new product. Apart from the market need, price is also a big factor in determining success. However low price is NOT as big a factor as you might think. People want value which does not translate as cheap.
Look at the prices of other similar products. In this case, packs of cards range in price between nothing (free promotional giveaways) and a few tens of dollars (very exclusive sets for collectors), but within our market prices range between $4 and $12, which includes nice standard playing cards at the bottom end and quality tarot cards at the top end. In the UK the price break comes at about five pounds (£5), and that means that a £4.95 price sounds like it will really fly, especially if the idea is strong.
Now the only way to really trial the market is with product. You can get one pack of cards made but they will cost you $30 to $50, and will be digitally printed, so not quite as good as the real thing, and much too expensive. Prices quickly drop and if you order 100 or 200 packs then prices come down to a respectable figure and low enough for you to test the market. At this point you are not going to make a killing, just test the market, but then you will only be investing a few hundred dollars.
If you can afford it and you are confident that you can sell a few hundred packs, then you are much better off having a few thousand made, the cost comes down dramatically and you can have the highest quality, properly printed cards manufactured, which will then allow you to sell them to retail stores and on-line merchants and so increase your market reach.
The web is a brilliant tool for matching up entrepreneurs and manufacturers. Before the web, people like you and me had little or no access to any manufacturer that wasn't listed in our local phone book. Now you can (and should) search on the internet and get a number of suppliers to quote for your project. To do this it might be handy to have at least one card designed (but be careful not to give away your idea) which you can use to get quotes.
In most cases the important features are:
- how many cards in each deck
- how many deck you want in your first manufacturing run
- how many colours on the reverse
- how many colours on the face
- do you want one of their standard reverse patterns
Step 5: Take the Rest of the Photos
Problems to overcome:
- You sound, at best weird, and worst vaguely perverted
- The animals are dangerous and possibly expensive
- What's in it for them?
- How can you talk to the right person that can make the decision?
- Explain what you are doing and without saying too much, why you are doing it. If there is a legitimate reason then there is no way they can think they are dealing with a crank.
- In the case of animal poo, all you want to do is take a few photographs of something that is going to be thrown away and so you don't really need to interact with the animals, likely as not they will be in a separate part of the enclosure while the keepers do their business clearing up the animals' business, so that's your opportunity, and you are going to be very quick and unobtrusive.
- Nothing guaranteed is in it for them, but you can say that you will make sure that when you promote your product, their kind assistance will be recognised in any publicity material... everyone likes a bit of cheap or free promotion.
- The secret to cold calling to get to the right person is to do your homework. Find out who the person is that you need to speak to. Look on the web or go to the place beforehand and research it. Be polite and courteous at all times, never leave a voice mail unless you have rung at least 10 times, learn when NO means no, and work out a script before you call so that you actually sound like you know what you are talking about. The best thing is to be able to get your initial point and request across in as few as two or three sentences.
The places I looked for poo were:
- The street
- The garden
- Animal rescue centre
- Pet shop
- Friends houses
- Zoo
Step 6: Design the Packs and Merchandisers
Poo is a potentially dirty subject, so I wanted my packs to be as fun and as clinical as possible. I also wanted them to look nice and be a real gift that you would get a lot of fun and pleasure out of. I thought white corners would look good and a bright (although poo coloured) background would be perfect. I wanted the design to be completely different from Top Trumps, the leading trump card brand, because I didn't want to be in trouble for passing off (that is when you make a product that other people might mistake for another brand).
If you are selling only on the web, to individuals, then you will only need to design packs. If you intend to sell in stores, you will have to design a merchandiser, or box to hold more than one pack. Check out stores near you for the sort of thing that would be acceptable. In most cases the packs are arranged in 12s.
Most dozen boxes are square and flat with the packs arranged in three rows of four, naturally I wanted to be a little different, so I packed them in a stack. I figured that this could go next to a till or with the hanging slot, hang on a display next to other products.
Make sure you check that your idea will fit with any standard point-of-sale (POS) furniture, hangers etc.
Step 7: Sort Out Any Regulatory Issues and Start Spending Your Money
As an entrepreneur, before you start making any money, you have to spend some, and boy are there a load of opportunistic people out there ready to take it off you. It's as if when you want to make something, everyone pricks up their ears and decides that you're fair game for their little entrepreneurial activity in the fee for service department.
You might have already had patent fees, trade mark registration fees and lawyers' bills, but if you have managed to get this far on your own, you'll certainly start spending the folding stuff now. If you intend to sell via any retailer, they will require a barcode. This is a stock-control number that is unique to your product - worldwide! - and allows the retailer to manage their stock. For this you will need to be registered with an authorised barcode number supplier (only one in the UK, called GS1); they'll want $200 off you to register and then $200 for your first year's fees. And then, guess what, you need a special piece of software to convert the barcode number (of which you now have 10,000) into those little bars (it can't be that hard, I was thinking). Well, there are only a few authorised barcode-creation software houses and, guess what, they'll have $200 off you, thanks very much, for their tiny bit of software.
If you're selling via the web, you'll need stickers, envelopes, stamps and of course a verified paypal account or some other form of etailing means of cash collection, which can take time to sort out.
This is also a good time to start finding out what Customs and Excise boxes you have to tick. Remember that your own country's customs and excise departments are (in most cases) paid for by your taxes and are there to help you. They often have helplines and loads of helpful literature. Government in general would like to encourage entrepreneurs and commerce so don't be scared, ask and make sure that you have filled in all the right forms so that you are not doing anything illegal.
Step 8: Place Your Order....!
Yikes, this is it. If you are ordering a fair few cards, this is when you spend your life savings on the crackpot idea you have been boring everyone with for the last few months. It is a thrilling moment. Check and double-check the artwork before you send it off and pay. If you make mistakes and typographic errors (my favourite trick) they can be expensive to repair later. Ideally, get someone else you trust to go over the work one last time before you click send!
Make sure that you have agreed the delivery terms with the manufacturer and where appropriate the acceptance criteria. Make absolutely sure you have agreed the specification of what you are getting. The right number of colours, cellophane wrapping, tear tab, full colour merchandiser, the weight of the card, the exact number of cards and packs you have ordered. If you have selected proofs, check these very carefully, it might be a pain to change something after you've ordered it, but it's better to do this at proof stage than when you have taken delivery.
Step 9: Start Talking to Retailers, Get Your Own Retailing Up and Running
My intention was always to sell via dadcando. The web is a brilliant tool for individuals to sell to individuals. It has opened up commerce to the masses in the way that Gutenberg's movable-type printing press opened up information and book reading over 500 years ago.
Before eBay, if you wanted to make and sell your product you either needed a shop or a wholesaler, and then you had to beg them to take your product and supply it at a price that they wanted to pay. Nowadays, in the big wide world of high street retailing, it's not much different. A typical high street chain will want to tell you how much they think they can sell your product for, and then they will want to buy it off you for half of that, minus the VAT (or other sales taxes) and any other discounts they can dream up: early payment discount (still only paying for the product 60 days after you delivered it to them), placement discount (i.e. you pay them to put it on the shelf), discounts for larger orders, sale or return (i.e. they won't pay you for any product they can't sell), and to cap it all, they may well want you to pay for the shipping to each of their outlets. PHEW! It's a wonder that anyone can afford to do business with the high street retailer.
Enter the web: brilliant, you put an advert up, people find it by searching and then you ship it to them. The buyer pays for shipping or contributes towards it and pays the full price. The buyer usually gets a cheaper deal because he or she is buying direct... everyone wins.
While you are waiting for your cards to arrive, you can be sorting out your eBay advert and phoning any web-based retailers that you think might be able to sell your product.
I was able to get Firebox.com interested in stocking Plop Trumps. They loved the idea and were happy with the price I quoted them. But I also have my own website dadcando, so I was able to make up a nice selling page there as well.
Step 10: Take Publicity Shots of You and the New Product
You need to be able to publicise your new product and to do that you'll need some really nice photographs of it. Playing cards are hard to take photos of because they are thin and if they are white then they can look a bit weak, but with a bit of care you can make them look as nice as they are in real life.
It's time to get your digital camera out again and use all the skills you learned taking pictures of the poo to take some nice pictures of the packs. You'll probably benefit from being able to retouch the images on a package like Photoshop to make sure that they look their best.
Get a friend to take some nice pictures of you holding the cards, or of some kids playing with them. Most newspapers like the human touch; they know that their readers need a person they can engage with.
You will want to write a nice bit of copy to go with the pictures. For me it was tough: poo is so ripe for gags that I really had to stop myself. The cards themselves contain no offensive language, so I had to resist making all the jokes and puns that I wanted to, because that would send the wrong message, and let's face it, I am doing enough to challenge the world with the whole concept in the first place!
Step 11: Get Some Publicity
Publicise your product. I used to work in advertising, but for me and the little guy, advertising is not such a good deal. Think how much it costs to place an advert, and then ask yourself how many units you are actually going to sell off that one advert. Post online, and tell your story wherever you can.
Go to your local press, TV station or newspaper. The local press has a vested interest in writing stories about local people, and it is an obvious thing for them to write about a local entrepreneur making a real product, especially if it is interesting.
Step 12: Mail Out to Happy Customers
This whole instructable has been about making dreams come true. You want to make some cash, so you need to get your ideas out there and realise them as real products. But don't forget your customers: they see a product they like and they want it, and you are fulfilling their dreams of ownership; or, if they are giving the product as a gift, you are making them feel good by supplying something that they think will bring happiness to someone they love.
Make the shipping something that they will enjoy as well.
I was always impressed by Firebox.com and the way that they included a free pack of sweets with every order they shipped. Every time I ordered and received a parcel from them, I felt special because they had thought to give me a little gift; it made me feel loved as a customer. Because my product is quite cheap, I couldn't afford to put sweets in with every parcel, so I thought of something else that could make the parcel special. I have to print an invoice with every pack, so instead of making this boring I printed a special project on each one: if the recipient opened up the invoice label they would find a little fold-up origami shirt inside, with instructions. The shirt was designed to be like a zoo keeper's shirt, but the badge on the pocket says Poo Keeper.
Take great care when shipping to make sure that the customer gets their stuff quickly and it is in good condition when they get it.
Remember if you build a brand then the next product you sell will be even more successful because you will have all those happy customers from your first time round, ready to come and buy things from you.
Step 13: Make Your Fortune
Well, maybe. Get busy selling stuff that you know will make people smile, and maybe, just maybe, it will pay the bills; and have a huge amount of fun doing it.
Becoming an entrepreneur is easier than you think these days, so what are you waiting for?
If you want to sample the results, you can get your own pack of Plop Trumps here; hopefully it will inspire you to go out and make your fortune on the cards.
Good luck.
your a sick man.
surely you mean you're a sick man... and no not really, just a scientist with a childish sense of humour.
actually im sorry for what i said. i meant a sick idea for top trumps. childish... i dont know, im a kid right now, but i dont think about feces...
No worries. My choice to make and sell packs of cards with animal poo on them so I guess I've got take whatever comes my way on the chin!
Can't believe I only just saw this. We should have gotten you involved in the instructables top trumps! (It's not too late if you can think of anything to do to help out)
what a sick and weird idea for top trumps. couldnt think of anything better, or do you just love feces?
What's sick about it? Kids are fascinated by it. For me it was just building on an idea to sell a different and innovative product in a genre where everyone thought that it had all been done. In the first place that stocked it (firebox.com) it was their best-selling product at Christmas, so clearly there are lots of people who find it a bit of fun, quite scientific and a tiny bit reactionary... all good things, but I agree, not everyone's cup of tea.
The chances of winning the lottery (excluding the bonus ball) are 1 in 40,000,000. The chances of dying on the spot in the next hour add up to 1 in 9,000,000. So the funny thing is that if you were holding your lottery ticket in front of the TV an hour before the numbers were drawn, you would be more likely to die before the results than to have the winning numbers!
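Taking the commenter's figures above at face value (the 1-in-40,000,000 and 1-in-9,000,000 odds are theirs, not verified here), the comparison checks out in a couple of lines of Python:

```python
# The commenter's odds, taken at face value
p_win = 1 / 40_000_000   # winning the lottery (excluding the bonus ball)
p_die = 1 / 9_000_000    # dying on the spot in the next hour

print(p_die > p_win)         # True: dying is the likelier event
print(round(p_die / p_win))  # 4: roughly four times likelier
```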
I went on the website and I think it's very groovy, but do you think Konami would publish a card game? Konami is the company that makes Yu-Gi-Oh cards.
What program did you use to design the cards?
I designed them initially in Macromedia Freehand, which is a similar programme to Adobe Illustrator, but that was only for the basic type; the bulk of the design was done in Adobe Photoshop. In reality it would have been better to have done it all in Illustrator.
a nice picture, i like it
ooohhh... it's pooo!!
poooooooooo
indeeeeeeed
indubedibly
to quote chris from family guy: "brown is the colour of poo!"
Hello, cool idea! May I ask what company you used for creating the cards, or rather whether there IS a company specializing in making card games, or if you just used any print shop? Thanks, /Kristian
yo i wus reading ur thingy and i got a great idea myself ...since i live in ireland and good skateboards are hard 2 come by and you cant get a board that suits ur style im makin my own boards at less dan 1/2 da price jus mail me the wooden bit and a picture ov the design you want and itl be made in less dan a week
sounds like the beginning of a great business. All you need now is a funky sounding web address and a cool website and you'll be away.
This seems like something an adolescent or immature teenager would come up with not an adult. I like it! (too bad I'm not allowed to buy stuff off the web) : (
You're never too old to have a bit of fun... You'll just have to put it on your Christmas list.
I could, or I could put something else (ie, PS3, X-box 360) ha!
Yes absolutely I go with that, but then I didn't say Plop Trumps should be the ONLY thing on your list to santa
Have you ever seen a cassowary poo? An endangered North Queensland bird that is a vector for many rainforest trees. Fast digestive system... awesome results.
Take me a really nice picture of it and I'll include it in the next product that I do, which, you never know, could be Plop Trumps Number 2s... if you know what I mean!
Could do Plop 2 as an add in. Can be a complete game in itself, or can be shuffled in with plop trump original for a doubly fun game of poo identifying, lol.
If you would like to preview cassowary poo... go to or Google up cassowary poo and it's under Austrop about five items down. It actually looks better on the road so I will have to try and get a snap of some there. Regards, Dale
I see what you mean, that is one evil looking poo, perfect... :-p if you are able to get a good defined picture then great. BUT be careful taking pictures on the road!
Will do..have to warn you that some items such as Quandong seeds are incompletely digested but this adds to the rich fabric of texture. Hope the neighbours aren't watching while I photograph. Cassowary
Top stuff, that would be brilliant, nice and sharp now... and yes, I got some very funny looks. When questioned, all I said was, well, I am doing this piece for National Geographic Kids, and that shuts up most people! My partner's neighbours asked her if I was a wildlife photographer because they saw me crouching down in the field taking pictures of, as they told her, "a vole or something..." As if! She said, no, he's just taking pictures of rat poo for a pack of cards he's designing. There's not much you can say to that either!
That's a cool idea and takes a lot of courage. With cards you can also bet with your friends... haha, take a look at this... really cool bet
Win a wine glass bet
Excellent idea, thanks for passing it on.
One question, though.
What are blagging skills? lol
Blag is British, or more probably London/cockney, slang for conning or scamming, although in common usage it implies a less aggressive approach than either of those. If you've seen any Guy Ritchie British mobster films, a blag is usually a robbery. If you like that sort of thing then I recommend "Lock, Stock and Two Smoking Barrels" and the much better acted (but not so critically acclaimed), and very funny, "Snatch", which by its very name would be a blag.
Cool. Nice to learn something new. I had a friend who studied cockney for a character he played in our theater troupe. He never used that term though. Thanks for the honest answer. I actually thought it was a typo for blog, as I know some folks have used blogs as a means to spread the word about themselves or their product/ideas.
I'm not a true cockney (a true cockney has to be born within the sound of the Bow Bells in East London), but I was born in East London and lived there for a bit, so I know a little bit of the slang. Due to that, and the media generally, I guess some words and phrases have just moved into common parlance. I went to see the new Guy Ritchie film RocknRolla, and I must say, once you get over the moderate violence, the film was very funny and about as East End as it gets. I love language and its flexibility for various needs. Every country has it, but I guess cockney (the real variety, not the Mary Poppins kind) is the closest thing us Brits have to a no-messing subculture nostalgia trip.
I didn't figure you were, by the way you referred to it. But I still appreciate the ethnic knowledge. Always like to learn things about other people and their cultures/beliefs. It makes the world seem more real to me. Even if it is just a bit of slang. lol
What a brilliant idea. My two boys like the Top Trumps games, but this novel twist will give them that extra bit of kudos. Poo is always fascinating! (Pack on order.)
Thx, it'll be in the post first thing tomorrow morning
. . . and safely received. Many thanks - the boys love it.
How about a *SUPER DELUXE* version with a scratch 'n' sniff feature ;¬)
Yes, it could be done. If it is successful I might certainly consider Plop Trumps No. 2s... don't laugh, I mean it. Glad you got them safe and sound.
Thx for your nice comments. Well, I encouraged my son to do sketches and to come up with the criteria and the list of animals etc, but I was a designer for 10 years so I had a head start with the design and layout. The key I have found (and it's true with all my stuff on dadcando) is that it is hard to get the right balance between instructing my children by showing them how to do things and letting them make mistakes and experiment by actually doing it themselves but accepting a lesser outcome. It's a fine line to walk because I want them to see what's possible and end up with a result of which they feel really proud and ALSO for which they feel they had a significant amount of input. In cases like this, embodying their ideas pulls off the trick neatly. In this case my son suggested and designed icons for each of the attributes. I was going to re-draw them, but with the factoid we couldn't fit them in, so I drew up an icon for the carnivore/herbivore bit at the top of the card and did a trade on that, so we used his idea but in a different way that fitted better with the cards.
BTW the picture of William and me taking a picture of the gecko poo was taken by my eldest son and was used as a press shot, so he was proud of that even though he had had no real involvement with the rest of the project.
As for the program, I used Macromedia Freehand (similar to Adobe Illustrator) although there are other free versions available for the line work and Adobe Photoshop for the bitmap work, which in this case was mostly all of it.
BRILLIANT! I've gone straight to your site and ordered a pack. Wish I'd thought of it! :)
thx, hope you like them, everyone I show them to thinks they are really fun, so you should be ok.
Have you ever heard of the german saying "aus Scheiße Gold machen"? It means "turning poo (s***) into gold (money)", which is quite exactly what you did. Amazing!
Thx, I have never heard the German version, but that makes sense, we have a similar saying in the UK which goes something like... "Where there's muck there's brass" which is typically a northern England saying, hence the use of the word brass for money.
You are the best, Kaptin! Not sure if my girls would like this or not. They are really into animals, but poo might be a bit much. They would either be totally grossed out or they would laugh so hard they'd pee their pants. However, that is not the point. Nice instructable on getting a product to market. Getting ready for the new Harry Potter movie? Only a few months away. -David
Thx, nice of you to say. But i think as far as the HP movie is concerned, you'll have to wait till next year. I had heard that it is being put off till next summer to maximise box office take... as if they need to.
Interesting idea! How many have sold? :O
in the first two weeks since the PR broke, enough to cover half my costs, but I am hoping to sell a load more in the run up to Christmas and I have been told that a TV programme is interested, which should boost sales.
This page is a snapshot from the LWG issues list, see the Library Active Issues List for more information and the meaning of New status.
Section: 17.10 [support.initlist], 23.7 [iterator.range], 99 [iterator.container] Status: New Submitter: Richard Smith Opened: 2014-11-11 Last modified: 2016-02-10
Priority: 3
View other active issues in [support.initlist].
View all other issues in [support.initlist].
View all issues with New status.
Discussion:
These sections define helper functions, some of which apply to initializer_list<T>. And they're available if you include one of a long list of header files, many of which include <initializer_list>. But they are not available if you include <initializer_list>. This seems very odd.
#include <initializer_list>
auto x = {1, 2, 3};
const int *p = data(x); // error, undeclared

#include <vector>
const int *q = data(x); // ok
Proposed resolution:
When creating URLs in an applet I experience 5-second delays. This is because the JVM is attempting to do a DNS lookup of my web proxy, but my web proxy doesn't have a DNS entry. This is a fairly typical setup for corporate web proxies.
The problem is in sun.net.www.HttpClient, which makes a call to InetSocketAddress.getHostName().
This is a new issue that has appeared in 1.5.0. I can't see any reason why the JVM would need to resolve the name of the proxy, so I think this is a bug.
STEPS TO FOLLOW TO REPRODUCE THE PROBLEM :
1. Configure your web browser to use an HTTP proxy which you specify by ip and port (not by hostname).
2. Host the applet below on a website accessed through the web proxy.
3. Arrange some method of detecting network lookups.
4. Load the applet in the browser
EXPECTED VERSUS ACTUAL BEHAVIOR :
EXPECTED -
I'd expect the URL connections to be instantaneous. I'd expect no name lookups to take place.
ACTUAL -
Name lookups of the web proxy take place. On an incorrectly configured system this causes the URL connection to take 4 seconds or longer.
REPRODUCIBILITY :
This bug can be reproduced always.
---------- BEGIN SOURCE ----------
import java.applet.Applet;
import java.io.IOException;
import java.net.URL;
import java.net.URLConnection;

public class Minimal extends Applet {
    public void start()
    {
        try {
            for (int i = 0; i < 5; i++) {
                long time_start = System.currentTimeMillis();
                URLConnection uc;
                URL url = new URL("");
                uc = url.openConnection();
                uc.connect();
                long time_end = System.currentTimeMillis();
                System.out.println("Time to open socket (ms): " + (time_end - time_start));
            }
        } catch (IOException e) {
            System.out.println(e);
        }
    }
}
This issue is separate to 5092063 even though it looks similar. I have recently tested the 1.5.0_08 JVM and the issue is still there.
The good news is, thanks to Sun's open source code policy, I can tell you exactly where the bug is: it is in sun/net/www/HttpClient.java, in the privilegedOpenServer() method. In here there is a call to server.getHostName() which forces the InetSocketAddress object to resolve its name (if it hasn't done so already).
Backtracking a little... The problem arises when someone uses a Java applet to connect through a web proxy. On some networks the proxy has been specified by IP address but does not resolve into a host name (I reproduce this by using a Perl proxy for an IP address which doesn't have a DNS record on our LAN). I suspect the issue exists on many platforms, but it is particularly acute on Windows since, when a DNS lookup fails, Windows attempts a lookup via NetBIOS (or something similar), which results in a failed lookup taking 4.5 seconds!
The problem is that with the new Proxy code in 1.5, the privilegedOpenServer() method is called whenever an applet makes a connection. It isn't possible to bypass this call, or to bypass or alter the Proxy code, without generating security exceptions.
There are two obvious fixes that could be applied:
1. Fix privilegedOpenServer() so that it doesn't call getHostName(). I see no reason why it should resolve the name; it seems like a bug to me. If the proxy is specified by IP address then only IP addresses should be used for connecting to the proxy.
2. Fix the ProxySelector stuff so that it returns the same InetSocketAddress object each time it is queried. I couldn't find the source for this since some of it is native, but it appears that each time you query the ProxySelector for the proxy details it returns a new InetSocketAddress object. InetSocketAddress objects cache their name once you attempt to look it up, so if the ProxySelector returned the same object each time then the lookup failure would only need to be performed once (I still can't see any reason why it should be performed at all, but this would at least be an acceptable workaround).
SUGGESTED FIX
Up to, and including, JDK 6 there is no method to extract the hostname from an InetSocketAddress without triggering a reverse lookup. However it is possible to use the following code instead of calling getHostName():
// Use getAddress().toString() to avoid reverse lookup
String s = server.getAddress().toString();
int pos = s.indexOf('/');
if (pos == 0) {
    // extract the IP literal
    s = s.substring(1);
} else {
    // extract the hostname
    s = s.substring(0, pos);
}
This will give you the hostname if the address was resolved, or the IP literal if it was not.
A better solution will require a new public API in InetSocketAddress and will have to wait until JDK 7.
WORK AROUND
A workaround that will always fix the timeout issues is to make sure the proxy servers are in the DNS tables.
EVALUATION
Yes, this is similar to 5092063 where a reverse lookup is triggered and can take a long time if the entry is not in the DNS maps.
We need to apply a similar fix (i.e. avoid calling getHostName() unless the reverse lookup is actually necessary).
Q8: Analyze the indeterminate frame given in Figure 6 by the Moment Distribution Method and draw the Shear Force and Bending Moment diagrams. The concentrated load given in the figure should be multiplied by 899 and divided by 45. The UDL given in the figure should be multiplied by 899 and divided by 400. Take EI = constant.
I want to send a response for my form submit from the server to the client, that is, from Python Flask to JavaScript. My JavaScript code is as follows:
document.addEventListener('DOMContentLoaded', function() {
chrome.tabs.getSelected(null, function(tab) {
d = document;
var f = d.createElement('form');
f.action = '';
f.method = 'post';
var i = d.createElement('input');
i.type = 'hidden';
i.name = 'url';
i.value = tab.url;
f.appendChild(i);
d.body.appendChild(f);
f.submit();
});
$(".button").click(function(){
request = new XMLHttpRequest();
request.open("POST","",true);
request.send();
request.addEventListener("readystatechange", processRequest,false);
function processRequest(e)
{
if(request.readyState==4 && request.status == 200)
{
var response = JSON.parse(request.responseText);
a=response.result
alert(a);
}
}
});
},false);
And my Python server code is as follows:
from flask import Flask, flash, redirect, url_for, request, render_template,jsonify
import json
import UrlTest
import trainingSet as ts
app = Flask(__name__)
user=""
s=0
@app.route('/Get Form/',methods = ['POST'])
def GetForm():
request.method == 'POST'
url=request.form['url']
UrlTest.process_test_url(url,'test_features.csv')
s=ts.main_caller('url_features.csv','test_features.csv')
print s
return str(s)
@app.route('/PutValue/',methods = ['POST'])
def PutValue():
request.method == 'POST'
print s
return jsonify(result=s)
if (__name__ == '__main__'):
app.run(debug=True,host='0.0.0.0', use_reloader=False)
I want to send the value of s to the JavaScript client. Please help me send the value of s, and if you can, suggest the complete code in JavaScript and Python.
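One way to hand the value of s back to the browser is to keep it in module-level state and return it as JSON from both routes. The sketch below is illustrative rather than the asker's actual setup: the route names are simplified, Flask is assumed to be installed, and a trivial check stands in for the real URL classifier.

```python
from flask import Flask, request, jsonify

app = Flask(__name__)

# Module-level state shared by both routes. Fine for a single-user
# demo; use a session or a database for anything multi-user.
state = {"s": 0}

@app.route("/getform", methods=["POST"])
def get_form():
    url = request.form.get("url", "")
    # The real code would call the phishing classifier here;
    # a placeholder check stands in for it.
    state["s"] = 1 if url.startswith("http") else 0
    return jsonify(result=state["s"])

@app.route("/putvalue", methods=["POST"])
def put_value():
    # jsonify(...) produces the JSON body that the JavaScript side
    # reads via JSON.parse(request.responseText).result
    return jsonify(result=state["s"])
```

On the JavaScript side, the readystatechange handler from the question would then read response.result as before; registering the handler before calling send() avoids missing a fast response.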
Would someone let me know how to validate JSON data in Python? There are so many modules available to validate XML files, but I didn't find any good module to validate JSON data.
After searching on the internet I came across the json module; however, it only converts the JSON data to Python objects. That's good, but the problem comes when the JSON response is very large.
Is there any module through which I can work with a JSON file in a DOM or object-oriented way (i.e. data.key)?
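With only the standard library, json.loads already acts as a validator, since it raises json.JSONDecodeError on malformed input, and its object_hook parameter can give the attribute-style data.key access asked about. A small sketch:

```python
import json
from types import SimpleNamespace

raw = '{"user": {"name": "alice", "age": 30}}'

# json.loads raises json.JSONDecodeError on malformed input,
# which already serves as a validity check.
data = json.loads(raw, object_hook=lambda d: SimpleNamespace(**d))

# Attribute-style access instead of data["user"]["name"]
print(data.user.name)  # alice
print(data.user.age)   # 30
```

For very large responses an incremental parser such as the third-party ijson package is the usual answer, but the stdlib approach above covers validation and object-style access.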
How can you detect if a key is duplicated in a JSON file? Example:
{
"something": [...],
...
"something": [...]
}
I have a growing JSON file that I edit manually and it might happen that I repeat a key. If this happens, I would like to get notified. Currently the value of the second key silently overwrites the value of the first.
Do you know about a command line JSON validator?
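The standard json module can be made to flag duplicates via its object_pairs_hook parameter, which receives every key/value pair before the dict is built (by default the last value silently wins, as described above). A minimal sketch:

```python
import json

def reject_duplicates(pairs):
    """Build a dict from parsed pairs, raising on a repeated key."""
    seen = {}
    for key, value in pairs:
        if key in seen:
            raise ValueError("duplicate key: %r" % key)
        seen[key] = value
    return seen

doc = '{"a": 1, "b": 2, "a": 3}'
try:
    json.loads(doc, object_pairs_hook=reject_duplicates)
except ValueError as exc:
    print(exc)  # duplicate key: 'a'
```

Wired into a small command-line script, this gives exactly the notification asked for (python -m json.tool only checks syntax; it does not catch duplicate keys).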
Opened 9 years ago
Closed 9 years ago
#3989 closed (duplicate)
Django seems to parse only the addr-spec production of RFC 2822
Description
RFC 2822 defines several productions for the grammar of electronic mail addresses. The mailbox production enables the use of addresses like:

John Doe <john.doe@example.com>

The address within brackets is defined by the addr-spec production, which by itself constitutes a simpler but valid address format (see the ABNF rules and their comments in section 3.4).

When I used a contact form on a web site powered by Django, an address satisfying the mailbox production was not accepted, but one satisfying addr-spec was. I was told on the #django channel that it should be an issue not specific to this site, but to Django itself.
Attachments (2)
Change History (21)
comment:1 Changed 9 years ago by
comment:2 Changed 9 years ago by
comment:3 Changed 9 years ago by
I was under the impression that Django's EmailField was a simple email field that works for most purposes, but wasn't meant to be fully RFC 2822-compliant. It uses a simple regular expression, which is not powerful enough to easily handle the RFC's full grammar. As a quick example, I cannot email axiak@[127.0.0.1], but I should be able to mail axiak@[18.7.21.224] in most cases.
My point is, we can make the Email validation really, really complicated. Do we want to?
comment:4 follow-up: 5 Changed 9 years ago by
Michael: I don't think we want to go completely overboard. Introducing a regular expression similar to the famous one in Friedl's reg-exp book would be an example of going overboard, for example. It would be impossible to debug. The full RFC grammar is a bit of overkill for the practical public Internet.
Covering the common cases is reasonable, though. The example cited in the bug summary might be good to have because it allows cutting and pasting of email addresses and using appropriate titles which may not match the username (or other names in the form). Allowing email to IP addresses seems probably too much for a primarily web-based framework, since the Internet has had DNS for quite a while now. Still, if it can be done easily, it might not hurt to include.
I readily admit that I don't really know in advance where to draw the line on this one, since it could go in many places. It wouldn't be unreasonable to say we stick with what we've got now. That's why I kind of want to see how complex a patch is. If you end up writing a couple of dozen lines of code, it's probably worth thinking about whether that's becoming too complex.
Sorry to be a bit non-specific. Aim for something small and easy to understand.
comment:5 Changed 9 years ago by
Replying to mtredinnick:
[a regular expression similar to the famous one in Friedl's reg-exp book] would be impossible to debug.
What would you have to? Isn't there an existing Python module to do it?
Changed 9 years ago by
Patch to email RE to allow matching "mailbox" production
comment:6 Changed 9 years ago by
The attached patch matches the mailbox production, without Friedl-like thoroughness (addressed for oldforms only; however, it should just be a copy-paste job for newforms). However, I'm not sure it's the whole story: there might be code elsewhere which assumes that the value of the field is a bare addr-spec. Would send_email() need to be changed?
comment:7 Changed 9 years ago by
A couple of comments on the patch:
- Whatever is needed to support newforms also needs to be included. Until we have model-aware validation, we can't completely avoid patching validators.py, so the current patch is partly in the right place.
- Rather than defining a whole bunch of constants that are only used once and then just hang around polluting the namespace, how about using the re.VERBOSE flag and putting the reg-exp back together into a single expression, like it is in the current code? No problems with spreading it out a little bit, but the current patch makes it look like there is more going on than there really is: it takes a bit of study to realise that there are six things going into the one reg-exp and that's the only place they're useful. You can probably keep ADDR_SPEC out as a separate string, because it's used twice. Or maybe there's another way to format things, but it takes more than a couple of seconds to understand what's going on at the moment.
- A documentation update in model-api.txt to give a couple of examples of "valid email field" is now required, since people aren't going to necessarily guess that display names are also allowed.
Patch seems simple enough. Tweak it a little bit and I won't stand in the way.
comment:8 Changed 9 years ago by
Changed component to reflect reality.
comment:9 Changed 9 years ago by
Malcolm, re. your point 2 above, I hear you. I feel that regexes are hard to scan even when one knows what's going on: all those punctuation characters squashed up against each other. So I structured the change in the way that I did because I found it more readable, particularly when one is cross-referencing against the RFC. How about if I just add a

del DOT_ATOM, QUOTED_STRING, DOMAIN, ADDR_SPEC, DISPLAY_NAME

statement after the line where the regex is compiled, to clean up the namespace? I'll add a comment above the section, too, to explain what follows so that it doesn't seem too complicated.
comment:10 Changed 9 years ago by
Vinay, what's the problem with just using re.VERBOSE?
comment:11 Changed 9 years ago by
Nothing especially - though I find it easier and quicker to test regexes with the approach I used, since I can try different combinations of grouping and alternatives without accidentally stepping on the components of the regex.
We don't even need re.VERBOSE, since we can use the feature that a sequence of string literals is concatenated by the compiler into a single string. Assuming we can keep ADDR_SPEC because of DRY, then the re becomes:
ADDR_SPEC = (
    "((?:"
    r"[-!#$%&'*+/=?^_`{}|~0-9A-Z]+(?:\.[-!#$%&'*+/=?^_`{}|~0-9A-Z]+)*"  # dot-atom
    ")|(?:"
    r'"(?:[\001-\010\013\014\016-\037!#-\[\]-\177]|\\[\001-011\013\014\016-\177])*"'  # quoted-string
    "))@("
    r'(?:[A-Z0-9-]+\.)+[A-Z]{2,6}'  # domain
    ")"
)
email_re = re.compile(
    "^(?:" + ADDR_SPEC + ")|(?:\w[\w ]*)<" + ADDR_SPEC + ">$",
    re.IGNORECASE)
If it's generally felt that this is equally readable, then I see no problem going with it.
comment:12 Changed 9 years ago by
Please use re.VERBOSE rather than the string concatenation, because (a) it is much easier to edit triple-quoted multi-line strings than line-length string fragments that cannot contain newlines, and (b) automatic string concatenation like that will eventually be going away from Python, so it saves us a few seconds of porting trouble at a later time if we have an equally useful alternative now.
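For illustration, the addr-spec portion rewritten as a triple-quoted re.VERBOSE pattern might look like this (a sketch only; the display-name alternative would be added the same way, and the actual patch on the ticket may differ):

```python
import re

# In VERBOSE mode, whitespace outside character classes is ignored and
# '#' starts a comment, so each RFC component can be annotated in place.
email_re = re.compile(r"""
    ^
    (?:
        [-!#$%&'*+/=?^_`{}|~0-9A-Z]+(?:\.[-!#$%&'*+/=?^_`{}|~0-9A-Z]+)*  # dot-atom
      | "(?:[\001-\010\013\014\016-\037!#-\[\]-\177]|\\[\001-\011\013\014\016-\177])*"  # quoted-string
    )
    @
    (?:[A-Z0-9-]+\.)+[A-Z]{2,6}  # domain
    $
    """, re.VERBOSE | re.IGNORECASE)
```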
comment:13 Changed 9 years ago by
OK, here it is - the same regex rewritten as a triple-quoted verbose pattern, compiled with re.VERBOSE | re.IGNORECASE.
I'll look at the newforms stuff and model-api.txt and work up a patch.
Changed 9 years ago by
Updated patch with additions to newforms/fields.py and docs/model_api.txt
comment:14 Changed 9 years ago by
comment:15 Changed 9 years ago by
I think this should become a new email field and not built in to the existing field. Usually, in the models where I have an email address, I also have the person's first name and last name too and want the email address to be simply the addr-spec. If the name were included in the email address field, then I would be storing duplicate information and would have to strip out the address part when displaying. But I can certainly see uses for the full mailbox spec too. Different fields for different needs is the way I see this.
comment:16 follow-up: 17 Changed 9 years ago by
Although of course it's conventional to use firstname/lastname as the display name, it's not mandated - for example, a nickname could be used as the display name, or a suffix such as " (Home)" or " (Work)" might be appended to the user's name to form the display name. So the display name is logically a different value from the user's name, even if it has the same value for most users.
For this reason, I don't think having two fields makes sense, though it may be sensible to increase the size of the email field (there's a separate discussion about this on the mailing lists, not related to display name support).
comment:17 Changed 9 years ago by
Yes, I understand that you can use anything for the display name. My point was simply that one might not always want to allow display names, as is the case with every use of the EmailField in code I've written. I do think that allowing for display names would be a welcome feature; I just think that the developer should be able to specify which format is allowable, since each has its valid use cases. I also think that a separate field (i.e. EmailWithDisplayNameField) would be better than adding an option to the existing EmailField.
comment:18 Changed 9 years ago by
The DRY principle, if I'm not misapplying it, seems to lead to the conclusion that an allow_display_name option on EmailField would be the best solution - having the default as False allows the behaviour to remain as it is currently, but it can be modified easily by setting allow_display_name=True where the use case warrants it. With the two-field solution, any future change to the EmailField would also have to be made to the EmailWithoutDisplayNameField. And there's not enough difference in behaviour between the two, IMO, to warrant two classes.
Let's see how simple the patch looks. Should be a reasonable change.
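To make the shape of the option-based API concrete, here is a hypothetical sketch. The class and the simplified addr-spec below are illustrative assumptions, not the actual Django field or the real patch:

```python
import re

# Deliberately simplified addr-spec, for illustration only -- the real
# ticket uses the full RFC 2822 grammar discussed above.
ADDR_SPEC = r"[-\w.+]+@(?:[A-Z0-9-]+\.)+[A-Z]{2,6}"

class EmailField:
    """Sketch of an email field with the proposed allow_display_name flag."""

    def __init__(self, allow_display_name=False):
        if allow_display_name:
            # Accept either a bare addr-spec or "Display Name <addr-spec>".
            pattern = r"^(?:%s)$|^\w[\w ]*<%s>$" % (ADDR_SPEC, ADDR_SPEC)
        else:
            # Default keeps the current behaviour: addr-spec only.
            pattern = r"^%s$" % ADDR_SPEC
        self.email_re = re.compile(pattern, re.IGNORECASE)

    def is_valid(self, value):
        return bool(self.email_re.match(value))
```

The default preserves today's behaviour, so existing code is unaffected; only callers who opt in with allow_display_name=True see the wider grammar.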
https://code.djangoproject.com/ticket/3989
#include '../template.wml'
#include "toc_div.wml"
This is a list of Israeli open-source projects that used to reside on the Hackers-IL wiki.
Note: For Perl projects refer to the Israeli Perl Projects list on the Israeli Perl Mongers web-site. You can also find a list of some more Python Projects at the site of the Israeli Pythoneers.
PHP - a server-side scripting language that is actively developed by Zend and many other volunteers. Zend is an Israeli company.
MOSIX - a clustering-solution for the Linux kernel. Actively developed by Amnon Barak and his team at the Hebrew University. (Note - there may be some licensing issues)
openMosix - a MOSIX-derivative that professes a more open development model than MOSIX. Formerly actively developed by Qlusters and many other contributors.
Nullsoft Scriptable Install System - a tool for creating software installers for Windows programs. Led by Amir Szekely, who is an Israeli. See also its SourceForge “Project of the Month” review.
The Culmus Fonts - a collection of free-as-in-speech Hebrew fonts, by Maxim Iorsh.
“The Open Phishing Database” - an effort to create and maintain an open database of phishing sites, in addition to providing browser extensions which utilise this database. (Appears to be defunct.)
LKVM - The Linux Kernel Virtual Machine - has been developed by Qumranet.
Web Secretary - a program that scans Internet sites and reports when changes are made to them.
Freecell Solver - an ANSI C library and a standalone command line program to solve games of Freecell and similar variants of card Solitaire. Also see this page.
XMMS-Volnorm - a plug-in for XMMS that normalizes the sound volume between the played songs.
XParam - a data serialisation/de-serialisation library for C++.
Hspell - a spell checking program for Hebrew language documents.
Database Super Converter - a cross-database abstraction layer.
MikMod for Java - a player of MOD Files for Java.
Jin - an open source, cross-platform, graphical client for playing chess with other people around the world.
Syscalltrack - a Linux kernel module and accompanying user-land software for tracking system calls. (historical)
ChkTex - a Program for checking the typographical validity of TeX documents.
LibHdate - A small C/C++ library for the Hebrew calendar, dates, holidays, and reading sequence. (Python, Perl and Pascal bindings available)
hocr - A Hebrew character recognition C/C++ library. Command-line, GNOME and Qt graphical user interfaces are available.
Krusader - An advanced twin panel (commander style) file manager for KDE led by Rafi Yanai & Shie Erlich.
FoxyTunes and other Firefox Extensions by Alex Sirota - see also the mozilla.org page and an Ha’aretz article.
rsyncrypto - rsync friendly file encryption. By Lingnu.
PgOleDb - an OLE DB back-end for PostgreSQL. By Lingnu.
Cube/Tesseract - a 4-dimensional hypercube (Tesseract) game that behaves like Rubik's Cube. By Michael Brand and Shachar Shemesh.
Geresh - a simple, multi-lingual console-based text-editor.
Vamos - On demand computing platform, based on Debian. By Hadar Weiss
Webilder - a Flickr wallpaper downloader. By Nadav Samet.
ELF Statifier - a program to create a completely static ELF executable out of an executable which uses shared libraries. By Valery Resnic.
KBDE - a keyboard emulator for x86 Linux. By Valery Resnic.
E-Book Tools - tools for accessing and converting different file formats of Electronic Books. By Ely Levy.
Mazrim (Hebrew for “Streamer”) - a Python-based project that aims to help web surfers listen to Israeli streaming media.
CppCMS - a C++ Web Development Framework - by Artyom Tonkikh.
BiDiTeX - Bi-Directional Support for LaTeX.
Open Text Summarizer - a library and a program for Automatic Text Summarising for many languages. See also the Linux.com feature on “Condensing with Open Text Summarizer”. By Nadav Rotem.
Open Knesset Codebase - The source code behind the site for making the Israeli parliament’s procedures more transparent. (BSD licensed).
hcal - Hebrew Calendar for Gtk+
Hdate applet - Hebrew date applet and desktop calendar using the GNOME toolkit.
Hspell GUI - Hspell graphical user interface using the GNOME toolkit.
Autofw - An automatic Firewall script by Baruch Even.
SpacePong - a cool space game by Shlomi Loubaton
pyFribidi - a simple FriBidi binding for Python.
sshpass - Non interactive ssh password authentication. By Shachar Shemesh.
NetChat - a network version of the “chat” utility. By Shachar Shemesh.
FLMLS4PG - Field Level Multilingual Support for PG.
radio.py - easily listen to radio stations from the command line.
“GNU/Linux Kinneret” - a Knoppix based bootable LiveCD which was translated to Hebrew. (Defunct)
Linbrew - a GNU/Linux distribution that is intended for new Israeli users who are looking for built-in Hebrew support and Israeli localization.
“Kazit” - a Knoppix-derived LiveCD. (defunct).
“Ehad” - a remastering of the Mandriva Linux distribution so it will be suitable for the Israeli audience and fit on one CD. (defunct)
DebianHebrew - a remastering of the Debian GNU/Linux distribution so it will be suitable for the Israeli audience and fit on one CD.
Gentoo Israel - The Gentoo distribution community in Israel. Includes a wiki, forums and other help for Gentoo users.
Xorcom Rapid - Asterisk Installation from a bootable CD.
CentOS Israel - an Israeli portal for CentOS, the Community Enterprise Operating System.
The Better SCM Site - A web site for discussion and advocacy of Version Control and Source Configuration Management systems.
Limon - Free Hebrew-English-Hebrew dictionary web site.
Open Knesset - a site for providing a more accessible interface (and one capable of analysis) to the information in the site of the Knesset, the Israeli parliament.
https://bitbucket.org/shlomif/shlomi-fish-homepage/raw/8a8871137e93bf3529e2ffc81ff921569586cf6e/t2/open-source/resources/israel/list-of-projects/index.html.wml