I've tested Starling content on my Nexus 10 without issues. I'm not sure what that could be.
I think it may be related to the initial stage size vs. the Nexus 10's huge resolution, maybe there is some connection to this thread:
The SWF attributes for width/height of the stage are 480x678 (portrait). To fill the screen width-wise with the content, the 480x678 background .png basically ends up scaled about 3.33x larger. I wonder if there is a potential problem with this, or some correlation to contentScaleFactor technically being above 2 when the content is scaled to fit?
I will test doubling the initial stage size (and background graphic) to see if that helps.
Tried several tests swapping out assets, changing up the stage & scale sizes, etc- but nothing changes. There must be something else going on. Will continue to look into it.
Ultimately I never found out why the code above would cause the display to glitch only on the Nexus 10, but I wonder if it has something to do with attempting to alter Starling's viewPort after creating it at a small/standard-ish size, then forcing it up to an extreme 1600x2560.
I did test against the latest beta AIR SDK (and a few older ones) just in case, but the blank space still appeared only on a Nexus 10. It's definitely possible that I am doing something wrong (or ill-advised) in the code above- but I could not determine what.
For a workaround I'm avoiding any resize of the viewport rectangle, instead I just set it once- at the start- and leave it. This is closer to what the demo/scaffold projects do, and the code below scales the content reliably & without issues on any devices I've tested on (including the Nexus 10).
package
{
    import flash.display.Sprite;
    import flash.geom.Rectangle;

    import screens.SpriteSheetExample;

    import starling.core.Starling;
    import starling.utils.RectangleUtil;
    import starling.utils.ScaleMode;

    [SWF(frameRate="60", backgroundColor="#333333")]
    public class SpriteSheetMobile extends Sprite
    {
        private static const APP_WIDTH:int  = 480;
        private static const APP_HEIGHT:int = 678;

        public var starlingInstance:Starling;

        public function SpriteSheetMobile()
        {
            var viewPort:Rectangle = RectangleUtil.fit(
                new Rectangle(0, 0, APP_WIDTH, APP_HEIGHT),
                new Rectangle(0, 0, stage.fullScreenWidth, stage.fullScreenHeight),
                ScaleMode.SHOW_ALL);

            starlingInstance = new Starling(SpriteSheetExample, stage, viewPort);
            starlingInstance.stage.stageWidth  = APP_WIDTH;
            starlingInstance.stage.stageHeight = APP_HEIGHT;
            starlingInstance.start();
        }
    }
}
Consider taking a look at how the Feathers examples do it.
Thanks Josh, I just checked out some of the examples from Feathers. I noticed the special sauce/extra parameters with the RESIZE listener- definitely seems like a more intentional & efficient way to work with events. Helpful to know and I will keep this (and Feathers) in mind for future projects.
Friso, We had a similar problem on our project, Candlepin [1]. We have a number of items that end up being Quartz jobs that get kicked off via a POST call. Our POST methods return a JobDetail object [2], and we have an Interceptor [3] that on postProcess looks at the returned entity, if it's a JobDetail it returns HTTP.ACCEPTED [4] (202) and a status object that has a url to get status. The HTTP 202 code tells the client we got the request but it's not done yet.
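For illustration, the flow described above can be sketched in plain Java. Everything below (JobDetail, Response, JobStatusInterceptor, the /jobs/ status path) is a simplified hypothetical stand-in, not Candlepin's or RESTEasy's actual API:

```java
// Sketch of the pattern: a resource method returns a job handle, and an
// interceptor turns it into an HTTP 202 response with a status URL the
// client can poll. All class names here are illustrative stand-ins.
class JobDetail {
    final String id;
    JobDetail(String id) { this.id = id; }
}

class Response {
    final int status;
    final Object entity;
    Response(int status, Object entity) { this.status = status; this.entity = entity; }
}

class JobStatusInterceptor {
    // postProcess: if the returned entity is a JobDetail, respond with
    // 202 Accepted and a URL for polling the job status; otherwise pass
    // the entity through unchanged with a 200.
    static Response postProcess(Object entity) {
        if (entity instanceof JobDetail) {
            String statusUrl = "/jobs/" + ((JobDetail) entity).id;
            return new Response(202, statusUrl);
        }
        return new Response(200, entity);
    }
}

public class Main {
    public static void main(String[] args) {
        Response r = JobStatusInterceptor.postProcess(new JobDetail("job-42"));
        System.out.println(r.status + " " + r.entity); // prints "202 /jobs/job-42"
    }
}
```

The 202 tells the client the work was accepted but is not finished; the returned URL is where it checks progress.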
So far we've had pretty good success with this paradigm. I know we looked at the async feature in RESTEasy but I don't remember why we didn't end up using it. Sincerely, jesus

[1] [2] [3] [4]

On Fri, Apr 5, 2013 at 9:34 AM, Friso Vrolijken <friso.vrolij...@caci.nl> wrote:
> Not sure why this isn't showing up in the list, so resending it...
>
> From: Friso Vrolijken
> Sent: Friday, April 5, 2013 13:10
> To: resteasy-users@lists.sourceforge.net
> Subject: RE: [Resteasy-users] [repost]: Long task, short request
>
> Hi Li,
>
> No, I've not tried that, as from the documentation I gather that the
> request will not return (i.e. the client will be kept waiting). One of
> my goals is to free up the client as soon as possible. Please let me
> know if my assumption (that the client needs to keep waiting) is
> incorrect.
>
> Regards,
>
> Friso
>
> From: Weinan Li
> Sent: Friday, April 5, 2013 13:06
> To: Friso Vrolijken
> CC: resteasy-users@lists.sourceforge.net
> Subject: Re: [Resteasy-users] [repost]: Long task, short request
>
> Have you tried the new async feature in servlet 3.0? RESTEasy supports it.
>
> --
> Weinan Li
The exact code I posted above is what I compiled and ran and it ran with no errors
What do you mean? The program runs without error so if I may ask, can I see an example and what will the parenthesis do? I don't mean to sound like a know-it-all, I'm just wondering why I performed...
Well I was up till 1 last night trying to get it working and I think I finally did. It will ask for "C or F" Then if C or F is entered it will ask for the temperature and then do the conversion and...
Alright I got frustrated and decided to do a complete rewrite. This is what I have so far and it finally compiled and ran successfully.
import java.util.*;
public class convert {
public...
Per Wikipedia,
Isn't a bug usually a programmer error and since this is my error it would be considered a bug?
Sorry for replying so late. At the time of my last post I was away from my computer and was working on my phone through a VNC connection to my computer. Anyways Line 17 is
if (String) usrInpt = "C"...
Do you mind giving me an example on how to use the equals() method
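For anyone landing on this thread, here is a minimal, self-contained sketch of the equals() comparison being suggested (usrInpt mirrors the variable name from the thread; the rest is generic):

```java
public class EqualsDemo {
    public static void main(String[] args) {
        // Simulate user input; new String(...) guarantees a distinct object
        // rather than the interned "C" literal.
        String usrInpt = new String("C");

        // == compares object references, so this can be false even when
        // the text is identical.
        System.out.println(usrInpt == "C");            // prints "false"

        // equals() compares the actual characters; this is what you want
        // for string comparison.
        System.out.println(usrInpt.equals("C"));       // prints "true"

        // equalsIgnoreCase() also accepts lowercase input.
        System.out.println("c".equalsIgnoreCase("C")); // prints "true"
    }
}
```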
Here is the full code
import java.util.Scanner;
import java.io.*;
public class converter {
private static final boolean...
I done what you said but now I'm getting errors on the if and the else if statements here is my revised code. hat did I do wrong?
if (String) usrInpt = "C" {
Scanner temp = new...
Hello,
I'm new to java and I am trying to learn it as my first language and eventually move to Programming for Android. To get myself started for practice, I am working on a simple...
Glue alternatives and similar packages
Based on the "Web Frameworks" category
Gin10.0 8.2 Glue VS GinGin is a web framework written in Go! It features a martini-like API with much better performance, up to 40 times faster. If you need performance and good productivity.
Beego10.0 5.3 Glue VS Beegobeego is an open-source, high-performance web framework for the Go programming language.
Iris9.9 8.9 Glue VS IrisA very minimal but flexible and high-performance golang web application framework, providing a robust set of features for building web applications.
go-kit9.9 6.9 Glue VS go-kitA Microservice toolkit with support for service discovery, load balancing, pluggable transports, request tracking, etc.
Echo9.9 7.1 Glue VS EchoA fast and unfancy micro web framework for Go.
httprouter9.8 7.0 Glue VS httprouterA high performance router. Use this and the standard http handlers to form a very high performance web framework.
Revel9.8 0.0 Glue VS RevelA high-productivity web framework for the Go language.
mux9.8 4.5 Glue VS muxA powerful URL router and dispatcher for golang.
negroni9.6 1.3 Glue VS negroniIdiomatic HTTP middleware for Golang.
chi9.6 7.3 Glue VS chiSmall, fast and expressive HTTP router built on net/context.
GoSwagger9.5 8.3 Glue VS GoSwaggerSwagger 2.0 implementation for go
Buffalo9.5 7.6 Glue VS BuffaloBringing the productivity of Rails to Go!
Fiber9.4 9.8 Glue VS FiberAn Express.js inspired web framework build on Fasthttp.
web.go9.3 0.0 Glue VS web.goA simple framework to write webapps in Go.
gqlgen9.3 9.1 Glue VS gqlgengo generate based graphql server library
goa9.2 8.4 Glue VS goaFramework for developing microservices based on the design of Ruby's Praxis.
go-socket.io9.2 5.8 Glue VS go-socket.iosocket.io library for golang, a realtime application framework.
go-json-rest9.1 0.0 Glue VS go-json-restA quick and easy way to setup a RESTful JSON API.
Macaron9.0 5.3 Glue VS MacaronMacaron is a high productive and modular design web framework in Go.
Gizmo9.0 7.9 Glue VS GizmoMicroservice toolkit used by the New York Times.
fasthttprouter8.8 0.0 Glue VS fasthttprouterA high performance router forked from httprouter. The first router fit for fasthttp.
utron8.6 0.0 Glue VS utronA lightweight MVC framework for Go(Golang).
alice8.5 1.6 Glue VS alicePainless middleware chaining for Go.
Tollbooth8.4 5.4 Glue VS TollboothRate limit HTTP request handler.
Faygo8.2 5.4 Glue VS Faygo
CORS Glue VS CORSEasily add CORS capabilities to your API.
gocraft/web8.2 0.0 Glue VS gocraft/webA mux and middleware package in Go.
melody8.2 0.0 Glue VS melodyMinimalist websocket framework for Go
render8.1 2.3 Glue VS renderGo package for easily rendering JSON, XML, and HTML template responses.
pat7.9 0.0 Glue VS patSinatra style pattern muxer for Go’s net/http library, by the author of Sinatra.
Bone7.9 0.0 Glue VS BoneLightning Fast HTTP Multiplexer.
tigertonic7.6 0.0 Glue VS tigertonicA Go framework for building JSON web services inspired by Dropwizard
tango7.6 0.0 Glue VS tangoMicro & pluggable web framework for Go.
REST Layer7.6 2.2 Glue VS REST LayerA framework to build REST/GraphQL API on top of databases with mostly configuration over code.
Goji7.5 0.0 Glue VS GojiGoji is a minimalistic and flexible HTTP request multiplexer with support for net/context.
Limiter7.5 7.7 Glue VS LimiterDead simple rate limit middleware for Go.
go-server-timing7.2 0.0 Glue VS go-server-timingAdd/parse Server-Timing header.
aah7.0 3.2 Glue VS aahA scalable, performant, rapid development Web framework for Go.
rye6.9 0.0 Glue VS ryeTiny Go middleware library (with canned Middlewares) that supports JWT, CORS, Statsd, and Go 1.7 context
xujiajun/gorouter6.9 2.2 Glue VS xujiajun/gorouterA simple and fast HTTP router for Go.
traffic6.8 0.0 Glue VS trafficSinatra inspired regexp/pattern mux and web framework for Go.
neo6.7 0.0 Glue VS neoNeo is minimal and fast Go Web Framework with extremely simple API.
golongpoll6.7 1.3 Glue VS golongpollHTTP longpoll server library that makes web pub-sub simple.
ozzo-routing6.6 3.5 Glue VS ozzo-routingA high-performance HTTP router and Web framework supporting routes with regular expressions. Comes with full support for quickly building a RESTful API application.
httptreemux6.6 4.5 Glue VS httptreemuxHigh-speed, flexible tree-based HTTP router for Go. Inspiration from httprouter.
gongular6.6 0.0 Glue VS gongularA fast Go web framework with input mapping/validation and (DI) Dependency Injection
Goyave6.5 9.4 Glue VS GoyaveFeature-complete web framework aimed at clean code and fast development, with powerful built-in functionalities.
mango6.4 0.0 Glue VS mangoMango is a modular web-application framework for Go, inspired by Rack, and PEP333.
lars6.4 0.0 Glue VS larsIs a lightweight, fast and extensible zero allocation HTTP router for Go used to create customizable frameworks.
Siesta6.3 2.5 Glue VS SiestaComposable framework to write middleware and handlers
README
Glue - Robust Go and Javascript Socket Library
Glue is a real-time bidirectional socket library. It is a clean, robust and efficient alternative to socket.io. This library is designed to connect web browsers with a Go backend in a simple way. It automatically detects supported socket layers and chooses the most suitable one. This library handles automatic reconnections and uses caching to bridge those disconnections. The server implementation is thread-safe and stable. The API is fixed and there won't be any breaking API changes.
Socket layers
Currently two socket layers are supported:
- WebSockets - This is the primary option. They are used if the web browser supports WebSockets as defined by RFC 6455.
- AjaxSockets - This socket layer is used as a fallback mode.
Support
Feel free to contribute to this project. Please check the [TODO](TODO.md) file for more information.
Install
Client
The client JavaScript Glue library is located in [client/dist/glue.js](client/dist/glue.js).
You can use bower to install the client library:
bower install --save glue-socket
Server
Get the source and start hacking.
go get github.com/desertbit/glue
Import it with:
import "github.com/desertbit/glue"
Documentation
Client - JavaScript Library
A simple call to glue() without any options will establish a socket connection to the same host. A glue socket object is returned.
// Create and connect to the server.
// Optionally pass a host string and options.
var socket = glue();
Optional Javascript options which can be passed to Glue:
var host = "";
var opts = {
    // The base URL is appended to the host string. This value has to match with the server value.
    baseURL: "/glue/",

    // Force a socket type.
    // Values: false, "WebSocket", "AjaxSocket"
    forceSocketType: false,

    // Kill the connect attempt after the timeout.
    connectTimeout: 10000,

    // If the connection is idle, ping the server to check if the connection is still alive.
    pingInterval: 35000,
    // Reconnect if the server did not respond with a pong within the timeout.
    pingReconnectTimeout: 5000,

    // Whether to automatically reconnect if the connection was lost.
    reconnect: true,
    reconnectDelay: 1000,
    reconnectDelayMax: 5000,
    // To disable set to 0 (endless).
    reconnectAttempts: 10,

    // Reset the send buffer after the timeout.
    resetSendBufferTimeout: 10000
};

// Create and connect to the server.
// Optionally pass a host string and options.
var socket = glue(host, opts);
The glue socket object has following public methods:
// version returns the glue socket protocol version.
socket.version();

// type returns the currently used socket type as string.
// Either "WebSocket" or "AjaxSocket".
socket.type();

// state returns the current socket state as string.
// Following states are available:
// - "disconnected"
// - "connecting"
// - "reconnecting"
// - "connected"
socket.state();

// socketID returns the socket's ID.
// This is a cryptographically secure pseudorandom number.
socket.socketID();

// send a data string to the server.
// One optional discard callback can be passed.
// It is called if the data could not be sent to the server.
// The data is passed as first argument to the discard callback.
// returns:
//  1 if immediately sent,
//  0 if added to the send queue and
// -1 if discarded.
socket.send(data, discardCallback);

// onMessage sets the function which is triggered as soon as a message is received.
socket.onMessage(f);

// on binds event functions to events.
// This function is equivalent to jQuery's on method syntax.
// Following events are available:
// - "connected"
// - "connecting"
// - "disconnected"
// - "reconnecting"
// - "error"
// - "connect_timeout"
// - "timeout"
// - "discard_send_buffer"
socket.on();

// Reconnect to the server.
// This is ignored if the socket is not disconnected.
// It will reconnect automatically if required.
socket.reconnect();

// close the socket connection.
socket.close();

// channel returns the given channel object specified by name
// to communicate in a separate channel than the default one.
socket.channel(name);
A channel object has following public methods:
// onMessage sets the function which is triggered as soon as a message is received.
c.onMessage(f);

// send a data string to the channel.
// One optional discard callback can be passed.
// It is called if the data could not be sent to the server.
// The data is passed as first argument to the discard callback.
// returns:
//  1 if immediately sent,
//  0 if added to the send queue and
// -1 if discarded.
c.send(data, discardCallback);
Server - Go Library
Check the Documentation at GoDoc.org.
Use a custom HTTP multiplexer
If you choose to use a custom HTTP multiplexer, then it is possible to deactivate the automatic HTTP handler registration of glue.
// Create a new glue server without configuring and starting the HTTP server.
server := glue.NewServer(glue.Options{
    HTTPSocketType: glue.HTTPSocketTypeNone,
})

//...
The glue server implements the ServeHTTP method of the HTTP Handler interface of the http package. Use this to register the glue HTTP handler with a custom multiplexer. Be aware, that the URL of the custom HTTP handler has to match with the glue HTTPHandleURL options string.
Reading data
Data has to be read from the socket and each channel. If you don't require to read data from the socket or a channel, then discard received data with the DiscardRead() method. If received data is not discarded, then the read buffer will block as soon as it is full, which will also block the keep-alive mechanism of the socket. The result would be a closed socket...
// ...

// Discard received data from the main socket channel.
// Hint: Channels have to be discarded separately.
s.DiscardRead()

// ...

// Create a channel.
c := s.Channel("golang")

// Discard received data from a channel.
c.DiscardRead()
Bind custom values to a socket
The socket.Value interface is a placeholder for custom data.
type CustomValues struct {
    Foo string
    Bar int
}

// ...

s.Value = &CustomValues{
    Foo: "Hello World",
    Bar: 900,
}

// ...

v, ok := s.Value.(*CustomValues)
if !ok {
    // Handle error
    return
}
Channels
Channels are separate communication channels from the client to the server of a single socket connections. Multiple separate communication channels can be created:
Server:
// ...

// Create a channel.
c := s.Channel("golang")

// Set the channel on read event function.
c.OnRead(func(data string) {
    // ...
})

// Write to the channel.
c.Write("Hello Gophers!")
Client:
var c = socket.channel("golang");

c.onMessage(function(data) {
    console.log(data);
});

c.send("Hello World");
Broadcasting Messages
With Glue it is easy to broadcast messages to multiple clients. The Glue Server keeps track of all active connected client sessions. You can make use of the server Sockets, GetSocket or OnNewSocket methods to implement broadcasting.
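A minimal sketch of that idea, relying only on the documented Write method. The broadcast helper and the socket stub below are illustrative, not part of Glue; in real code the socket list would come from server.Sockets():

```go
package main

import "fmt"

// writer captures the subset of the socket API the helper needs.
// glue.Socket provides a compatible Write method.
type writer interface {
	Write(data string)
}

// socket is a test stand-in that records everything written to it.
type socket struct {
	id  string
	out []string
}

func (s *socket) Write(data string) {
	s.out = append(s.out, data)
}

// broadcast sends msg to every tracked socket.
func broadcast(sockets []writer, msg string) {
	for _, s := range sockets {
		s.Write(msg)
	}
}

func main() {
	a := &socket{id: "a"}
	b := &socket{id: "b"}
	broadcast([]writer{a, b}, "hello everyone")
	fmt.Println(a.out[0]) // prints "hello everyone"
	fmt.Println(b.out[0]) // prints "hello everyone"
}
```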
Example
This socket library is very straightforward to use. Check the [sample directory](sample) for more examples.
Client
<script>
    // Create and connect to the server.
    // Optionally pass a host string and options.
    var socket = glue();

    socket.onMessage(function(data) {
        console.log("onMessage: " + data);

        // Echo the message back to the server.
        socket.send("echo: " + data);
    });

    socket.on("connected", function() {
        console.log("connected");
    });

    socket.on("connecting", function() {
        console.log("connecting");
    });

    socket.on("disconnected", function() {
        console.log("disconnected");
    });

    socket.on("reconnecting", function() {
        console.log("reconnecting");
    });

    socket.on("error", function(e, msg) {
        console.log("error: " + msg);
    });

    socket.on("connect_timeout", function() {
        console.log("connect_timeout");
    });

    socket.on("timeout", function() {
        console.log("timeout");
    });

    socket.on("discard_send_buffer", function() {
        console.log("some data could not be send and was discarded.");
    });
</script>
Server
Read data from the socket with a read event function. Check the sample directory for other ways of reading data from the socket.
import ( "log" "net/http" "github.com/desertbit/glue" ) func main() { // Create a new glue server. server := glue.NewServer(glue.Options{ HTTPListenAddress: ":8080", }) // Release the glue server on defer. // This will block new incoming connections // and close all current active sockets. defer server.Release() // Set the glue event function to handle new incoming socket connections. server.OnNewSocket(onNewSocket) // Run the glue server. err := server.Run() if err != nil { log.Fatalf("Glue Run: %v", err) } } func onNewSocket(s *glue.Socket) { // Set a function which is triggered as soon as the socket is closed. s.OnClose(func() { log.Printf("socket closed with remote address: %s", s.RemoteAddr()) }) // Set a function which is triggered during each received message. s.OnRead(func(data string) { // Echo the received data back to the client. s.Write(data) }) // Send a welcome string to the client. s.Write("Hello Client") }
Similar Go Projects
- go-socket.io - socket.io library for golang, a realtime application framework. | https://go.libhunt.com/glue-alternatives | CC-MAIN-2020-24 | refinedweb | 2,213 | 52.36 |
How to Build a Lyrics Website with Laravel Scout and Algolia.
By the end of the tutorial, we’ll have a product like this:
Bootstrapping the Application
I assume you already have your development environment up and running. However, If you need a good development environment to get into action right away, you should use Homestead Improved.
Feel free to skip this part if you already have a similar application or you have enough experience to build one relatively quickly.
CRUD Application
The most convenient way to download the pre-built CRUD project is to clone it:
git clone git@github.com:lavary/lyrics-crud.git coolyrics
cd coolyrics
composer install
Setting up the Database
Now, let’s create a MySQL database. The settings below apply to the Homestead Improved environment mentioned above. Change as needed.
mysql -h localhost -u homestead -psecret

mysql> CREATE DATABASE lyrics;
After the database has been created, we make a copy of .env.example (located in our project's root directory) and name it .env. This is where we put our database credentials:
#...
DB_CONNECTION=mysql
DB_HOST=127.0.0.1
DB_PORT=3306
DB_DATABASE=lyrics
DB_USERNAME=root
DB_PASSWORD=password
# ...
Again, apply to your own preferences as needed.
Now, we run the migration command to create the tables:
php artisan migrate
Filling up the Database with Sample Data
A lyrics website contains information about musicians and their work, and, of course, their songs' lyrics. To make a bare minimum data structure, we have created only two Eloquent models for this project, namely Artist and Song. The Artist model has a one-to-many relationship with the Song model. This means each artist can have many songs and each song belongs to an artist in our database.
Before moving forward to the next section, you may go ahead and insert a few records into the database, starting with your favorite artists and then adding a few songs for each.
This is what we have so far:
You can also use the SQL file included with the project files and dump it into your database with the following command:
mysql -h localhost -u {USERNAME} -p{PASSWORD} lyrics < /path/to/the/sql/file
You can also import the file by using your favorite MySQL management application, like Sequel Pro, MySQL Workbench or PHPMyAdmin.
Installing Scout
Let’s continue by installing Scout:
composer require laravel/scout
Then, we add the service provider to $providers in the config/app.php file:
Laravel\Scout\ScoutServiceProvider::class,
Now we need to generate Scout's configuration file using the vendor:publish artisan command:
php artisan vendor:publish --provider="Laravel\Scout\ScoutServiceProvider"
As a result, a configuration file named scout.php is generated inside the config directory. We'll edit this file later.
To make a data model searchable, we need to use the Laravel\Scout\Searchable trait inside the respective model class. That's the Song model in our case:
<?php

namespace App;

use Illuminate\Database\Eloquent\Model;
use Laravel\Scout\Searchable;

class Song extends Model
{
    use Searchable;

    protected $fillable = ['title', 'album', 'lyrics', 'youtube_link'];

    public function artist()
    {
        return $this->belongsTo('App\Artist');
    }
}
Setting up Algolia
As planned, we’ll use Algolia as our search engine API.
First, let’s create an account to obtain our application ID. Scout requires Application ID and Admin API Key to operate. After the registration is complete, we can find our credentials under API Keys in the left menu.
Now, we open the config/scout.php configuration file and put our credentials there:
<?php

'algolia' => [
    'id' => env('ALGOLIA_APP_ID', ''),
    'secret' => env('ALGOLIA_SECRET', ''),
],
It’s a good practice to keep the keys in
.env and load them into
scout.php using the
env() or
getenv() functions.
To use the Algolia’s API, we need to install Algolia’s SDK for PHP, which is also available as a Composer package:
composer require algolia/algoliasearch-client-php
Indexing Our Data
At this point, we need to create our index on Algolia. Each record in the index is a schema-less JSON object (each one represents a record in our database) with a set of attributes that can be used for searching, displaying, ranking and filtering data.
Rather than indexing the whole record, we only need to index the data needed for the above operations. This helps keep our index clean and optimized.
Apart from that, the index is not a relational database, meaning we cannot use complex where clauses or SQL joins when searching through the index. To work around this limitation, we should define a custom structure for our index records. In other words, we should join all the needed tables ahead of time, preparing a customized JSON object before indexing.
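For example, a single denormalized index record for a song might look like this (a purely hypothetical record; field values are placeholders):

```json
{
  "id": 1,
  "title": "Some Song Title",
  "album": "Some Album",
  "lyrics": "First lines of the lyrics...",
  "youtube_link": "https://www.youtube.com/watch?v=...",
  "artist": "Some Artist",
  "photo": "https://example.com/photos/artist.jpg",
  "genres": ["rock", "blues"]
}
```

Note how the artist's name, photo, and genres are flattened into the song record, so no join is needed at query time.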
To do this, we override the toSearchableArray() method in the Song model (this method is added to the class by the Laravel\Scout\Searchable trait). By default, the toSearchableArray() method returns the $this->toArray() output as our index object (when sending the index record to Algolia), whereas we need additional data, like the artist's name, genres and the image URL, which reside in another table, the artists table.
Here’s how we do it:
<?php

namespace App;

use Illuminate\Database\Eloquent\Model;
use Laravel\Scout\Searchable;

class Song extends Model
{
    use Searchable;

    protected $fillable = ['title', 'album', 'lyrics', 'youtube_link'];

    public function toSearchableArray()
    {
        $genres = array_map(function ($item) {
            return trim($item);
        }, explode(',', $this->artist->genres));

        return array_merge($this->toArray(), [
            'artist' => $this->artist->name,
            'photo'  => $this->artist->photo,
            'genres' => $genres,
        ]);
    }

    public function artist()
    {
        return $this->belongsTo('App\Artist');
    }
}
Since the genres field may contain a comma-separated value in our database (it has a simple text field in our CRUD app), we separate genres using explode. Then, we iterate over the results, stripping off any unwanted spaces before and after each part, using the array_map() function:
<?php

// ...

$genres = array_map(function ($item) {
    return trim($item);
}, explode(',', $this->artist->genres));

// ...
Finally, we merge the output of $this->toArray() with our desired attributes, returning the final array.
<?php

// ...

return array_merge($this->toArray(), [
    'artist' => $this->artist->name,
    'photo'  => $this->artist->photo,
    'genres' => $genres,
]);

// ...
Why do we need to index a randomly generated string like the image URL, you may be wondering? We'll get to this shortly.
For the existing records in the database, we can import the index at once with the scout:import artisan command, like this:
php artisan scout:import "App\Song"
When the indexing process is completed, we can see the index on Algolia by going to Indices and choosing our index name from the drop-down menu:
Indexing is not a one-off task. After the initial import, the index needs to be kept in sync with the database. Since we’re using Scout, everything is already taken care of. From now on, any time a record is inserted, updated, or deleted from the database, the respective index record will be affected accordingly – thanks to Laravel’s model observers.
To learn more about how Scout manages the indexing process, have a look at the documentation.
Configuring Algolia
The next thing to do is to configure our Algolia index for optimal operation. These settings can be modified either from the dashboard or programmatically using Algolia’s API.
The most important configuration options are the Searchable attributes and the Custom Ranking Attributes.
The Searchable attributes setting defines which record attributes are used for searching. The order of these attributes also matters as those at the top are considered more important and control the ranking.
The Custom Ranking Attributes option indicates the popularity of each record which also affects the ranking. They can be anything from the number of likes, views, downloads, to comments. That said, we need to include this information in the index.
Algolia comes with plenty of configuration options. Going through each setting is beyond the scope of this tutorial. To learn more about the configuration options, you should have a look at Algolia’s FAQ on configuration and relevance.
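As a sketch, a settings object for this index could look like the following JSON. The attribute choices are illustrative, and desc(views) only makes sense if such a popularity counter is actually added to the index:

```json
{
  "searchableAttributes": ["title", "artist", "album", "lyrics", "genres"],
  "customRanking": ["desc(views)"]
}
```

Attribute order matters in searchableAttributes: matches in title or artist would rank higher than matches buried in the lyrics.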
The Website
The last thing to do is to implement the search interface within our lyrics website. The good thing about Algolia is that it doesn’t restrict us to any interface implementation.
Traditional search implementations tend to have the search logic on the backend. To achieve this, we can use Eloquent to search through the records, using the search() method (provided by the Searchable trait). This method is a bit slower as the search request goes through different levels of abstraction. It can be done like this:
<?php

// ...

use Illuminate\Http\Request;

Route::get('/search', function (Request $request) {
    return App\Song::search($request->search)->get();
});

// ...
The other way is to issue the search requests directly from the user's browser to Algolia's search API, providing a find-as-you-type experience for our users. This method is much faster than the former as there's no intermediary involved. This means every attribute we want in our search results should be present in the index itself.
In this tutorial, we’ll take the second approach. We use Scout for indexing and keeping our index synced with our database, then we use Algolia’s API to do the searching.
By taking the second approach, we will have many options to display the results. We can use AngularJS, Vue.js, or Algolia’s two popular libraries, namely Autocomplete and Instantsearch.js.
For this project, we'll use Instantsearch.js, which is a library of UI widgets based on React that makes creating search interfaces a breeze.
Our lyrics website consists of two pages, the main searching page, and a single page to display the details of a song.
First, let’s create the routes and the controllers for these two pages. Then we’ll create the views.
File: routes/web.php
<?php

//...

Route::get('/', 'LyricsController@search');
Route::get('song/{id}', 'LyricsController@song');

// ...
And the controller:
php artisan make:controller LyricsController
File:
app/Http/Controllers/LyricsController.php
<?php

namespace App\Http\Controllers;

use Illuminate\Http\Request;
use App\Song;

class LyricsController extends Controller
{
    public function search()
    {
        return view('search');
    }

    public function song(Request $request, $id)
    {
        $song = Song::find($id);

        return view('song', compact('song'));
    }
}
The
song() and
search() methods render our website pages.
First, let’s create a master layout template for our pages. Inside
resources/views/layouts, create a file named
basic.blade.php with the following content:
File:
resources/views/layouts/basic.blade.php
<!DOCTYPE html>
<html lang="{{ config('app.locale') }}">
<head>
    <meta charset="utf-8">
    <meta http-equiv="X-UA-Compatible" content="IE=edge">
    <meta name="viewport" content="width=device-width, initial-scale=1">
    <title>{{ config('app.name', 'Coolyrics') }}</title>
    <!-- Styles -->
    <link rel="stylesheet" type="text/css" href="{{ asset('css/styles.css') }}">
</head>
<body>
    @yield('content')
    @section('scripts')
    <!-- Scripts -->
    @show
    @section('javascript')
    @show
</body>
</html>
Having individual Blade sections for the CSS and JavaScript files gives us better control over global and page-specific assets, as well as over internal JavaScript code, in child templates.
There’s also a CSS file loaded into the page to give the website a slightly more customized look. Please feel free to change the styles if you don’t like the look and feel of it.
For the time being, let’s create a file named
styles.css under
public/css with the following content:
File:
public/css/styles.css
html, body { height: 100%; }
h2 { color: #888; font-size: 30pt; }
#header { background: #34495e; color: #f1c40f; height: 80px; }
#header .input-group { margin-top: 20px; }
#header h1 { margin-left: 50px; font-size: 20pt; font-weight: bold; }
#header h1 sup { font-size: 8pt; font-weight: normal; display: inline-block; }
.dropdown-menu .ais-menu { padding: 5px 5px; }
.dropdown-menu .ais-menu--item { border-bottom: #ccc 1px dotted; padding: 5px 25px 5px 15px; }
.dropdown-menu .ais-menu--item:last-child { border: 0; }
.container { padding: 35px; }
.container-fluid { padding: 40px; }
#hits-container { }
.ais-hits__empty { height: 100%; font-size: 15pt; font-weight: bold; color: #888; }
.ais-hits__empty p { font-size: 12pt; font-weight: normal; color: #999; padding-top: 5px; }
.song { border-bottom: #ccc 1px dotted; padding: 20px; }
.song a.song-link { color: #000; font-size: 15pt; display: block; }
.song a.song-link em { font-style: normal; color: #000; font-weight: bold; }
.song span { color: #888; display: block; }
.song span.song-artist { margin: 3px 0; }
.song-youtube-link { margin-top: 10px; }
#pagination-container { text-align: center; font-size: 11pt; margin: 20px 0; }
#login-link { font-weight: bold; text-transform: uppercase; line-height: 80px; }
#login-link a { color: #fff; }
/* to control the size of the thumbnail adjust the .thumbnail width and height */
.band-thumbnail { width: 90px; height: 90px; background-position: center; background-size: cover; margin-right: 30px; border-radius: 3px; }
#lyrics-container { }
#lyrics-container p { font-size: 13pt; }
h1 { font-size: 30pt; }
#lyrics-container h1 span { color: #000; }
#lyrics-header { background: #f5f5f5; padding: 60px 0; }
#lyrics-album-details { font-size: 11pt; font-weight: normal; display: block; color: #888; margin-bottom: 20px; }
#lyrics-album-details .glyphicon { color: #ccc; margin-right: 5px; }
#lyrics-youtube-link { position: absolute; left: 0; bottom: 0; background: #cc181e; width: 100%; height: 30px; line-height: 30px; }
#lyrics-youtube-link a { color: #fff; }
#lyrics-thumbnail { position: relative; width: 370px; height: 220px; margin: 0 auto; }
#post-meta { margin: 15px 0; }
#post-meta span { color: #ccc; font-size: 10pt; font-weight: normal; margin: 5px 0; }
#lyrics-content { background-color: #f1f1f1; padding: 40px 0; }
.btn-xs .glyphicon { color: #888; font-size: 8pt; }
The Lyrics Page
Now, we create a file named
song.blade.php inside
resources/views with the following content:
File:
resources/views/song.blade.php
@extends('layouts.basic')

@section('content')
<div id="lyrics-container" class="text-center">
    <div id="lyrics-header">
        <h1>{{$song->title}}</h1>
        <span id="lyrics-album-details">
            {{$song->artist->name}} - {{$song->album}} Album
        </span>
        <div id="lyrics-thumbnail">
            <img src="{{Storage::url('artists/' . $song->artist->photo)}}">
            <div id="lyrics-youtube-link">
                <a href="{{$song->youtube_link}}"><i class="glyphicon glyphicon-play"></i> Watch on Youtube</a>
            </div>
        </div><!--/#lyrics-thumbnail-->
    </div><!--/#lyrics-header-->
    <div id="lyrics-content">
        <p><strong>"{{$song->title}}"</strong></p>
        <p>{!! nl2br($song->lyrics) !!}</p>
    </div><!--/#lyrics-content-->
</div>
@endsection
This page is rendered and controlled in the backend – by Laravel.
In this template, we extend the layout and echo out the lyrics’ attributes. This is how it should look:
The Search Interface
Now we get to the main part for which we’ve created everything so far, the search interface.
To create the search interface, we need to combine several widgets of Instantsearch.js and configure each widget to fit our needs.
Our search interface will consist of three widgets:
- A search field (SearchBox widget)
- A section to display the results (Hits widget)
- Pagination (Pagination widget)
All the widgets are wired together out of the box and require no extra development on our side. That is to say, whenever a user enters a value into the search box, all the widgets (search box, hits, pagination, etc.) will respond accordingly.
Create a new file named
search.blade.php inside the
resources/views directory with the following content:
File:
resources/views/search.blade.php
@extends('layouts.basic')

@section('content')
<div id="header">
    <div class="col-md-2"><h1>Coolyrics</h1></div>
    <div class="col-md-6">
        <div id="search-box"></div>
    </div>
</div>
<div class="container-fluid">
    <div id="hits-container"></div>
    <div id="pagination-container"></div>
</div>
@endsection

@section('scripts')
<!-- Scripts -->
@parent
<script language="javascript" src=""></script>
<script src=""></script>
@endsection

@section('javascript')
@endsection
In the
content section, we have a container with id
search-box where we’re going to place our
searchBox widget. We also have containers for
hits and
pagination widgets.
In the
scripts section, first, we load the master layout’s scripts using the
@parent directive. Then, we add the scripts specific to this template, namely
jQuery and
Instantsearch.js.
We also have a section (
javascript) for our template’s internal JavaScript code. This is where we instantiate InstantSearch.js and add our widgets.
Setting up InstantSearch.js
We can use Bower or NPM to install Instantsearch.js, or simply use the CDN like so:
File:
resources/views/search.blade.php
@section('css')
<link rel="stylesheet" type="text/css" href="" />
@endsection
And the Javascript file:
File:
resources/views/search.blade.php
@section('scripts')
@parent
<script language="javascript" src=""></script>
<script src=""></script>
@endsection
Initialization
Now, we need to create a search object with our given
Application Id and
API Key. In the
resources/views/search.blade.php file, add the following JavaScript code inside the
javascript section.
File:
resources/views/search.blade.php
...
@section('javascript')
<script>
var search = instantsearch({
    // You should put your keys here:
    appId: 'XXXX',
    apiKey: 'XXX',
    indexName: 'songs'
});
</script>
@endsection
...
In the above code, we need to provide three values to instantiate the search object:
appId,
apiKey, and
indexName.
Since we’re doing the initialization in a Javascript environment, the keys are visible to everyone. Knowing this, we must use our Search-Only API Key, which is limited to search operations only. You can find it under API Keys in your Algolia profile.
The
indexName attribute is the name of the index we want to search through, which is
songs in our case.
Adding the Search Box
The searchBox widget creates a smart text field to enter the search keyword.
To add the widget, we call the
addWidget method on the
search object (that we just created), passing an instance of
searchBox to it:
File:
resources/views/search.blade.php
...
search.addWidget(
    instantsearch.widgets.searchBox({
        container: '#search-box',
        placeholder: 'Search by artist, song, or lyrics',
        wrapInput: false,
        cssClasses: {
            input: 'form-control'
        }
    })
);
@endsection
...
As you can see in the above code, we pass a configuration object when instantiating a widget. This object is used to adjust the widget’s behavior.
The
container is the place where our
searchBox widget sits. The
placeholder is an ordinary HTML placeholder for the text field.
If
wrapInput is set to
true, the text field itself is wrapped by another
<div> element with a class named
ais-search-box.
Finally, the
cssClasses option specifies the additional classes to be added to the widget. In the above settings, we just add a class to the text field itself. We can add classes to the wrapping element as well.
To learn about the other available options for
searchBox, you should have a look at the documentation.
Adding the Hits Widget
The hits widget displays the search results based on a set of defined templates.
Add the following code right after the code for
searchBox:
File:
resources/views/search.blade.php
...
search.addWidget(
    instantsearch.widgets.hits({
        container: '#hits-container',
        templates: {
            item: $('#hits-temp').html(),
            empty: 'No lyrics were found! <p>We will be adding more lyrics to the database.</p>',
            header: '<h2>Lyrics</h2>'
        }
    })
);
@endsection
...
Again, we have an object of settings to adjust the widget’s behavior. The
templates option defines the templates used for rendering different parts of the hits (results) section. In the above code, we define a template for item, which is rendered for each item in the result set. We also define a template for any time the search has no results. There’s also another template for the hits header section.
These templates can be either a Mustache template or a string returned by an anonymous JavaScript function.
In our case, the
item template is a mustache template stored within a
<script> tag, which we fetch with jQuery:
// ... item: $('#hits-temp').html(), // ...
This is our Mustache template. You can place it anywhere in the
search.blade.php template:
File:
resources/views/search.blade.php
<script type="text/template" id="hits-temp">
    ...
</script>
In this template, we have access to the attributes of each row within the index, like
objectID,
artist,
title,
photo,
youtube_link, etc.
To prevent Blade from rendering mustache directives, we put a
@ before each curly brace set, letting Blade know that the expression should remain untouched and will be handled by InstantSearch.js later on.
Additionally, Algolia provides an object for each item called
_highlightResult. This object contains highlighted text for each attribute based on the search keyword. This is useful to show which part of the results are matched by the searched keyword.
<a class="song-link" href="song/@{{objectID}}">@{{{_highlightResult.title.value}}}</a>
Please note that we link each item to its details page.
Adding Pagination
The pagination widget generates the pagination links:
File:
resources/views/search.blade.php
...
search.addWidget(
    instantsearch.widgets.pagination({
        container: '#pagination-container'
    })
);
...
To see all the available options for this widget, you should have a look at the documentation.
Finally, we start the
search object:
File:
resources/views/search.blade.php
// ...
search.start();
// ...
The full code should look like this now:
File:
resources/views/search.blade.php
@extends('layouts.basic')

@section('content')
<div id="header">
    <div class="col-md-2"><h1>Coolyrics</h1></div>
    <div class="col-md-6">
        <div class="input-group" id="search-box">
            <div class="input-group-btn">
                <button type="button" class="btn btn-default dropdown-toggle" data-toggle="dropdown">
                    <span>Genre</span> <span class="caret"></span>
                </button>
                <ul id="genres" class="dropdown-menu">
                </ul>
            </div><!-- /btn-group -->
        </div>
    </div>
    <div id="login-link" class="col-md-4 text-right">
        <a href="{{route('login')}}">Login</a>
    </div>
</div>
<div class="container-fluid">
    <div id="hits-container"></div>
    <div id="pagination-container"></div>
</div>
@endsection

@section('scripts')
<!-- Scripts -->
@parent
<script language="javascript" src=""></script>
<script src=""></script>
@endsection

@section('javascript')
<script type="text/template" id="hits-temp">
    ...
</script>
<script>
var search = instantsearch({
    appId: 'XI8PV16IK6',
    apiKey: '63c96991d445cb2de4fff316ac909c1a',
    indexName: 'songs',
    urlSync: true
});

search.addWidget(
    instantsearch.widgets.searchBox({
        container: '#search-box',
        placeholder: 'Search by artist, song, or lyrics',
        wrapInput: false,
        cssClasses: {
            input: 'form-control'
        }
    })
);

search.addWidget(
    instantsearch.widgets.hits({
        container: '#hits-container',
        templates: {
            item: $('#hits-temp').html(),
            empty: 'No lyrics were found! <p>We will be adding more lyrics to the database.</p>',
            header: '<h2>Lyrics</h2>'
        }
    })
);

search.addWidget(
    instantsearch.widgets.menu({
        container: '#genres',
        attributeName: 'genres',
        limit: 10,
        templates: {
            header: '',
            footer: '',
            item: '<li><a href="@{{url}}">@{{name}}</a></li>'
        }
    })
);

search.addWidget(
    instantsearch.widgets.pagination({
        container: '#pagination-container'
    })
);

search.start();
</script>
@endsection
Now, if we reload the page, we should see an awesome search interface which works right out of the box. No extra coding, no hassle.
Feel free to play with the interface. Search for different terms and open the links to see how it looks. You can also make some typos on purpose while searching, to see the response. Algolia’s typo tolerance algorithms will automatically detect what your users are searching for and return the correct results.
Alright, I think that covers getting started with Scout and Algolia. You can find the full code on GitHub in case you want to try it for yourself.
To see a working demo of what we built in this tutorial click here.
Wrapping Up
We created our minimal lyrics website with the help of Laravel Scout and Algolia.
Please note that this project was just for educational purposes, implemented in the most basic form possible, and hence should not be used in a production environment. Please feel free to modify the code in any way you want.
To move even further, you may go to your Algolia profile, and change the settings and see the results. You can also add synonyms for each term, in case you don’t have that term in your database.
If you have any questions on the topic or if we’ve missed anything, let us know in the comments below!
Posts: 2773
Registered: 01-04
Last edited by entryway on Dec 2 2011 at 12:56
Posts: 597
Registered: 08-09
Posts: 13647
Registered: 07-06
code: // Maes's quick and dirty blockmap extension hack
if (xl>LL.bmapwidth) xl=0; // Broke width boundary
if (xh>LL.bmapwidth) xh=0; // Broke width boundary
if (yh>LL.bmapheight) yh=0; // Broke height boundary
if (yl>LL.bmapheight) yl=0; // Broke height boundary
Last edited by Maes on Dec 2 2011 at 13:06
code: // MAES: extensions to support 512x512 blockmaps.
// They represent the maximum negative number which represents
// a positive offset, otherwise they are left at -257, which
// never triggers a check.
// If a blockmap index is ever LE than either, then
// its actual value is to be interpreted as 0x01FF&x.
// Full 512x512 blockmaps get this value set to -1.
// A 511x511 blockmap would still have a valid negative number
// e.g. -1..510, so they would be set to -2
// Non-extreme maps remain unaffected.
public int blockmapxneg=-257;
public int blockmapyneg=-257;
code: // MAES: set blockmapxneg and blockmapyneg
// E.g. for a full 512x512 map, they should be both
// -1. For a 257*257, they should be both -255 etc.
if (bmapwidth>255)
blockmapxneg= bmapwidth-512;
if (bmapheight>255)
blockmapyneg= bmapheight-512;
code:
// If x is LE than those special values, interpret as positive.
// Otherwise, leave it as it is.
if (xl<=LL.blockmapxneg) xl=0x1FF&xl;
if (xh<=LL.blockmapxneg) xh=0x1FF&xh;
if (yl<=LL.blockmapyneg) yl=0x1FF&yl;
if (yh<=LL.blockmapyneg) yh=0x1FF&yh;
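To see why the 0x1FF mask recovers the right index: any block index x in 0..511 that wrapped around by 512 still carries the same low 9 bits. A quick standalone check (written in Python purely to illustrate the arithmetic; the names are made up):

```python
# A block index x in 0..511 that wrapped by -512 shows up as a negative
# number, but (x - 512) and x share the same low 9 bits, so masking with
# 0x1FF (binary 1_1111_1111) recovers the original index.
def recover_block_index(wrapped):
    return wrapped & 0x1FF

for x in range(512):
    wrapped = x - 512  # simulate the wraparound; negative for all x < 512
    assert recover_block_index(wrapped) == x

print("all 512 wrapped indices recovered")
```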
Posts: 8026
Registered: 01-03
Posts: 3564
Registered: 06-09
Although this fudging will resolve the blockmap issues, it still won't make maps work right if coordinates are outside (-16383, 16383).
[...]
The general rule should be that no 2 points in a map are more than 32767 map units apart.
Last edited by Phml on Dec 2 2011 at 15:11.
code:
// link into blockmap
if (!flags(thing.flags, MF_NOBLOCKMAP)) {
// inert things don't need to be in blockmap
blockx = (thing.x - bmaporgx) >> MAPBLOCKSHIFT;
blocky = (thing.y - bmaporgy) >> MAPBLOCKSHIFT;
if (blockx<=blockmapxneg) blockx=0x1FF&blockx; // Broke width boundary
if (blocky<=blockmapyneg) blocky=0x1FF&blocky; // Broke height boundary
Last edited by Maes on Dec 2 2011 at 16:03
Graf Zahl said:
Distance calculations may overflow and cause all sorts of weird glitches, among them problems with the automap, sounds not playing at correct volume, monsters doing strange things and whatever else needs to do distance checks.
Maes said:
Now, I gotta check if more weird things like sound distances etc. don't work correctly. Other things that use blockmap shifts are archvile iterators, teleporter checks, etc. and any function updating thing blocklinks. They are only a few well-defined ones actually, and the blockmap doesn't seem used outside of the P_ modules anyway.
tempun said:
How about, um, using an unsigned type?
Edit: thanks.
tempun said:
Have you considered the "telescratching" phenomenon seen in void-glide demos? It must be due to overflows in the distance check. You won't get away with modifying just blockmap functions.
code: public final int getSafeBlockX(int coord){
    coord >>= MAPBLOCKSHIFT;
    return (coord <= this.blockmapxneg) ? coord & 0x1FF : coord;
}

public final int getSafeBlockY(int coord){
    coord >>= MAPBLOCKSHIFT;
    return (coord <= this.blockmapyneg) ? coord & 0x1FF : coord;
}
code: blocky = (bbox[BOXTOP] - bmaporgy + MAXRADIUS) >> MAPBLOCKSHIFT;
code: blocky = getSafeBlockY(bbox[BOXTOP] - bmaporgy + MAXRADIUS);
Posts: 2869
Registered: 06-06
Maes said:
In any case, the way I proposed to do it it's an easily toggleable fix, I see no reason why it could not be part of e.g. the next prBoom+ version.
entryway said:
Toggleable fix for toggling between what?
Posts: 2294
Registered: 08-03
Posts: 248
Registered: 05-11
Posts: 6443
Registered: 08-00
Maes said:
For toggling between 0...255 and 0..511 blockmap size limit. In any case, it can be a simple "Enable extended blockmap support" switch hidden somewhere in the menus which is not enabled by default on any of the existing complevels. Anyone wishing to use it will be able to do so, and that's about it.
code: public final int getSafeBlockX(int coord){
    coord >>= MAPBLOCKSHIFT;
    if (blockmap_extension){
        return (coord <= this.blockmapxneg) ? coord & 0x1FF : coord;
    } else
        return coord;
}
Last edited by Maes on Dec 2 2011 at 19:47
Maes said:
coord & 0x1FF
Maes said:
@DaniJ: even using the extended blockmap that all Boom and MBF derivatives build internally is not enough to remove those limits we're talking about. Similarly, I'm not familiar with how Doomsday works and whether it would be able to run entryway's test272.wad without a problem. Can it? However standard Doom and Boom surely can't, even with a fixed blockmap lump, because the problem is actually in how runtime indexes to the blockmap are computed, and it's all over the damn code. Has Doomsday modified this aspect? E.g. you are not affected by the MAPBLOCKSHIFT bug at all?
That's another interesting part: by adopting the fixes I proposed, you are simply switching between blockmap index interpretations, without changing the blockmap data itself. The loader code stays as it is, but anything that uses the >> MAPBLOCKSHIFT construct gets changed to use the getSafeBlockX/getSafeBlockY functions instead. To disable the "fixing", simply put a global blockmap_extension boolean or something so that you can change index interpretation on demand.
Last edited by DaniJ on Dec 2 2011 at 21:57
When it is required to update a list of tuple using another list, the 'defaultdict' can be used.
Defaultdict is a container similar to a dictionary, found in the 'collections' module. It is a subclass of the 'dict' class and returns a dictionary-like object. A 'defaultdict' never raises a KeyError; instead, it provides a default value for any key that doesn't exist.
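To make that behavior concrete, here is a small sketch (the key names are arbitrary):

```python
from collections import defaultdict

plain = {}
dd = defaultdict(list)

# A plain dict raises KeyError for a missing key...
try:
    plain['missing'].append(1)
except KeyError:
    print("plain dict: KeyError")

# ...while defaultdict creates the default value (an empty list) on demand.
dd['missing'].append(1)
print(dd['missing'])  # → [1]
```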
Below is a demonstration for the same −
from collections import defaultdict

def merge_vals(list_1, list_2):
    my_dict = defaultdict(list)
    for i, j in list_1 + list_2:
        my_dict[i].append(j)
    return sorted([(i, max(j)) for i, j in my_dict.items()], key = lambda x:x[0])

my_list_1 = [('v', 1), ('q', 2), ('o', 0)]
my_list_2 = [('q', 5), ('o', 3)]

print("The first list of tuple is : ")
print(my_list_1)

print("The second list of tuple is : ")
print(my_list_2)

print("After merging, it becomes : ")
print(merge_vals(my_list_1, my_list_2))
The first list of tuple is :
[('v', 1), ('q', 2), ('o', 0)]
The second list of tuple is :
[('q', 5), ('o', 3)]
After merging, it becomes :
[('o', 3), ('q', 5), ('v', 1)]
Introduction: Aurora LED Lamp
An aurora is a natural light display in the sky, mostly seen in the Arctic and Antarctic regions. Auroras are caused by the solar wind disturbing the magnetosphere.
This Instructable is about creating your own aurora LED lamp. The effect is generated by a combination of colored LEDs and an unequally colored rotating disc. Both the LEDs and the motor are driven by a single microcontroller.
Step 1: Materials
This aurora light is a mixture of a digital and an 'analog' light effect. It requires some electronic parts and colored glass or plastic.
Electronic parts
- Arduino (compatible) board
- WS2812 LED ring (16 LEDs)
- 28YBJ-48 Stepper motor
- USB charger (IKEA Koppla)
- USB cable
- Breadboard jumper wires
- USB breakout board (optionally)
- Capacitor (100 - 1,000 uF)
- Resistor (470 - 1,000 Ohm)
Rotating Disc:
- Transparent colored glass and/or plastic
- Transparent plate
- Transparent glue
- Old lens / Google Cardboard (optionally)
- Plastic lid (jar, about 3,5 inch)
Housing:
- 4" PVC tube / PVC 45 degree elbow
- Paint
- Wood for internal parts
Other parts:
- Some small LEGO parts for the shaft.
Some caution is required when breaking glass or plastic. Place it in a cloth or plastic bag to prevent splinters from jumping away. And wear safety goggles.
Step 2: Digital Light Effect
The first light effect uses a WS2812 LED ring with 16 LEDs. These are controlled by an Arduino.
The initial code for this part has been made on 'Circuits.io'. And you can simulate an Arduino, with this circuit, directly from their webpage (takes some time to open).
The main loop provides 3 (Red, Green and Blue) values for the LEDs. The values differ each step by using a different cosine function for each color. I've chosen a cosine function, instead of a random function, to allow for smooth color changes.
#include <Adafruit_NeoPixel.h>

#define NUM_PIXELS 16

Adafruit_NeoPixel pixels(NUM_PIXELS, 8, NEO_GRB | NEO_KHZ800);

int lednum = 0;
float R, G, B;
float Gplus = 60;
float Bplus = 90;

void setup() {
  pixels.begin();
}

void loop() {
  for (int j = 0; j < 3599; j++) {
    // One cosine per color, with phase offsets for green and blue
    R = 128 + (cos(j * 0.0174532925) * 127);
    G = 128 + (cos((j + Gplus) * 0.0174532925) * 127);
    B = 128 + (cos((j + Bplus) * 0.0174532925) * 127);
    for (lednum = 0; lednum < NUM_PIXELS; lednum++) {
      pixels.setPixelColor(lednum, pixels.Color((int) R, (int) G, (int) B));
    }
    pixels.show();
    delay (100);
  }
}
The value of 0.0174532925 translates the degrees to radians.
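This constant is simply pi/180; a quick check (in Python, just for illustration):

```python
import math

DEG_TO_RAD = 0.0174532925  # the constant used in the sketch

# One degree in radians is pi/180
print(round(math.pi / 180, 10))   # → 0.0174532925

# Sanity check: cos(90°) should be (approximately) zero
print(abs(math.cos(90 * DEG_TO_RAD)) < 1e-6)  # → True
```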
Step 3: 'Analog' Light Effect
The second light effect is caused by a rotating disc. It is made from a transparent disc combined with transparent colored materials.
Start with a plastic lid. Carefully remove the inner side. And replace this with a transparent plastic disc. I've used a fretsaw to create a plastic circle out of a plate of polycarbonate (4mm).
Place different pieces of glass on the plastic disc. Irregular pieces of glass give different refractive angles, and increase the aurora effect. I've also used some parts of a lens. The centre part is a LEGO brick (Round 2 x 2 with Axle Hole).
Glue all parts together with transparent glue. You can use plenty; the glue becomes a part of the light effect. Make sure the axle is at an angle of 90 degrees.
My first prototype used the upper part of a CD spindle.
The sides of the plastic lid make it impossible to touch the sharp parts of the glass after assembling.
Step 4: Variable Speed Motor
There are 3 types of motors which can be used: DC geared motors, servo motors and stepper motors.
I've chosen a 28YBJ-48 stepper motor with an ULN2003 Driver Board. They are easy to control. And have 2 mounting holes. Which have been placed on a different location than the connectors of the LED ring.
This project uses the standard Arduino stepper library. This library contains 3 functions:
- Stepper(steps, pin1, pin2, pin3, pin4)
- setSpeed(rpm)
- step(steps)
The stepper function returns a new instance of the Stepper motor class. This requires the number of steps in one revolution of your motor and the connected pins on the Arduino. The 28YBJ-48 stepper moves 11.25 degrees per step in full step mode, or 360/11.25 = 32 steps per single revolution of the internal motor shaft. There is an internal gear with a reduction ratio of 1/64, resulting in 32 x 64 = 2,048 steps per revolution (0.1758 degrees per step).
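The numbers above can be verified directly (Python used here purely as a calculator):

```python
degrees_per_step = 11.25          # full-step angle of the 28YBJ-48 motor
steps_per_motor_rev = 360 / degrees_per_step
gear_ratio = 64                   # internal reduction gear (1/64)

steps_per_output_rev = steps_per_motor_rev * gear_ratio
print(steps_per_motor_rev)                   # → 32.0
print(steps_per_output_rev)                  # → 2048.0
print(round(360 / steps_per_output_rev, 4))  # → 0.1758
```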
The setSpeed function is used to adjust the motor speed. The input value is defined as rpm (rotations per minute). This value must be adjusted with the gear ratio, because the stepper library doesn't know the motor type. The value of 1 x 64 gives a speed of one revolution per minute.
The following example code controls the stepper motor. The motor speed is adjusted every 100 steps (about 20 speed changes per rotation). The speed is slowly adjusted between 0.5 and 1.5 rpm (the values 32 and 96).
#include <Stepper.h>

#define STEPS_MOTOR 32
#define STEPS_OUTPUT 32 * 64

Stepper BY48stepper(STEPS_MOTOR, 8, 10, 9, 11);

int Steps = 100;
int Speed;

void setup() {
}

void loop() {
  for (int j = 0; j < 359; j++) {
    Speed = 64 + (cos (j*0.0174532925) * 32);
    BY48stepper.setSpeed (Speed);
    BY48stepper.step (Steps);
  }
}
Connect the stepper motor to the ULN2003 board. The Arduino pins 8-11 are used to control the stepper. Don't forget to connect the power (5 Volt) and Ground wires. Pin 12 is used for the LED ring.
Step 5: Housing
The housing is made of a 4 inch diameter PVC drain pipe. The first idea was to use a straight tube. In the end this has become a 45 degree linkage.
In this a circular wooden triplex plank is placed. Drill 3 holes for the motor, and 3 holes for the LED ring.
It is posible to paint the PVC:
- Work in a well-ventilated or open space.
- Sand the outside and inside. Use fine sandpaper (220+ grit).
- Apply a very small amount of acetone to a cloth and wipe it over the PVC. This allows the paint to hold better.
- Place the wooden disc before painting the housing.
- Apply multiple coats of spray paint. Let the paint dry for 10 to 20 minutes between each layer.
- Allow the paint to dry. This takes a few hours.
Step 6: Electronics
The electronics schema is straightforward. The USB breakout board provides power to the Arduino, ULN2003 Stepper Driver and LED ring, using an external power supply. The stepper motor is directly connected to the driver board. And the driver board is connected to pin 8-11 of the Arduino.
The led ring is connected to pin 12 of the Arduino. Use a resistor between the Arduino and the LED ring data input, to protect the data pin. Place a capacitor between the power and ground (near the led ring). The current draw of the LEDS can vary, and the capacitor acts as a temporary power source.
The Arduino can be powered by the USB breakout board OR the Arduino USB port. Use the Arduino port only when updating the software.
Step 7: Arduino Code
The final code combines the WS2812 LEDs and the Stepper Motor examples. Both examples use a for-loop in the main part. And I've combined these pieces of code into a single program.
Some parts of the code can be adjusted to alter the light effect:
- Motor speed: Current speed is between 0.5 and 1.5 rpm.
- Minimum LED output: Increase the 0 value in the "max (n, 0)" inside the setColor function.
- Maximum LED output: Decrease the 255 value in the "min (n, 255)" inside the setColor function.
- Faster color change: Decrease the Steps value.
#include <Stepper.h>
#include <Adafruit_NeoPixel.h>

#define NUM_PIXELS 16

Adafruit_NeoPixel pixels(NUM_PIXELS, 12, NEO_GRB | NEO_KHZ800);

#define STEPS_MOTOR 32
#define STEPS_OUTPUT 32 * 64

Stepper BY48stepper(STEPS_MOTOR, 8, 10, 9, 11);

int lednum = 0;
float R, G, B;
float Gplus = 60;
float Bplus = 90;

int Steps = -50;
int Speed;

// Clamp a color value to the 0..255 range
int setColor(float n) {
  return min(max((int) n, 0), 255);
}

void setup() {
  pixels.begin();
}

void loop() {
  for (int j = 0; j < 359; j++) {
    R = setColor(cos(j * 0.0174532925) * 255);
    G = setColor(cos((j + Gplus) * 0.0174532925) * 255);
    B = setColor(cos((j + Bplus) * 0.0174532925) * 255);
    for (lednum = 0; lednum < NUM_PIXELS; lednum++) {
      pixels.setPixelColor(lednum, pixels.Color((int) R, (int) G, (int) B));
    }
    pixels.show();
    Speed = 64 + (cos (j*0.0174532925) * 32);
    BY48stepper.setSpeed(Speed);
    BY48stepper.step(Steps);
  }
}
Upload and test the code before assembling the Aurora LED lamp.
Step 8: Assembly
After painting the PVC housing, it's time to assemble the Aurora LED Lamp. Start with the stepper motor and the WS2812 LED ring. Attach the motor to the wooden plate inside the housing. Use 3 breadboard wires to connect the LED ring. I've drilled 3 small holes and fastened the breadboard connectors to the wooden plate.
Connect all electronics as mentioned in a previous step. You can solder the wires, use a screw terminal or use an Arduino prototype board (only $1 on aliexpress). It is possible to power all components from the Arduino. But it's better to bypass the Arduino for powering the LEDs and the stepper motor. This requires an altered USB cable or a USB breakout board.
Place all electronic components inside the housing. And attach them to the wooden plate.
Connect the colored disc to the motor shaft. I've used some LEGO parts (bricklink.com):
- Technic, Axle 4 (3705)
- Technic, Liftarm 1 x 3 Thick (32523)
- Technic, Wedge Belt Wheel (4185)
- Technic, Pin with Friction Ridges (2780)
But you can also use a shaft coupler.
Although the LEDs give quite a lot of light, it's hard to make a good quality video. But the Aurora effect is best shown in a dark environment on a light wall. Just like the real auroras, which are most clearly seen at night.
GosseAdema
Aurora Borealis? At this time of year, at this time of day, in this part of the country, localized entirely within my kitchen?! Fantastic!!
I'm not convinced, I'd like to see a video, would you post one please ?
There is a video in the last step. It's a little too dark, the LEDs give enough light. but it's hard to make a video under low light conditions. This video is taken with more light
Dear, please can you tell me what is the music in this clip? Thank you!
The music is from Youtube: Ambient, Reflections
Can you please be more specific?
Thank you!
fantastic!! :^D
Voted for this cool thing
Really nice lighting effects! Bravo!
Quite beautiful, thank you! What is the cost for all the components? | http://www.instructables.com/id/Aurora-LED-Lamp/ | CC-MAIN-2017-39 | refinedweb | 1,662 | 75.91 |
This add-on is operated by 84codes AB
RabbitMQ as a Service
CloudAMQP
Last updated 18 August 2017
Table of Contents
CloudAMQP is an add-on providing RabbitMQ as a service. RabbitMQ is a high performance message broker, built in Erlang, which implements the AMQP protocol.
Messaging is the easiest and most efficient way to decouple, distribute and scale applications.
All AMQP client libraries work with CloudAMQP and there’s AMQP client libraries for almost every platform out there, including Ruby, Node.js, Java, Python, Clojure and Erlang.
Installing the add-on
CloudAMQP can be installed to a Heroku application via the CLI:
Number of nodes can be specified for instances on plan Roaring Rabbit and larger. The default setting for those instancs are two nodes - it will give you two mirrored nodes. If you choose one node it will give you twice the performance and three nodes will half the performace but give you pause minitory on partitions. Number of nodes can be specified when you create your instance with the parameter --nodes.
$ heroku addons:create cloudamqp -----> Adding cloudamqp to sharp-mountain-4005... done, v18 (free)
Example for Power Panda with one node:
$ heroku addons:create cloudamqp:panda --nodes=1
Once CloudAMQP has been added a
CLOUDAMQP_URL setting will be available in the app configuration and will contain the canonical URL used to access the RabbitMQ cluster. This can be confirmed using the
heroku config command.
$ heroku config | grep CLOUDAMQP_URL CLOUDAMQP_URL => amqp://user:pass@ec2.clustername.cloudamqp.com/vhost
After installing CloudAMQP the application should be configured to fully integrate with the add-on.
Local workstation setup
RabbitMQ is easy to install locally for development.
Use with Ruby
Ruby developers has a number of options for AMQP client libraries:
- Bunny the AMQP client, synchronous and very well maintained.
- AMQP an evented AMQP client, good in combination with an evented web server like Thin.
- March Hare AMQP client for JRuby, uses the Java RabbitMQ library underneath
The following example will use the synchronous client Bunny and publish a message and then consume it.
require "bunny" # don't forget to put gem "bunny" in your Gemfile b = Bunny.new ENV['CLOUDAMQP_URL'] b.start # start a communication session with the amqp server q = b.queue 'test1' # declare a queue # publish a message to the queue q.publish 'Hello, everybody!' delivery_properties, headers, payload = q.pop # retrieve one message from the queue puts "This is the message: " + payload + "\n\n" b.stop # close the connection
Here’s another example which uses the evented AMQP library, in combination with Sinatra and Server Sent Events. It shows how to build a simple real time application, using CloudAMQP as the backbone.
Find the Ruby Sinatra SSE sample on GitHub.
Source code or
The application can also be seen live at amqp-sse.herokuapp.com.
Further reading
- RubyAMQP.info is a great resource for working with AMQP in Ruby.
- A high level intro to messaging for rubyists
Use with Node.js
For Node.js we recommend amqplib. We highly advise against the former popular library node-amqp, it’s not very well maintained has some serious bugs.
Add
amqplib as a dependency in your
package.json file.
"dependencies": { "amqplib": "*" }
The following code snippet show how to connect and both publish messages and how to subscribe to a queue:
var q = 'tasks'; var url = process.env.CLOUDAMQP_URL || "amqp://localhost"; var open = require('amqplib').connect(url); // Consumer open.then(function(conn) { var ok = conn.createChannel(); ok = ok.then(function(ch) { ch.assertQueue(q); ch.consume(q, function(msg) { if (msg !== null) { console.log(msg.content.toString()); ch.ack(msg); } }); }); return ok; }).then(null, console.warn); // Publisher open.then(function(conn) { var ok = conn.createChannel(); ok = ok.then(function(ch) { ch.assertQueue(q); ch.sendToQueue(q, new Buffer('something to do')); }); return ok; }).then(null, console.warn);
Use with Clojure
Langohr is a full featured and well maintained Clojure wrapper for Java’s AMQP library.
To use it put
[com.novemberain/langohr "2.9.0"] in your
project.clj file and run
lein deps.
The following code snippet show how to both publish and to consume a message via CloudAMQP:
(ns clojure-amqp-example.core (:require [langohr.core :as rmq] [langohr.channel :as lch] [langohr.queue :as lq] [langohr.consumers :as lc] [langohr.basic :as lb])) (defn -main [& args] (let [url (System/getenv "CLOUDAMQP_URL" "amqp://localhost") conn (rmq/connect {:uri url}) ch (lch/open conn) qname "langohr.examples.hello-world"] (println (format "[main] Connected. Channel id: %d" (.getChannelNumber ch))) (lq/declare ch qname :exclusive false :auto-delete true) (start-consumer ch qname) (println "[main] Publishing...") (lb/publish ch default-exchange-name qname "Hello!" :content-type "text/plain") (Thread/sleep 2000) (println "[main] Disconnecting...") (rmq/close ch) (rmq/close conn)))
The full Clojure sample app is available on GitHub.
Source code or
Use with Python
The recommended library for Python to access RabbitMQ servers is Pika.
Put
pika==0.9.14 in your
requirement.txt file.
The following code connects to CloudAMQP, declares a queue, publishes a message to it, sets up a subscription and prints messages coming to the queue.
import pika, os, urlparse # Parse CLODUAMQP_URL (fallback to localhost) url_str = os.environ.get('CLOUDAMQP_URL', 'amqp://guest:guest@localhost//') url = urlparse.urlparse(url_str) params = pika.ConnectionParameters(host=url.hostname, virtual_host=url.path[1:], credentials=pika.PlainCredentials(url.username, url.password)) connection = pika.BlockingConnection(params) # Connect to CloudAMQP channel = connection.channel() # start a channel channel.queue_declare(queue='hello') # Declare a queue # send a message channel.basic_publish(exchange='', routing_key='hello', body='Hello CloudAMQP!') print " [x] Sent 'Hello World!'" # create a function which is called on incoming messages def callback(ch, method, properties, body): print " [x] Received %r" % (body) # set up subscription on the queue channel.basic_consume(callback, queue='hello', no_ack=True) channel.start_consuming() # start consuming (blocks) connection.close()
The full code can be seen at github.com/cloudamqp/python-amqp-example.
Celery
Celery is a great task queue library for Python, and the AMQP backend works perfectly with CloudAMQP. But remember to tweak the BROKER_POOL_LIMIT if you’re using the free plan. Set it to 1 and you should be good. If you have connection problems, try reduce the concurrency of both your web workers and in the celery worker.
Use with Java
RabbitMQ has developed an excellent Java AMQP library.
Begin to add the AMQP library as an dependency in your
pom.xml file:
<dependency> <groupId>com.rabbitmq</groupId> <artifactId>amqp-client</artifactId> <version>3.3.4</version> </dependency>
Then may a simple publish and subscribe look like this:
public static void main(String[] args) throws Exception { String uri = System.getenv("CLOUDAMQP_URL"); if (uri == null) uri = "amqp://guest:guest@localhost"; ConnectionFactory factory = new ConnectionFactory(); factory.setUri(uri); factory.setRequestedHeartbeat(30); factory.setConnectionTimeout(30); Connection connection = factory.newConnection(); Channel channel = connection.createChannel(); channel.queueDeclare("hello", false, false, false, null); String message = "Hello CloudAMQP!"; channel.basicPublish("", "hello", null, message.getBytes()); System.out.println(" [x] Sent '" + message + "'"); QueueingConsumer consumer = new QueueingConsumer(channel); channel.basicConsume("hello", true, consumer); while (true) { QueueingConsumer.Delivery delivery = consumer.nextDelivery(); String message = new String(delivery.getBody()); System.out.println(" [x] Received '" + message + "'"); } }
The full Java sample app is available on GitHub.
Source code or
Java AMQP resources
Use with PHP
For PHP the recommend library is php-amqplib.
Your
composer.json should define both
php-amqplib and
ext-bcmath:
{ "require": { "videlalvaro/php-amqplib": "2.2.*", "ext-bcmath": "*" } }
Here’s an example which publishes to a queue and then retrieves it.
<?php require('vendor/autoload.php'); define('AMQP_DEBUG', true); use PhpAmqpLib\Connection\AMQPConnection; use PhpAmqpLib\Message\AMQPMessage; $url = parse_url(getenv('CLOUDAMQP_URL')); $conn = new AMQPConnection($url['host'], 5672, $url['user'], $url['pass'], substr($url['path'], 1)); $ch = $conn->channel(); $exchange = 'amq.direct'; $queue = 'basic_get_queue'; $ch->queue_declare($queue, false, true, false, false); $ch->exchange_declare($exchange, 'direct', true, true, false); $ch->queue_bind($queue, $exchange); $msg_body = 'the body'; $msg = new AMQPMessage($msg_body, array('content_type' => 'text/plain', 'delivery_mode' => 2)); $ch->basic_publish($msg, $exchange); $retrived_msg = $ch->basic_get($queue); var_dump($retrived_msg->body); $ch->basic_ack($retrived_msg->delivery_info['delivery_tag']); $ch->close(); $conn->close();
An example PHP app can be found on GitHub.
Source code or
For many more involved examples see php-amqplib’s demo directory
Dashboard Dashboard and selecting the application in question. Select CloudAMQP from the Add-ons menu.
Separate applications
Virtual Hosts (vhosts) makes it possible to separate applications on one broker. You can isolate users, exchanges, queues etc to one specific vhost. You can separate environments, e.g. production to one vhost and staging to another vhost within the same broker, instead of setting up multiple brokers.
vhosts can be set up on all dedicated instance.
Migrating between plans
Migrating between shared plans
Plan migrations are easy and instant when migrating between shared plans (Little Lemur and Tough Tiger). Use the
heroku addons:upgrade command to migrate to a new plan.
$ heroku addons:upgrade cloudamqp:rabbit -----> Upgrading cloudamqp:rabbit to sharp-mountain-4005... done, v18 ($600/mo) Your plan has been updated to: cloudamqp:rabbit
Migrating between a shared plan and a dedicated plan
There is no automatic upgrade between a shared plan and a dedicated. Instead we recommend you to create the new plan and point your publishers to the new plan. Let your consumers empty the queues on the old plan and then point them to the new plan and finally delete the old plan.
Migrating between dedicated plans
You can automatically upgrade between dedicated plans. The upgrade process keeps your cluster live under the upgrade process so there is no downtime in the upgrade process.
$ heroku addons:upgrade cloudamqp:rabbit -----> Upgrading cloudamqp:rabbit to sharp-mountain-4005... done, v18 ($600/mo) Your plan has been updated to: cloudamqp:rabbit
Removing the add-on
CloudAMQP can be removed via the CLI.
This will destroy all associated data and cannot be undone!
$ heroku addons:destroy cloudamqp -----> Removing cloudamqp from sharp-mountain-4005... done, v20 (free)
Error codes
We log errors to your heroku log, below we explain the different codes.
410 - Transfer limit reached
You’ve reached your monthly transfer quota. Upgrade to a larger plan or wait until the next calendar month.
210 - Transfer in compliance
You’ve either upgraded your account (got a higher transfer limit) or it’s a new calendar month and your quota has been reseted.
420 - Connection limit reached
You’re using all your connection slots so new connections will be blocked. Either lower your connection count or upgrade to larger plan to accommodate more connections.
220 - Connections in compliance
You can now open more connections again because you’re not using all connection slots.
431 - Max channels per connection
One of your connections was closed because you’d open to many channels on it. This is often due to a bug, so check your code and make sure that you close unused channels.
432 - Max consumers per connection
One of your connections was closed because it had opened more than 12000 consumers. This is often due to a bug, so make sure that you close unused consumers.
Support
All CloudAMQP support and runtime issues should be logged with Heroku Support at support.heroku.com. Any non-support related issues or product feedback is welcome at support@cloudamqp.com or via Twitter @CloudAMQP. For our plans Power Panda and larger, we provide 24/7 critical support with a 30-minutes maximum initial response time and call-in phone numbers. | https://devcenter.heroku.com/articles/cloudamqp | CC-MAIN-2018-05 | refinedweb | 1,888 | 51.24 |
For char, string or short to int, you just need to assign the value. char ch = 19; int in = ch; Same to int64. long long lo = ch; All values will be 19
//Convert char to int in c++ to get correct values of numbers char x='1'; //ascii value=49 int xx=x-'0' //49-48==1 char a='C'; //67 char aa=a-'A' //67-65=2 means third char of Alphabet
There are various ways in which one can convert char to its int value.
- Using ASCII values: This method uses TypeCasting to get the ASCII value of the given character. From this ASCII value, the respective integer is calculated by subtracting it from the ASCII value of 0.
- Using String.
- Using Character.
String to int C++
C++ string to int Conversion We can convert string to int in multiple ways.
C++11 there are some nice new convert functions from
std::string to a number type.
So instead of
atoi( str.c_str() )
you can use
std::stoi( str )
where
str is your number as
std::string.
There are version for all flavours of numbers:
long stol(string),
float stof(string),
double stod(string)
Char to string C++
std::string has a constructor for this: const char *s = "Hello, World!"; std::string str(s); or All of std::string s(1, c); std::cout << s << std::endl; and std::cout << std::string(1, c) << std::endl; and std::string s; s.push_back(c); std::cout << s << std::endl;
How to convert characters of a string to opposite case. Let’s understand the problem. Given a string,we need to convert the characters of the string into opposite case i.e. if a character is lower case than convert it into upper case and vice-versa.
Examples given below will simplify. If given a string like this GeeksforGeeks, we will convert each character to opposite case, ouput will be the same string but each character will have opposite case as the case in input string.
In this example,F & E are in upper case in input while they are in lower case in otput string, whereas other characters were lower case in input and are in upper case in output, similarly in second example, all characters are in lower case and are converted to upper case in output.
We will use ASCII values to convert each character in a string. Capital letters have values from 65 to 90, while LowerCase letters have values from 97 to 122. Now to convert case of a character we need to manipulate their ASCII values.
In ASCII values small a – capital A is 32,i.e. Capital A = small a – 32 and Small a = Capital A + 32. Hence we can convert LowerCase to uppercase by subtracting 32 from ASCII values, similarly by adding 32 to ASCII value, UpperCase character can be converted to LowerCase.
We can also use inbuilt functions in C++, for conversions. Understanding the C++ Code. convertOpposite takes a string as input and has returntype void and converts each character of input string to it’s opposite case. First we store length of input string in integer ln by using str.length() function, which returns length of string.
Next we will iterate through each character of this string using this for loop which goes from 0 to l-1. If str[i] i.e. the present character is in lower case as checked by this if statement str[i] >= small a and <= small z.
Then we subtract 32 from it’s ASCII value for conversion to upper case. else if character is in UpperCase i.e. ASCII value falls between the ASCII value of capital A and capital Z, then we convert it to a lower case by adding 32 to its ASCII value. Now each character of str is in its opposite case.
String char to int C++.
Using the
stringstream class
#include <iostream> #include <sstream> using namespace std; int main() { string str = "100"; // a variable of string data type int num; // a variable of int data type stringstream ss; ss << str; ss >> num; cout << "The string value is " << str << endl; cout << "The integer representation of the string is " << num << endl; }
Using
stoi()
#include <iostream> #include <string> using namespace std; int main() { // 3 string examples to be used for conversion string str_example_1 = "100"; string str_example_2 = "2.256"; string str_example_3 = "200 Educative"; // using stoi() on various kinds of inputs int int_1 = stoi(str_example_1); int int_2 = stoi(str_example_2); int int_3 = stoi(str_example_3); cout << "int_1 : " << int_1 << endl; cout << "int_2 : " << int_2 << endl; cout << "int_3 : " << int_3 << endl; }
Using
atoi()
#include <iostream> #include <string> using namespace std; int main() { // 3 string examples to be used for conversion const char* str_example_1 = "100"; const char* str_example_2 = "2.256"; const char* str_example_3 = "200 Educative"; // using stoi() on various kinds of inputs int int_1 = atoi(str_example_1); int int_2 = atoi(str_example_2); int int_3 = atoi(str_example_3); cout << "int_1 : " << int_1 << endl; cout << "int_2 : " << int_2 << endl; cout << "int_3 : " << int_3 << endl; } | https://epratap.com/char-to-int-cpp-string-char-to-int-cpp-char-to-string-cpp/ | CC-MAIN-2021-10 | refinedweb | 826 | 68.91 |
0
I'm learning C++ from the book C++ Primer Plus. At the end of the chapter on loops, they want us to design a structure, allocate adequate memory for an array of such structures using new, then feed input data to it. I got this code:
#include <iostream> using namespace std; struct car{ char make[20]; int yearMade; }; int main() { int count = 0; cout<<"How many cars do you wish to catalog today? "; cin>>count; car * autocar = new car[count]; for(int i=1; i<=count; i++){ cout<<"Car #"<<count<<": "<<endl; cout<<"Please enter the make: "; cin.get(autocar[i-1]->make).get(); cout<<"Please enter the year manufactured: "; cin>>autocar[i-1]->yearMade; cout<<endl; } cout<<endl<<"Here is your collection: "<<endl; for(int i = 1; i<=count; i++){ cout<<autocar[i-1]->yearMade<<" "<<autocar[i-1]->make<<endl; } delete [] autocar; return 0; }
Now, the error comes on line 21, 24 and 29. The compiler is complaining that the "base operand of '->' has non-pointer type 'car'".
Beats me. I made sure that the type is a pointer, as you can see in the statement in line 16. Any suggestions? | https://www.daniweb.com/programming/software-development/threads/306841/a-problem-with-the-membership-operator | CC-MAIN-2018-39 | refinedweb | 192 | 57.2 |
[Offtopic tag added for reader convenience]> > - it doesn't impact drivers unless the developer chooses to use devfs> If the _user_ uses devfs, the _developer_ has to provide it. A halfway> system is worse than each alternative on its own.yes, just like SMP. and the impact is very trivial.> > - it doesn't impact system adminstration unless you enable the feature> But does when enabled. One more variable to consider on each support call.four options:a) use devfs and don't need to mangle /devb) use devfs and don't agree w/ naming/perms so choose to manually modify /dev;use rc.whatever or devfsdc) don't use devfs and run MAKEDEV and don't need to mangle /devd) don't use devfs and run MAKEDEV and don't agree w/ naming/perms and modrc.whatever or /devab and cd are very similar save that ab has one less step. consider that you_know_ in each support call that the naming and major/minor are current.> I've never said that everybody is like me. I'm careful to talk about my> experience, and what I have seen. As far, nobody at all has stepped forward> telling the grueling story of his machine with hundreds of devices that> change minute by minute, so I'd have to assume that this doesn't exist, or> in any case is so marginal that any impact at all on the kernel used by> millions that don't have any use for the feature is out.in an earlier email you point out that because you have never met anyone thatdoes a certain thing that the point is moot. i am a somebody that does thatcertain thing; hot swap all day long. two nifty and very handy things associatedwith swapping pcmcia cards and usb devices.> > - gives sensible names to devices (c1t3d0s2 instead of sde)> Change MAKEDEV, be my guest.c1t3d0s2 will always be c1t3d0s2 whereas sde will change depending on how manyother drives come before it. 
and UUID is not a workable solution for non-RWmedia.> > - eliminates scsi ordering problems because of sensible names> /dev/c1t3d0s2 becomes /dev/c2t3d0s2 when you move the controller, and> adding a new disk gets you to /dev/c2t4d0s2. How does this solve the "sdc is> now sdb" problem?moving the controller is one issue that remains, however most people appear tomove drives, in my case, removable media and non-powered media on scsi chainschanges the continuity of my drives.> Check mount(8), options -L, -U for a solution to this.cdroms have different labels and no uuid. devfs cleanly keeps it ordered.> > - completely eliminates major/minor number problems> Can't do that, because it is deeply ingrained into the kernel's way of> handling devices.and devfs is a direction away from them.> > - moves naming complexity INTO USER SPACE (good for usb)> MAKEDEV is user space.manual v.s. automatic> > - user space scripts ran on insertion (just like cardmgr/PCMCIA)> Can be done (sort of) with modules and pre- post- scripts. Not nice.similar to kmod, no? devfsd again can be like quota utils and update an incoremap.> > - UNIX-like /dev without UNIX-like rw fs (good for embedding on romfs)> ROMFS is designed to be _small_, not full-Unix. I'd guess adding device> nodes to ROMFS won't make it much larger. Surely much less than devfs and> its bloat in all devices by itself.devfs is not bloated.> > - provides a proper namespace (no need for recent rash of /proc/*/dev)> If you can't provide a proper namespace in /dev, then doing it as a fake> filesystem is out anyway.let's wonder for a bit. i am a developer of a widget. i change my major/minorfor my widgets. i don't have to go update MAKEDEV and make a big notice toeveryone. 
/dev will always have the naming constructs that i use in my code.> > Notice that all of these problems can be solved in other ways (for example> > you can solve the sde -> c1t3d0s2 problems using a startup script, similar> > to how Solaris populates /dev) but devfs solves ALL of the problems in one> > fell swoop.> That isn't exactly right. As said above, it does not solve all problems.> Plus the naming problem is still there, it is just shifted from MAKEDEV> (yuck!) to either another configuration file (same yuck!) or the driverin my case, i have never needed devfsd. in an earlier email you managed to makea large action list for the kernel and devfsd to talk and update confs. looktoward quota and see how it interacts with on disk files and in core. start upand shut down are your disk factor. nothing else unless you choose to sync. idon't know of many people that lmbench startup and shutdown.> unless you prefer to live in a vacuum. Also, if something solves several> problems in one fell swoop _without_ adding strange klugdes and needing> extra machinery, it's an elegant solution (the conception behind Unix is ahow many iterations of the F00F bug solution did we run through? how many timeshas the way SMP been done in the kernel been changed, or VM, or VFS?"version 1" normally leads to "version 2". we would be silly to think that devfsis ultimately perfect and it never need be changed again. heck, look at thefirewalling. we have iptables now. in the last five years we have had threemajorly different and incompatible ways of dealing with firewalling. all of itis made easy for userland by scripts and whatnot, and in core is made moreextensible and easy to manipulate. the majority of people are probably satisfiedwith their permissions in /dev and won't use devfsd. for those who aren't likeyourself, there are options. permissions can be easily set in the kernel make aswell. just like cpu selection is or ethernet card is or etc. 
one can be courseabout it and have "open" and "paranoid" permission groups or can be fine grainedlike soundcard and say specify precise perm values. you can even have a rangeinbetween these.> fine example). If not, it's just exchanging one mess for another one. My> fear here is that devfs exchanges an acknowledged mess, which we know and> over time learned how to handle in a reasonable way; with a much larger> mess, one with unknown quirks that will have to be worked around. All for> no real gain.i compile new kernels several times a day for the hundreds of systems worldwidethat we manage. devfs has enabled me to save a few minutes here and therefrequently and not worry about /dev updates. over time, that translates to a lotof saved time. i've been using devfs for over a year now and i honestly saythat i don't worry about /dev and my troubles with adding devfs to the kernel areby large "patch -p1 < devfs...". that's a trouble i can sleep with most easily.> Yes, I did. But if the costs involved are smaller than the benefits, go for> it. If not, leave it alone. In this case, as no pressing need has surfaced,> and no clear benefits have been shown, leave it alone.many of my peers use devfs and none of us have issues to deal with. the costsinvolved to date are a small amount of time during the week for one man,Richard. for the work that he has done, my benefits mean saving time personallyand a significantly lower accumulated load for that one class of servers thatstats /dev/ a lot.for you the benefits aren't clear and for me the benefits are very clear. itsolves a variety of little issues. little issues compounded by numerous systemsmakes for a lot of frustration and time.we have a bunch of stuff in the kernel such as AX.25 and IPX because it benefitssomeone. i'm probably correct in that they don't benefit you. nor do theybenefit me. but i will staunchly support them just as i do devfs. | https://lkml.org/lkml/1999/10/9/3 | CC-MAIN-2017-30 | refinedweb | 1,335 | 74.19 |
Web App with Python
bluepaperbirds
・3 min read
You can do web development in Python. Make a web app or website with Python? It's easier than you think.
First pick a web development module. There are many like
Install the module and learn how to use it. There are a lot of modules and they all are a bit different.
Flask is the most commonly used. Django is also popular, but it's learning curve can be challenging.
Putting your python app on the web is easy, even no need to maintain servers.
Hello world demo (Twisted)
To get started with building your app, it's recommended that you use a virtual environment. This prevents package conflicts on your system.
First setup your virtual environment and then install one of the many Python web modules.
For this article I'll go with
twisted
pip3 install twisted
Twisted can be used to make web apps, but it supports many other network protocols like SMTP, POP3, IMAP, SSHv2, and DNS.
Create your program, app.py
from twisted.web import server, resource from twisted.internet import reactor, endpoints class Counter(resource.Resource): isLeaf = True numberRequests = 0 def render_GET(self, request): self.numberRequests += 1 request.setHeader(b"content-type", b"text/plain") content = u"Hello World #{}\n".format(self.numberRequests) return content.encode("ascii") endpoints.serverFromString(reactor, "tcp:8080").listen(server.Site(Counter())) reactor.run()
Then open localhost at port 8080 and you'll see the hello world message:
If you want to have your app online instead, you can follow this guide
Hello world demo (Flask)
If you want to use Flask instead, you can use the code below. It starts the server at port 8080 and will do exactly the same.
The difference between Twisted and Flask is, that Flask is designed for web apps only. It supports many things you may need like Templates and URL routing out of the box.
from flask import Flask app = Flask(__name__) @app.route('/') def index(): return 'Hello world' app.run(host='0.0.0.0', port=8080)
Open the browser at that url, and you'll see "hello world" appear. If you are new to Flask, this is a good Flask course.
Hello world in Bottle
Bottle is a simple WSGI micro web-framework for Python. It doesn't have any dependencies other than the Python Standard Library.
It has these features:
- Routing: map URLs to function
- Templates: template engine and support for mako, jinja2 and cheetah templates.
- Utilities: Convenient access to form data, file uploads, cookies, headers and other HTTP-related metadata.
- Server: Built-in HTTP development server and support for paste, fapws3, bjoern, gae, cherrypy or any other WSGI capable HTTP server.
from bottle import route, run @route('/hello') def hello(): return "Hello World!" run(host='localhost', port=8080, debug=True)
Then open localhost:8080 and url
/hello (there is a url route). See the decorator
@route('/hello')
that maps the url to the function
def hello(): return "Hello World!"
Personally I don't think the Bottle framework offers any advantages over the Flask framework, while Flask offers many advantages over Bottle. In short, go with Flask. Bottle is for tiny projects.
Join DEV Now
Open source
Free forever
Level up every day
🔥.
| https://practicaldev-herokuapp-com.global.ssl.fastly.net/bluepaperbirds/web-app-with-python-5chi | CC-MAIN-2020-24 | refinedweb | 542 | 67.86 |
Hey
It summer and I am messing around with a little project of mine.
I am building a program (maybe app later on) to remotely
handle my torrents (only free torrents!).
For example:
If I type this url ()
in the browser the movie gets in my client/starts downloading. I want this to happen by my program.
Code :
import java.io.IOException; import java.net.MalformedURLException; import java.net.URL; import java.net.URLConnection; public class Connection { private String ip; private String port; private String username; private String password; public Connection(String ip, String port, String username, String password) { this.ip = ip; this.port = port; this.username = username; this.password = password; connect(this.ip, this.port); } private void connect(String ip, String port) { try { URL myURL = new URL(""); URLConnection myURLConnection = myURL.openConnection(); myURLConnection.connect(); } catch (MalformedURLException e) { System.out.println("failed connection"); } catch (IOException e) { System.out.println("failed connection"); } } }
This is what I already have, i am sure it connects but it doesn't add the torrent to my program
Does somebody know what I need to add for it to work?
I use it for legal stuff only!
Thanks in advance! | http://www.javaprogrammingforums.com/%20java-theory-questions/17157-connecting-my-torrent-client-i-use-legal-stuff-only-printingthethread.html | CC-MAIN-2015-27 | refinedweb | 193 | 54.18 |
?
For the sake of clarity, let's consider the canonical script
import matplotlib
matplotlib.use('PS')
from pylab import *
plot([1,2,3])
savefig('test.ps')
show()
When run from the shell, it does what you want -- makes a PS with no
popup. It fails in ipython (pops up a window) because you have
already selected a backend and all pylab commands are directed to that
backend.
How to fix it?
* ipython invokes an external python process to run each script. Of
course you pay a performance hit here, and this would likely change
the meaning of the way run is meant to work (eg, are local ipython
shell vars available in a "run" script.
* provide better support for backend switching in matplotlib. Eg,
allowing you at any time to call matplotlib.use. Currently, this
only works before the import of pylab. It may be possible to write
a pylab.use that simply rebinds the 4 backend functions:
new_figure_manager, error_msg, draw_if_interactive, show. At the
end of a "run", you could simply do a
matplotlib.pylab.use(defaultBackend) to rebind. run could be
enhanced to support backend switching
run somescript.py -dPS
much like one can do from the shell.
You know more about python module reloading than I do. How does one
force a module to reload, eg if I wanted to set the rc 'backend'
param and then do, eg
rcParams['backend'] = 'PS'
from backends import new_figure_manager, error_msg, draw_if_interactive, show
to get new symbols?
There may be another way, but those two come to mind. I'll mull it
over.
> ps. Yes, John, I've finally started to use matplotlib
> for my own work. Brace yourself, I'm compiling a pretty
> hefty list of things to do. I hope you don't plan on
> sleeping much in the coming months
Well, I knew it was coming.... Stress tests are usually a good
thing. Plus, I'm sure you can't do anything to interrupt my sleep
that my 3 kids haven't already mastered!
JDH
Get-NetRoute
Updated: October 17, 2013
Applies To: Windows 8.1, Windows PowerShell 4.0, Windows Server 2012 R2
Get-NetRoute
Syntax
Parameter Set: ByName
Detailed Description
The Get-NetRoute cmdlet gets IP route information from the IP routing table.
Parameters
-AddressFamily<AddressFamily[]>
Specifies an IP address family. The cmdlet gets only IP routes that belong to the specified address family.
-InterfaceAlias<String[]>
Specifies an array of aliases of network interfaces. The cmdlet gets IP routes for the interfaces that have the aliases that you specify.
-InterfaceIndex<UInt32[]>
Specifies an array of indexes of network interfaces. The cmdlet gets IP routes for the interfaces located at the indexes that you specify.
-NextHop<String[]>
Specifies an array of next hop values. The cmdlet gets IP routes that have the next hop values that you specify. A value of 0.0.0.0 for IPv4 or :: for IPv6 indicates that the route is on the local subnet.
-PolicyStore<String>
Specifies a PolicyStore value. The cmdlet gets entries that have this PolicyStore value. To obtain a TimeSpan object, use the New-Timespan cmdlet. For more information, type
Get-Help New-TimeSpan.
-Protocol<Protocol[]>
Specifies an array of types of routing protocols. The cmdlet gets entries that have the values that you specify.
-RouteMetric<UInt16[]>
Specifies an array of integer route metrics for IP routes. The cmdlet gets entries that have the metrics that you specify. Lifetimes are expressed as TimeSpan objects; to obtain a TimeSpan object, use the New-Timespan cmdlet.
The Microsoft.Management.Infrastructure.CimInstance object is a wrapper class that displays Windows Management Instrumentation (WMI) objects. The path after the pound sign (#) provides the namespace and class name for the underlying WMI object.
Examples
Example 1: Get all routes
This command gets all IP routes.
Example 2: Get routes for the IPv6 address family
This command gets the routes that belong to the IPv6 address family.
Example 3: Get routes for a specified interface
This command gets the IP routes associated with the interface that has an index of 12.
Example 4: Get the next hop for the default route
This command gets the next hop for the default IP route.
Example 5: Get routes with the maximum valid lifetime
This command gets all IP routes, and then passes them to the Where-Object cmdlet by using the pipeline operator. The command selects those routes that have a valid lifetime of the maximum value.
Related topics
Find-NetRoute
New-NetRoute
Remove-NetRoute
Set-NetRoute
Get-NetAdapter
Get-NetIPInterface
Smart Holster: Hall Sensor and GPS Sensor for Geo Location
The following instructable describes how to connect your Intel Edison using the Grove GPS sensor, Hall sensor, LED, button, and the Base Shield. The data is then captured and uploaded to the cloud using Firebase.
Please note that the Grove GPS does not come standard in the Grove Starter Kit Plus.
Step 1: Hardware Needed:
- Intel Edison chip (Included in Grove Starter Kit Plus)
- Arduino Expansion board (Included in Grove Starter Kit Plus)
- Base Shield (Included in Grove Starter Kit Plus)
- Grove GPS sensor
Step 2: Software Requirements
-Eclipse C++ ()
- Drivers (list for various OS...
-Tool Lite (view installation for OS...
-Intel XDK
By now you should be familiar with your Arduino Expansion board parts.
To refresh your memory please visit
Step 3: Assemble of Board
If this is your first time using your Arduino Expansion board and Intel Edison, please follow the 'Assembling the Intel® Edison board with the Arduino expansion board' tutorial to get your board assembled and ready for use. -Assembling the Intel® Edison board with the Arduino expansion board...
-Driver installation and updating firmware (enter link... -Updating firmware with ToolLite...
-Flashing for mac... Once you successfully updated your firmware in a terminal follow the next steps:
~# screen /dev/tty.usbserial 115200 -L
~# root (no password)
~# configure_edison --setup
You will now be prompted to name your Edison. Please ensure that the name is unique, as there might be a conflict if the name given to the board is the same as that of another user. Make sure that the machine you are using to connect to your Edison is on the same network (Wi-Fi) as your Edison. If you run into any issues with your Wi-Fi and need to reconfigure it, please follow the next steps:
~# configure_edison --wifi
To verify that Wi-Fi connectivity is working, try pinging a familiar web page such as google.com. Also, from another terminal outside your Edison terminal, try pinging the Edison IP. The IP can be found by running ifconfig from within your Edison terminal.
~# ping <ip>
Step 4: Creating Your Project in Eclipse C++
1. Launch your Eclipse C++ and Navigate to “File > New > Intel IoT C/C++ project”
2. Enter a name for your project (note no spaces are allowed).
Step 5: Getting the Code Ready
Below is a snippet of the code used. Please note that there is also an LED and a button being used in the code, along with integration with Firebase.
Full code for the GPS, Hall sensor, and LEDs is at
#include "mraa.hpp"
#include "UdpClient.hpp"
#include <grove.h>
#include <signal.h>
#include <ublox6.h>
#include <a110x.h>
#include <stdio.h>
#include <curl/curl.h>
#include <iostream>
#include <sstream>
#include<string>
using namespace upm; using namespace std;
const size_t bufferLength = 256;
#define NODE "localhost"
#define SERVICE "41234"
#define COMP_NAME "temperature"
int main() {
// Create the Grove LED object using GPIO pin 4
upm::GroveLed* ledRed = new upm::GroveLed(4);
upm::GroveLed* ledGreen = new upm::GroveLed(3);
// create an analog input object from MRAA using pin A0
mraa::Aio* a_pin = new mraa::Aio(0);
// Create the button object using GPIO pin 8
upm::GroveButton* button = new upm::GroveButton(8);
// Instantiate a Ublox6 GPS device on uart 0.
upm::Ublox6* nmea = new upm::Ublox6(0);
int gunDrawn = 100;
int magFieldAvg = 0;
int magFieldCurrent = 0;
int magField[10];
int tempIndex = 0;
int numSamples = 2;
string tempData;
// check that we are running on Galileo or Edison
mraa_platform_t platform = mraa_get_platform_type();
if ((platform != MRAA_INTEL_GALILEO_GEN1) && (platform != MRAA_INTEL_GALILEO_GEN2) && (platform != MRAA_INTEL_EDISON_FAB_C))
{ std::cerr << "Unsupported platform, exiting" << std::endl;
return MRAA_ERROR_INVALID_PLATFORM; }
// Read in hall sensor data
if (a_pin == NULL)
{ std::cerr << "Can't create mraa::Aio object, exiting" << std::endl;
return MRAA_ERROR_UNSPECIFIED; }
// GPS Setup
// make sure port is initialized properly. 9600 baud is the default.
if (!nmea->setupTty(B9600))
{ cerr << "Failed to setup tty port parameters" << endl;
return 1; }
// Curl setup
//followed this curl example:
CURL *curl;
CURLcode res;
// In windows, this will init the winsock stuff
curl_global_init(CURL_GLOBAL_ALL);
// get a curl handle
curl = curl_easy_init();
// First set the URL that is about to receive our POST. This URL can
// just as well be a https:// URL if that is what should receive the
// data.
curl_easy_setopt(curl, CURLOPT_URL, "");
//this is only intended to collect NMEA data and not process it
// should see output on console
char nmeaBuffer[bufferLength];
while(1)
{ uint16_t pin_value = a_pin->read();
magFieldAvg = 0;
magField[magFieldCurrent++] = pin_value;
if (magFieldCurrent >= numSamples)
{ magFieldCurrent = 0; }
for (int i = 0; i < numSamples; i++)
{magFieldAvg += magField[i]; }
magFieldAvg /= numSamples;
sleep(1);
if(magFieldAvg < gunDrawn)
{ ledRed->off();
ledGreen->on(); }
else {
if (nmea->dataAvailable()) {
int rv = nmea->readData(nmeaBuffer, bufferLength);
if (rv > 0) {
write(1, nmeaBuffer, rv);
std::cout << nmeaBuffer << std::endl; }
else {
// some sort of read error occurred
cerr << "Port read error." << endl;
break;
}
curl_easy_setopt(curl, CURLOPT_POSTFIELDS, "{\"gunDrawn\":\"true\"}");
// Perform the request, res will get the return code
res = curl_easy_perform(curl);
// Check for errors
if(res != CURLE_OK)
fprintf(stderr, "curl_easy_perform() failed: %s\n", curl_easy_strerror(res));
}
// add some LEDs
ledRed->on();
ledGreen->off(); }
if (button->value() == 1)
{ break; }
}
// Delete the Grove LED object
ledGreen->off();
ledRed->off();
delete ledGreen;
delete ledRed;
delete a_pin;
delete button;
delete nmea;
return MRAA_SUCCESS; }
Step 6: Getting the Hardware and Sensors Connected
The GPS sensor:
The GPS sensor is linked to the UART input of the Base Shield (included in the Grove Starter Kit Plus). Once connected, you should see the green light on the sensor turn on; this indicates it is properly connected and receiving power.
The Hall sensor:
The Hall sensor is connected to A0 input of the Base Shield.
Take it a step further:
In addition, if you are feeling ambitious the LED sensor is connected to D4 and D3 (one for red and one for green) and the button sensor is connected to D8.
Step 7: Seeing the GPS Data in Action
Note that the GPS data is in NMEA format.
Once you have the sensors connected to the board, you are ready to build and deploy the code. Once you deploy the code, you should see the following in your console in Eclipse.
Step 8: Getting Intel IOT C++ to CURL JSON to Firebase
While Intel Edison provides instructions for having the Intel IOT Edison perform REST API calls with the python and javascript library, the documentation for C++ is lacking. This example code () along with the instructions below describes how to get the Intel IOT Edison board’s C++ environment to perform CURL calls.
1. Include the .h files at the top of your cpp file:
#include <stdio.h>
#include <curl/curl.h>
2. Ensure curl is linked in the project properties setup.
In order to function, the cURL library must be linked. If you get compiler errors, do the following to add the cURL library: right-click on the project and select Properties. Then navigate to
C/C++ Build -->Settings --> Cross G++ Linker --> Libraries --> Click the green plus button,and add the “curl” library
3. Set up the code. This will be in your main() cpp code. Note that the highlighted yellow dummy link needs to be replaced with your own Firebase URL.
//CURL Setup
CURL *curl;
CURLcode res;
// In windows, this will init the winsock stuff
curl_global_init(CURL_GLOBAL_ALL);
// get a curl handle
curl = curl_easy_init();
// First set the URL that is about to receive our POST. This URL can
// just as well be a https:// URL if that is what should receive the
// data.
curl_easy_setopt(curl, CURLOPT_URL, "");
4. cURL Call. In the infinite loop portion of the code, perform the CURL
call when the appropriate trigger occurs. You will need to replace the JSON data highlighted in yellow with your own data.
if (button->value()==1){
std::cout << button->name() << " value is " << button->value() << std::endl;
// Now specify the POST data
curl_easy_setopt(curl, CURLOPT_POSTFIELDS, "{\"lat\":23.343,\"long\":234.45345}");
// Perform the request, res will get the return code
res = curl_easy_perform(curl);
// Check for errors
std::cout << "curl output: " << res << std::endl;
if(res != CURLE_OK)
fprintf(stderr, "curl_easy_perform() failed: %s\n", curl_easy_strerror(res)); }
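For comparison, the same JSON POST can be sketched in Python with only the standard library; this mirrors what the C++ code passes to CURLOPT_POSTFIELDS (the Firebase URL here is a hypothetical placeholder, substitute your own project URL):

```python
import json
from urllib.request import Request

def build_firebase_post(url, payload):
    # Serialize the payload the same way the C++ code hands a JSON
    # string to CURLOPT_POSTFIELDS.
    data = json.dumps(payload).encode("utf-8")
    return Request(url, data=data,
                   headers={"Content-Type": "application/json"},
                   method="POST")

# Hypothetical Firebase endpoint for illustration only.
req = build_firebase_post("https://example-project.firebaseio.com/holster.json",
                          {"lat": 23.343, "long": 234.45345})
print(req.get_method())   # POST
print(json.loads(req.data))
```

Calling urllib.request.urlopen(req) would then send the request, which is useful for testing the endpoint before wiring up the device code.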
Step 9: Getting All the Parts Together in a Prototype
We used the hall sensor, GPS sensor, LED, and button to prototype a smart safety holster. The idea was to trigger when an object is removed and start capturing the GPS location of where the object was removed. The data would then be uploaded to the cloud.
Once the object is placed back and the hall sensor detects that the object is back in position, the GPS data stops being collected. This can be applied to any object to which the Hall effect applies. The GPS and LED functionality can be reused for other projects.
Create more AI possibilities with Grove PiHAT and NVIDIA Jetson Nano
If you want to use Grove sensors with Jetson Nano, grab the grove.py Python library and get your sensors up and running in minutes!
NVIDIA Jetson Nano Developer Kit is an AI computer for makers, learners, and developers that delivers the power of modern AI in a small, easy-to-use platform. At Seeed Studio, not only can you get a Jetson Nano quickly, we also provide the Grove Base HAT for Raspberry Pi and Base HAT for Raspberry Pi Zero to help you create more AI possibilities.
If you want to use Grove sensors with Jetson Nano, the best way is to grab the grove.py Python library and get your sensors up and running in minutes! Currently there are more than 20 Grove modules supported on Jetson Nano, and we will keep adding more.
What is Grove?
There are already more than 280 Grove modules and each one comes with clear documentation and demo code to help you get started quickly.
You can connect Grove modules using Base HAT for Raspberry Pi or Raspberry Pi Zero with Jetson Nano.
For details about how to install the Grove.py library:
Grove Installation Guide
(If you are familiar with the Linux operation system, you can skip this guide and directly refer to the link provided above.)
You only need to perform below steps in a brand new Jetson Nano official Image (click here for Jetson Nano Quick Start Guide), make sure your network works well.
sudo apt-get update
sudo apt-get install curl
Now you can start the installation of grove.py
Step1:
curl -sL | sudo bash -s -
Please pay attention to the final execution result when installing. If FAILED is displayed and you are unable to solve the failure by yourself, please submit a new issue in the current repository, describing the issue in detail.
Step2:
git clone
cd grove.py
# Python2
sudo pip install .
# Python3
sudo pip3 install .
Run Grove on Jetson Nano:
After a successful installation, you can use the Grove modules based on the support list of Jetson Nano.
1. First, you need a Grove Base Hat for Raspberry Pi :
The pin header of the Jetson Nano is compatible with the Pi; however, the functions are not completely compatible. Please refer to the table provided by Seeed Studio for the compatibility of specific functions.
The Grove Base Hat is connected like this:
Please pay attention to the pin alignment and do not insert it in the wrong position; otherwise it may damage the Grove Base Hat, or even the Jetson Nano board.
2. Plug in the Grove module.
If it is an I2C module, you can run the following on the Jetson Nano terminal:
sudo i2cdetect -r -y 1
Check whether the corresponding I2C address can be scanned. If the scan fails, check the wiring.
3. Execute the corresponding python script command.
Note that executing the Python script requires root privileges. For example, when driving an OLED Display 1.12 inch (V1.0) module, you need to do this:
cd grove.py
sudo python grove/display/sh1107g.py
The grove.py directory is the git repository that was cloned earlier during installation. The OLED will usually display some program-predefined content.
Demo and Software support
We made a demo kit and provided software support for the demo kit.
The Groves included in the kit:
- OLED Display 1.12 inch – v1.0
- 12 key capacitive i2c touch sensor(MPR121)
- 3-Axis digital accelerometer (ADXL372)
- UV sensor VEML6070
- Thumb joystick v1.1
You can download the software here:
Connect to Coral as well!
The grove.py Python library for Grove devices also supports the Coral Dev Board now.
This is a blinking button demo with the Coral Dev board. Code can be found here.
import time
from grove.gpio import GPIO

led = GPIO(12, GPIO.OUT)
button = GPIO(22, GPIO.IN)

while True:
    if button.read():
        led.write(1)
    else:
        led.write(0)
    time.sleep(0.1)
For any new product ideas, please feel free to let us know what you want to see in the forum. We will listen carefully and take action!
mechanize — FAQ
Which version of Python do I need?
Python 2.4, 2.5, 2.6, or 2.7. Python 3 is not yet supported.
Does mechanize depend on BeautifulSoup?
No. mechanize offers a few classes that make use of BeautifulSoup, but these classes are not required to use mechanize. mechanize bundles BeautifulSoup version 2, so that module is no longer required. A future version of mechanize will support BeautifulSoup version 3, at which point mechanize will likely no longer bundle the module.
Does mechanize depend on ClientForm?
No, ClientForm is now part of mechanize.
Which license?
mechanize is dual-licensed: you may pick either the BSD license, or the ZPL 2.1 (both are included in the distribution).
Usage
I’m not getting the HTML page I expected to see.
Browserdoesn’t have all of the forms/links I see in the HTML. Why not?
Perhaps the default parser can’t cope with invalid HTML. Try using the included BeautifulSoup 2 parser instead:
import mechanize
browser = mechanize.Browser(factory=mechanize.RobustFactory())
browser.open("")
print browser.forms
Alternatively, you can process the HTML (and headers) arbitrarily:
browser = mechanize.Browser()
browser.open("")
html = browser.response().get_data().replace("<br/>", "<br />")
response = mechanize.make_response(
html, [("Content-Type", "text/html")],
"", 200, "OK")
browser.set_response(response)
Is JavaScript supported?
No, sorry. See FAQs below.
My HTTP response data is truncated.
mechanize.Browser's response objects support the .seek() method, and can still be used after .close() has been called. Response data is not fetched until it is needed, so navigation away from a URL before fetching all of the response will truncate it. Call response.get_data() before navigation if you don't want that to happen.
I’m sure this page is HTML, why doesn’t mechanize treat it as HTML?
Why don’t timeouts work for me?
Timeouts are ignored with versions of Python earlier than 2.6. Timeouts do not apply to DNS lookups.
Is there any example code?
Look in the examples/ directory. Note that the examples on the forms page are executable as-is. Contributions of example code would be very welcome!
Forms
Doesn’t the standard Python library module, cgi, do this?
No: the cgi module does the server end of the job. It doesn’t know how to parse or fill in a form or how to send it back to the server.
How do I figure out what control names and values to use?
print form is usually all you need. In your code, things like the HTMLForm.items attribute of HTMLForm instances can be useful to inspect forms at runtime. Note that it’s possible to use item labels instead of item names, which can be useful: use the by_label arguments to the various methods, and the .get_value_by_label() / .set_value_by_label() methods on ListControl.
What do those '*' characters mean in the string representations of list controls?
A '*' next to an item means that item is selected.
What do those parentheses (round brackets) mean in the string representations of list controls?
Parentheses around an item, as in (foo), mean that item is disabled.
Why doesn’t <some control> turn up in the data returned by .click*() when that control has a non-None value?
Either the control is disabled, or it is not successful for some other reason. ‘Successful’ (see HTML 4 specification) means that the control will cause data to get sent to the server.
Why does mechanize not follow the HTML 4.0 / RFC 1866 standards for RADIO and multiple-selection SELECT controls?
Because by default, it follows browser behaviour when setting the initially-selected items in list controls that have no items explicitly selected in the HTML. Use the select_default argument to ParseResponse if you want to follow the RFC 1866 rules instead. Note that browser behaviour violates the HTML 4.01 specification in the case of RADIO controls.
Why does .click()ing on a button not work for me?
Clicking on a RESET button doesn’t do anything, by design - this is a library for web automation, not an interactive browser. Even in an interactive browser, clicking on RESET sends nothing to the server, so there is little point in having .click() do anything special here.
Clicking on a BUTTON TYPE=BUTTON doesn’t do anything either, also by design. This time, the reason is that BUTTON is only in the HTML standard so that one can attach JavaScript callbacks to its events. Their execution may result in information getting sent back to the server. mechanize, however, knows nothing about these callbacks, so it can’t do anything useful with a click on a BUTTON whose type is BUTTON.
Generally, JavaScript may be messing things up in all kinds of ways. See the answer to the next question.
How do I change INPUT TYPE=HIDDEN field values (for example, to emulate the effect of JavaScript code)?
As with any control, set the control’s readonly attribute to false.
form.find_control("foo").readonly = False # allow changing .value of control foo
form.set_all_readonly(False) # allow changing the .value of all controls
I’m having trouble debugging my code.
See here for few relevant tips.
I have a control containing a list of integers. How do I select the one whose value is nearest to the one I want?
import bisect
def closest_int_value(form, ctrl_name, value):
    values = map(int, [item.name for item in form.find_control(ctrl_name).items])
    return str(values[bisect.bisect(values, value) - 1])
form["distance"] = [closest_int_value(form, "distance", 23)]
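Note that the snippet above is written for Python 2, where map() returns a list. Under Python 3 the same idea needs a list or sorted() call; a self-contained sketch with stand-in form objects (the Item/Control/Form classes here are illustrative stubs, not mechanize's real classes):

```python
import bisect

# Minimal stand-ins for mechanize's form/control objects, for testing only.
class Item:
    def __init__(self, name):
        self.name = name

class Control:
    def __init__(self, names):
        self.items = [Item(n) for n in names]

class Form:
    def __init__(self, controls):
        self._controls = controls
    def find_control(self, name):
        return self._controls[name]

def closest_int_value(form, ctrl_name, value):
    # sorted() both materializes the generator and guards against
    # unsorted item lists, which bisect requires.
    values = sorted(int(item.name) for item in form.find_control(ctrl_name).items)
    return str(values[bisect.bisect(values, value) - 1])

form = Form({"distance": Control(["10", "20", "30"])})
print(closest_int_value(form, "distance", 23))  # -> 20
```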
General
I want to see what my web browser is doing, but standard network sniffers like wireshark or netcat (nc) don’t work for HTTPS. How do I sniff HTTPS traffic?
Three good options:
Mozilla plugin: LiveHTTPHeaders.
ieHTTPHeaders does the same for MSIE.
Use lynx -trace, and filter out the junk with a script.
JavaScript is messing up my web-scraping. What do I do?
JavaScript is used in web pages for many purposes — for example: creating content that was not present in the page at load time, submitting or filling in parts of forms in response to user actions, setting cookies, etc. mechanize does not provide any support for JavaScript.
If you come across this in a page you want to automate, you have four options. Here they are, roughly in order of simplicity.
Figure out what the JavaScript is doing and emulate it in your Python code: for example, by manually adding cookies to your CookieJar instance, calling methods on HTMLForms, calling urlopen, etc. See above re forms.
Use Java’s HtmlUnit or HttpUnit from Jython, since they know some JavaScript.
Instead of using mechanize, automate a browser instead. For example use MS Internet Explorer via its COM automation interfaces, using the Python for Windows extensions, aka pywin32, aka win32all (e.g. simple function, pamie; pywin32 chapter from the O’Reilly book) or ctypes (example). This kind of thing may also come in useful on Windows for cases where the automation API is lacking. For Firefox, there is PyXPCOM.
Get ambitious and automatically delegate the work to an appropriate interpreter (Mozilla’s JavaScript interpreter, for instance). This is what HtmlUnit and httpunit do. I did a spike along these lines some years ago, but I think it would (still) be quite a lot of work to do well.
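The first option often comes down to inserting by hand a cookie that page JavaScript would otherwise have set. A sketch using the standard library's http.cookiejar, whose interface mechanize's CookieJar mirrors (the cookie name, value, and domain here are illustrative):

```python
from http.cookiejar import Cookie, CookieJar

jar = CookieJar()
# Build a cookie that script on the page would otherwise have set.
cookie = Cookie(
    version=0, name="session", value="abc123",
    port=None, port_specified=False,
    domain="example.com", domain_specified=True, domain_initial_dot=False,
    path="/", path_specified=True,
    secure=False, expires=None, discard=True,
    comment=None, comment_url=None, rest={},
)
jar.set_cookie(cookie)
print(len(jar))  # 1
```

A jar populated this way can then be installed on the browser/opener so the hand-made cookie is sent with subsequent requests.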
Misc links
The following libraries can be useful for dealing with bad HTML: lxml.html, html5lib, BeautifulSoup 3, mxTidy and mu-Tidylib.
Selenium: In-browser web functional testing. If you need to test websites against real browsers, this is a standard way to do it.
O’Reilly book: Spidering Hacks. Very Perl-oriented.
Standard extensions for web development with Firefox, which are also handy if you’re scraping the web: Web Developer (amongst other things, this can display HTML form information), Firebug.
Similar functionality for IE6 and IE7: Internet Explorer Developer Toolbar (IE8 comes with something equivalent built-in, as does Google Chrome).
Open source functional testing tools.
A HOWTO on web scraping from Dave Kuhlman.
Will any of this code make its way into the Python standard library?
The request / response processing extensions to urllib2 from mechanize have been merged into urllib2 for Python 2.4. The cookie processing has been added, as module cookielib. There are other features that would be appropriate additions to urllib2, but since Python 2 is heading into bugfix-only mode, and I’m not using Python 3, they’re unlikely to be added.
Where can I find out about the relevant standards?
- Draft HTML 5 Specification
- RFC 1866 - the HTML 2.0 standard (you don’t want to read this)
- RFC 1867 - Form-based file upload
- RFC 2616 - HTTP 1.1 Specification
I prefer questions and comments to be sent to the mailing list rather than direct to me.
John J. Lee, October 2010.
Python is an open-source programming language used for a variety of applications today. There are so many remarkable modules and functions within Python, and if you’re a web developer or data scientist, learning these aspects is mandatory.
Let’s get started right away.
What is Datetime in Python?
In the Python programming language, datetime is a single module, not two separate data types. You can import the datetime module and use it to work with dates and times. datetime is a built-in Python module, so developers don't need to install it separately; it's right there.
With the datetime module, you get to work with different classes that work perfectly with date and time. Within these classes, you can get a range of functions that deal with dates, times, and different time intervals.
Always remember that when you’re working on Python, date, and datetime are two separate objects. If you modify them in any way, you’re modifying the objects; not the timestamps or strings.
Not clear? Let’s look into some examples in the next section to ease things out a bit.
Examples of Datetime in Python
Here are a few examples to help in the understanding of datetime in Python properly. Let’s begin.
Example 1
Here, this example shows you how can get the current date using datetime in Python:
# importing the datetime class
from datetime import datetime
#calling the now() function of datetime class
Now = datetime.now()
print("Now the date and time are", Now)
The output is as follows:
Example 2
Here is the second example. The aim is to count the difference between two different datetimes.
#Importing the datetime class
from datetime import datetime
#Initializing the first date and time
time1 = datetime(year=2020, month=5, day=9, hour=4, minute=33, second=6)
#Initializing the second date and time
time2 = datetime(year=2021, month=7, day=4, hour=7, minute=55, second=4)
#Calculating and printing the time difference between two given date and times
time_difference = time2 - time1
print("The time difference between the two times is", time_difference)
And the output is:
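The subtraction above returns a timedelta object, whose components you can inspect directly (a quick sketch using the same two datetimes):

```python
from datetime import datetime

time1 = datetime(2020, 5, 9, 4, 33, 6)
time2 = datetime(2021, 7, 4, 7, 55, 4)
difference = time2 - time1

print(difference.days)             # whole days in the interval
print(difference.seconds)          # leftover seconds within the final day
print(difference.total_seconds())  # the whole interval expressed in seconds
```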
What is the Import Datetime Statement?
If you’re planning to import datetime in Python, do remember this one thing - it is pretty simple. Say, you want to import a particular class like date, time, datetime, timedelta, etc. from the datetime module, here’s how it can be done:
# importing the date class
from datetime import date
# importing the time class
from datetime import time
# importing the datetime class
from datetime import datetime
What are the Commonly Used Classes in the Datetime Module?
There are 6 main classes when it comes to datetime in Python. Keep reading to find out more about them in this section:
Date
By date, we mean the conventional concept of dates with effect to the Gregorian Calander. Very naturally, the date class in Python is explained with the attributes like day, month, and year.
Time
The next class that is discussed here is called time. This class is independent of any day. Here it is generally assumed that each day has 24*60*60 seconds. The attributes of the time class in Python include minute, second, microsecond, hour, and tzinfo.
Wondering what tzinfo even is? We’ll get to that in this section itself, shortly.
Datetime
datetime in Python is the combination of a date and a time. The attributes of this class combine those of the date and time classes: day, month, year, minute, second, microsecond, hour, and tzinfo.
Timedelta
You can think of timedelta as the difference between two date, time, or datetime instances, down to microsecond resolution.
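A brief sketch of constructing a timedelta and shifting a date with it:

```python
from datetime import date, timedelta

start = date(2021, 7, 4)
shift = timedelta(days=10)

print(start + shift)   # 2021-07-14
print(start - shift)   # 2021-06-24
```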
Tzinfo
tzinfo is an attribute of time and datetime objects. It carries timezone-related information.
Timezone
Finally, it’s time to learn about the timezone class. This class implements the tzinfo abstract base class as a fixed offset from UTC. It was added in Python 3.2.
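A minimal sketch of the timezone class as a fixed UTC offset (here +09:00, the year-round offset of Japan Standard Time):

```python
from datetime import datetime, timedelta, timezone

utc_moment = datetime(2021, 7, 4, 12, 0, tzinfo=timezone.utc)
jst_like = timezone(timedelta(hours=9))  # fixed +09:00 offset

print(utc_moment.astimezone(jst_like))  # 2021-07-04 21:00:00+09:00
```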
What Are the Various Functions Available Within Date Class?
Hopefully, by now, you have developed a distinctive idea on the concept of datetime in Python and date class. So, now the question is, do you know about the different functions available within the date class?
Let’s see this in detail.
When any particular object within the date class is embodied, the date format is represented in this way - YYYY-MM-DD. Hence, a syntax of this particular class requires three specific arguments viz. year, month, and day.
Check the following examples:
Example 1: Date Object to Represent a Date
Your arguments should be in the range mentioned below:
- Year must be between MINYEAR and MAXYEAR
- The month should be between 1 and 12
- The day should be between 1 and the number of days in the given month and year
# Explaining date class
# importing the date class
from datetime import date
# initializing syntax with arguments in the format year, month, date
Date = date(2021, 7, 4)
print("Date is", Date)
Here’s the output:
Please note that whenever there is any anomaly while passing any argument, Python will raise ValueError.
Example: There are 31 days in May (the 5th month), so an argument like date(2021, 5, 32) raises an error. Also, the arguments should be integers only; Python raises TypeError for a string or float argument.
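The two error cases described above can be demonstrated directly (a quick sketch):

```python
from datetime import date

try:
    date(2021, 5, 32)       # May has only 31 days
except ValueError as err:
    caught_value_error = str(err)

try:
    date(2021, "5", 4)      # month given as a string
except TypeError as err:
    caught_type_error = str(err)

print(caught_value_error)
print(caught_type_error)
```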
Example 2: Get Current Date
While working with datetime in Python, you can create a date object that contains the current date with the help of a classmethod named today(). You can also print components like the year, month, and day of today separately; in that case, you'll need to access each attribute individually.
# To print current date
# importing the date class
from datetime import date
# calling the today() function of date class
Today = date.today()
print("Today's date is", Today)
print("Today’s components are", Today.year, Today.month, Today.day)
Here’s the output:
Example 3: Get Date From a Timestamp
Did you know that creating date objects from a timestamp is also possible? Think of a Unix timestamp: it is basically the number of seconds between a specific date and January 1, 1970, at UTC. Converting a timestamp is also possible, but in that case, you have to use the fromtimestamp() method.
# To get a date from a timestamp
# importing the date class
from datetime import date
Timestamp = date.fromtimestamp(1000000000)
print("Date is", Timestamp)
Here’s the output:
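The timestamp definition above can be checked directly: second 0 corresponds to January 1, 1970 at UTC (a quick sketch):

```python
from datetime import date, datetime, timezone

# Interpreting timestamp 0 in UTC avoids any local-timezone ambiguity.
epoch_day = datetime.fromtimestamp(0, tz=timezone.utc).date()
print(epoch_day)  # 1970-01-01
```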
How Can You Handle Timezones in Python?
Let’s say that you’re working on a technical project at the moment, and you have to display dates in the timezone of your clients. Handling timezones by yourself can be a big deal. When working with datetime in Python, why not use the third-party pytz module?
See the below examples for more clarity:
#To get the local time of specific place
from datetime import datetime
#Importing the third-party module pytz
import pytz
#Calling local time
Local_datetime = datetime.now()
print("Local date is", Local_datetime)
#Calling local time of Toronto
Timezone_Toronto = pytz.timezone('America/Toronto')
Local_datetime_Toronto = datetime.now(Timezone_Toronto)
print("The date and time in Toronto is",Local_datetime_Toronto)
#Calling local time of Tokyo
Timezone_Tokyo = pytz.timezone('Asia/Tokyo')
Local_datetime_Tokyo = datetime.now(Timezone_Tokyo)
print("The date and time in Tokyo is",Local_datetime_Tokyo)
Here’s the output:
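On Python 3.9 and later, the standard-library zoneinfo module can do the same job without a third-party dependency (on systems where the tz database is available):

```python
from datetime import datetime, timezone
from zoneinfo import ZoneInfo

utc_dt = datetime(2021, 1, 1, 0, 0, tzinfo=timezone.utc)
tokyo_dt = utc_dt.astimezone(ZoneInfo("Asia/Tokyo"))
print("The date and time in Tokyo is", tokyo_dt)  # 09:00 JST on Jan 1
```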
Looking forward to making a move to the programming field? Take up the Python Training Course and begin your career as a professional Python programmer.
Conclusion
With this, you have come to the end of this ‘datetime in Python’ tutorial. Python is today one of the most widely used languages, with widespread applications in cutting-edge areas like data science and analytics. If you are looking to build a career in data science, Python is an essential skill you’d have to possess. Simplilearn’s Data Science with Python Certification Course is an excellent way for you to acquire all the skills you need to embark on the journey to becoming a data scientist. With 68 hours of applied learning sessions featuring 4 real-life projects to practice and perfect your skills, this program will offer you work-ready training in the fundamentals of data analysis, data visualization, web scraping, machine learning, and natural language processing. Do explore the course at the earliest and get started.
We hope you found this ‘datetime in Python’ tutorial useful. Check out our next tutorial on random numbers in Python. If you still have any questions on the topic, don’t hesitate to let us know in the comments section below. Our team of SMEs will review them and respond at the earliest.
Dash Button Santa with Arduino MKR1000
Send information to Santa Claus about the status of the gift request.
Things used in this project
Story
Introduction
How many times have you asked Santa to bring you an Arduino kit, a 3D printer or a set of tools and only brought you a pair of socks? Well, this is going to end.
Santa Claus has hired us to run a project based on IoT and the Arduino MKR1000, which will help him have more control over where to deliver the gifts.
The first thing you have to do is be good: do the dishes, help at home and the elderly ladies, do the shopping and, of course, send the letter to Santa.
Once you have fulfilled all the requirements, you are ready to press the magic button, which will send to Santa’s Twitter (@dashbuttonsanta) the coordinates where the gifts have to be delivered.
In addition, an email will be sent to the department created for this project, the Dash Button Department.
Finally, once Santa Claus leaves the gifts in your house, he will confirm the delivery on his Twitter account, and you will be able to see the updated map on this web page.
This way Santa will never forget to stop by your house and you will know where to take your Makers gifts.
General operation of the system
The system will work with 4 different states depending on the situation of the Dash Button Santa.
Status 1: Disabled
In this state no notification has been sent to Santa from the location of where we want to leave the gifts.
The first thing you have to do is be good: do the dishes, help at home and the elderly ladies, do the shopping and, of course, send the letter to Santa.
Then, you can send the location to Santa to leave the gifts in your house. Pressing the button will move to the next state.
Status 2: Location sent
When you enter this state, a Tweet is sent to Santa Twitter (@dashbuttonsanta) and an email to the account [email protected]
The Dash Button Santa will remain in this state until Christmas Day. Even if you press the button it will not do anything.
When it is December 25th, the lights will start flashing. This means that we are ready to receive Santa. Only he can press the button again to change to the next state.
Status 3: Gifts delivered
Santa will press the button and confirm that the gifts have been delivered.
When you wake up in the morning you can check whether or not it has happened because the Dash Button Santa will be green.
System Architecture
The architecture is very simple. We rely on different services, all of them free of charge.
Of course the central axis of the project revolves around Arduino MKR1000. It can be divided into 4 large blocks:
- Location finding
- Save information in the cloud
- Post to Twitter and send an email
- The circuit
Location finding
Dash Button Santa goes with you. You don’t know in advance where you are going to be on Christmas Day. Use Dash Button Santa to tell Santa where you are.
It listens for the WiFi networks around and gets a precise location using the Google Maps Geolocation API.
The WiFi101 library, as it stands, does not allow getting a list of access point MAC addresses (BSSIDs). We needed to modify the library in order to get a detailed list of the WiFi networks in the neighborhood.
Store information in the cloud
After testing several services and platforms in the cloud for IoT, the simplest we have found to implement this project has been Firebase.
This service has a simple database based on JSON and accessible through its API. By simply making a PUT request with a JSON, you can generate your own data structure on the fly.
You need a Google account to access the free (but limited) tier they offer.
The JSON that we have used is the following:
{ "persons" : { "A9:D8:F5:05:F0:F8" : { "lat" : 38.3685, "lon" : -0.4219, "prec" : 100999, "status" : "2" } } }
Everything hangs from persons. As a unique identifier we have used the MAC address of the Arduino MKR1000.
Then we send the latitude, longitude, precision and status of the button.
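The same payload can be prototyped host-side before touching the firmware; a rough Python sketch (the MAC and values are examples, and the Firebase URL would be your own project’s):

```python
import json

mac = "A9:D8:F5:05:F0:F8"  # unique id: the MKR1000's MAC address

# Same fields as the JSON structure above
payload = {"lat": 38.3685, "lon": -0.4219, "prec": 100999, "status": "2"}

# A PUT to https://<your-project>.firebaseio.com/persons/<mac>.json
# with this body creates the structure on the fly.
body = json.dumps(payload)
print(body)
```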
The goal of storing location information is to be able to track Dash Button Santa around the world.
On a web page we will be able to see all connected Dash Buttons. This page is made with jQuery, which accesses the Firebase API to obtain the information.
It consists of two files, one .html and one .js. Below you can see the code.
<html>
<head>
  <title>Dash Button Tracking Santa Claus</title>
  <style type="text/css">
    html, body { height: 100%; margin: 0; padding: 0; }
    #map { height: 100%; }
  </style>
  <meta name="robots" content="noindex">
</head>
<body>
  <!--<ul id="costumers" class="list-group"></ul>-->
  <div id="map"></div>
  <!-- jQuery (necessary for Bootstrap's JavaScript plugins) -->
  <script src=""></script>
  <!-- Latest compiled and minified Bootstrap -->
  <script src=""></script>
  <!-- Include Firebase Library -->
  <script src=''></script>
  <!-- Tracking Store JavaScript -->
  <script src="script.js"></script>
  <!-- API Google Maps -->
  <script async defer></script>
</body>
</html>
And the JavaScript file.
// Create a firebase reference
var dbRef = new Firebase('');
var costumersRef = dbRef.child('persons');
var markers = {};

// load persons
costumersRef.on("child_added", function (snap) {
  // Print to map
  addNewPerson(snap.val().lat, snap.val().lon, snap.val().status, snap.key());
});

// change persons
costumersRef.on("child_changed", function (snap) {
  changePerson(snap.val().lat, snap.val().lon, snap.val().status, snap.key());
});

/******** GOOGLE MAPS *********/
var map;

function initMap() {
  // Center map
  var myLatLng = {lat: 38.392101, lng: -0.525467};
  map = new google.maps.Map(document.getElementById('map'), {
    center: myLatLng,
    zoom: 3
  });
}

function addNewPerson(lat, lon, status, key) {
  console.log("status", status);
  var image;
  if (status == 0) {
    image = 'images/santa-icon-1.png';
  } else if (status == 1) {
    image = 'images/santa-icon-2.png';
  } else if (status == 2) {
    image = 'images/santa-icon-3.png';
  }
  var marker = new google.maps.Marker({
    position: new google.maps.LatLng(lat, lon),
    icon: image,
    map: map,
    title: key // Tooltip = MAC address
  });
  markers[key] = marker;
}

function changePerson(lat, lon, status, key) {
  console.log("status", status);
  var image;
  if (status == 0) {
    image = 'images/santa-icon-1.png';
  } else if (status == 1) {
    image = 'images/santa-icon-2.png';
  } else if (status == 2) {
    image = 'images/santa-icon-3.png';
  }
  marker = markers[key];
  marker = new google.maps.Marker({
    position: new google.maps.LatLng(lat, lon),
    icon: image,
    map: map,
    title: key // Tooltip = MAC address
  });
  markers[key] = marker;
}
You can access the page published at this URL.
Post to Twitter and send email
This is the block where we expected the most difficulty, but thanks to the service offered by IFTTT, the task turned out to be very simple :).
They have recently enabled a service called IFTTT Maker that lets you launch events and triggers through an API. With a simple GET call and a well-configured recipe, it is very easy.
This makes it much easier to publish in any social network and open channels of communication between objects or machines, giving free rein to IoT technologies.
In this project we will use 3 recipes.
- Recipe 1: sends a tweet to the @dashbuttonsanta account with the longitude and latitude of the Dash Button.
- Recipe 2: sends an email to the account [email protected] with the longitude and latitude.
- Recipe 3: once Santa has left the gifts at home, publishes that the gifts have been delivered.
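A quick way to check a recipe before wiring it into the firmware is to reproduce the Maker GET request on the host. A Python sketch of the URL the project builds (the key and event name below are placeholders, not real credentials):

```python
def maker_url(event, key, lat, lon, nonce):
    # Same /trigger/<event>/with/key/<key> pattern as the Arduino sketch uses
    return ("https://maker.ifttt.com/trigger/%s/with/key/%s"
            "?value1=%s&value2=%s&value3=%s" % (event, key, lat, lon, nonce))

url = maker_url("dash_tweet", "YOUR_IFTTT_KEY", 38.3685, -0.4219, 1234)
print(url)
```

Fetching that URL (e.g. with curl or a browser) fires the recipe once, which makes debugging much faster than re-flashing the board.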
The circuit
The basic scheme is to connect a pushbutton and three NeoPixel LEDs to the Arduino MKR1000.
Then we can decorate it however we want. In our case we used a teddy of Rudolph, Santa Claus's reindeer.
We took advantage of a button inside it and put the NeoPixel strip in the scarf.
The pixels tell us the status of the Dash Button Santa.
- Pixel 1 red, pixel 2 blue and pixel 3 green: attempting to connect to WiFi.
- 3 pixels blinking orange: WiFi error.
- 3 pixels blinking red: RTC error.
- 3 pixels red: device off. Location not sent to Santa.
- 3 pixels blue: location sent to Santa.
- 3 pixels blinking blue: Christmas Day!!!!!
- 3 pixels green: Santa Claus left the presents and pressed the button.
All this you can see in the next section where we talk about the code.
Arduino Sketch
In the sketch several libraries are used that facilitate the task of managing the connections with the services and controlling the time to know when it is Christmas.
You will need different accounts to use the services. We are working on improving the project so it does not depend on third parties :).
You will need the following to make it work:
- Google API key
- IFTTT key
- WiFi SSID
- WiFi password
The whole code is well commented. You can leave any questions in the comments of this article.
To store the state locally, and to know which state the button was in if it is disconnected, we have used FlashStorage, which simulates an EEPROM memory.
To keep track of the day and the month, we use RTCZero.
In addition, we leave you two links to download the modified WiFi101 library and WifiLocation.
You can find the links to the libraries in GitHub, in the section of libraries of this article.
/*
  DashButtonSanta

  Send information to Santa Claus about the status of the gift request.
  It uses geolocation through WiFi (the Google API), and it is sent to
  Firebase, along with the state and the MAC (key value) of the Arduino
  MKR1000. It also sends the information to Twitter @dashbuttonsanta and
  to the email [email protected] through IFTTT. On the web you can follow
  the status of all requests. When at the end Santa leaves the gifts on
  the site sent to Twitter and the email, he presses the button and marks
  the delivery finished.

  States:
  0 => Device OFF (LED color red)
  1 => I have behaved well, I have made the bed, I have cleaned the dishes,
       I have helped an old lady, I pray every night, etc... and I have
       sent the letter :) (LED color blue)
  2 => It is Christmas Day (LED blink color blue)
  3 => Santa Claus left the presents and pressed the button (LED blink color green)

  Errors:
  WiFi shield not present => LED blink Color(233, 149, 16)
  NTP unreachable => LED blink Color(150, 0, 0)

  The circuit:
  Arduino MKR1000
  Pushbutton to pin 6
  Pull-down resistor to pushbutton
  3 addressable LEDs

  Created 2017
  By Luis del Valle @ldelvalleh
  Germán Martín @gmag12
*/

// Fill these fields with your data

// Constants

// Neopixel
Adafruit_NeoPixel pixels = Adafruit_NeoPixel(NUMPIXELS, PINNEO, NEO_GRB + NEO_KHZ800);

// Dash Button States
int dashButtonState = 0;
volatile bool makeUpdate = false;

// Reserve a portion of flash memory to store an "int" variable
// and call it "state_dashbutton".
FlashStorage(state_dashbutton, int);

// RTC with WiFi
RTCZero rtc;
const int GMT = 1;

int status = WL_IDLE_STATUS;
WiFiClient client;

// Info person
byte mac[6];
location_t location;

void setup() {
  // Init Neopixel
  pixels.begin();

  // Read the content of state_dashbutton
  dashButtonState = state_dashbutton.read();

  pixels.setPixelColor(0, pixels.Color(150, 0, 0));
  pixels.show();
  pixels.setPixelColor(1, pixels.Color(0, 0, 150));
  pixels.show();
  pixels.setPixelColor(2, pixels.Color(0, 150, 0));
  pixels.show();

  // WiFi configuration
  configWiFi();
  // Get location using WiFi networks around
  getLocation();
  // Update Dash Button state
  putRequest(dashButtonState);

  // Init Dash Button
  pinMode(DASHBUTTON, INPUT);
  // Init interrupt
  attachInterrupt(digitalPinToInterrupt(DASHBUTTON), dashbuttonAction, RISING);

  // RTC configuration
  configRTC();

  // Random seed
  randomSeed(analogRead(A0));
}

void loop() {
  // Off
  if (dashButtonState == 0) {
    for (int i = 0; i < NUMPIXELS; i++) {
      // pixels.Color takes RGB values, from 0,0,0 up to 255,255,255
      pixels.setPixelColor(i, pixels.Color(150, 0, 0));
      pixels.show();
    }
  }
  // Location sent but not Christmas Day
  else if (dashButtonState == 1 && rtc.getDay() != D_DAY && rtc.getMonth() != M_MONTH) {
    for (int i = 0; i < NUMPIXELS; i++) {
      pixels.setPixelColor(i, pixels.Color(0, 0, 150));
      pixels.show();
    }
  }
  // Location sent and Christmas Day
  else if (dashButtonState == 1 && rtc.getDay() == D_DAY && rtc.getMonth() == M_MONTH) {
    blinkNeopixel(pixels.Color(0, 0, 150));
  }
  // Shipping
  else if (dashButtonState == 2) {
    for (int i = 0; i < NUMPIXELS; i++) {
      pixels.setPixelColor(i, pixels.Color(0, 150, 0));
      pixels.show();
    }
  }

  // If need update
  if (makeUpdate) {
    // Change Dash Button State
    if (dashButtonState == 0) {
      pixels.setPixelColor(0, pixels.Color(150, 0, 0));
      pixels.show();
      pixels.setPixelColor(1, pixels.Color(0, 0, 150));
      pixels.show();
      pixels.setPixelColor(2, pixels.Color(0, 0, 150));
      pixels.show();
      if (putRequest(1)) {
        // Send Email
        sendEmail();
        delay(2000);
        // Send Twitter
        sendToTwitterPerson();
        dashButtonState = 1;
      }
    }
    // Only Christmas Day
    else if (dashButtonState == 1 && rtc.getDay() == D_DAY && rtc.getMonth() == M_MONTH) {
      pixels.setPixelColor(0, pixels.Color(0, 0, 150));
      pixels.show();
      pixels.setPixelColor(1, pixels.Color(0, 150, 0));
      pixels.show();
      pixels.setPixelColor(2, pixels.Color(0, 150, 0));
      pixels.show();
      if (putRequest(2)) {
        sendToTwitterSanta();
        dashButtonState = 2;
      }
    }
    // Change state in FlashStorage
    state_dashbutton.write(dashButtonState);
  }
  makeUpdate = false;
}

// Callback interruption
void dashbuttonAction() {
  noInterrupts();
  if (!makeUpdate) makeUpdate = true;
  interrupts();
}

void blinkNeopixel(uint32_t c) {
  for (int i = 0; i < NUMPIXELS; i++) {
    pixels.setPixelColor(i, pixels.Color(0, 0, 0));
    pixels.show();
  }
  delay(500);
  for (int i = 0; i < NUMPIXELS; i++) {
    pixels.setPixelColor(i, c);
    pixels.show();
  }
  delay(500);
}

/**************** WiFi Connection ****************/
void configWiFi() {
  // check for the presence of the shield:
  if (WiFi.status() == WL_NO_SHIELD) {
    // don't continue:
    while (true) {
      // Show error shield not present
      blinkNeopixel(pixels.Color(233, 149, 16));
    }
  }
  // attempt to connect to WiFi network:
  while (status != WL_CONNECTED) {
    // Connect to WPA/WPA2 network. Change this line if using open or WEP network:
    status = WiFi.begin(SSID, PASS);
    // wait 10 seconds for connection:
    delay(10000);
  }
  // Get mac
  WiFi.macAddress(mac);
}

/**************** RTCZero ****************/
void configRTC() {
  // Init RTC
  rtc.begin();
  unsigned long epoch;
  int numberOfTries = 0, maxTries = 6;
  do {
    epoch = WiFi.getTime();
    numberOfTries++;
  } while ((epoch == 0) || (numberOfTries > maxTries));
  if (numberOfTries > maxTries) {
    while (1) {
      // Show error RTC
      blinkNeopixel(pixels.Color(150, 0, 0));
    }
  } else {
    rtc.setEpoch(epoch);
  }
}

/**************** HTTP PUT Request to Firebase ****************/
bool putRequest(int newDashButtonState) {
  String keyMac = "";
  for (int i = 0; i < 6; i++) {
    String pos = String((uint8_t)mac[i], HEX);
    if (mac[i] <= 0xF) pos = "0" + pos;
    pos.toUpperCase();
    keyMac += pos;
    if (i < 5) keyMac += ":";
  }
  // close any connection before sending a new request.
  client.stop();
  client.flush();
  // send SSL request
  if (client.connectSSL(HOST, 443)) {
    // PUT request
    String toSend = "PUT /persons/";
    toSend += keyMac;
    toSend += ".json HTTP/1.1\r\n";
    toSend += "Host:";
    toSend += HOST;
    toSend += "\r\n";
    toSend += "Content-Type: application/json\r\n";
    String payload = "{\"lat\":";
    payload += String(location.lat, LOC_PRECISSION);
    payload += ",";
    payload += "\"lon\":";
    payload += String(location.lon, LOC_PRECISSION);
    payload += ",";
    payload += "\"prec\":";
    payload += String(location.accuracy);
    payload += ",";
    payload += "\"status\": \"";
    payload += newDashButtonState;
    payload += "\"}";
    payload += "\r\n";
    toSend += "Content-Length: " + String(payload.length()) + "\r\n";
    toSend += "\r\n";
    toSend += payload;
    client.println(toSend);
    client.println();
    client.flush();
    client.stop();
    return true;
  } else {
    // if you couldn't make a connection:
    client.flush();
    client.stop();
    return false;
  }
}

/**************** Send to Twitter IFTTT ****************/
void sendToTwitterPerson() {
  requestIFTTT(TWITTEREVENT);
}

void sendToTwitterSanta() {
  requestIFTTT(SANTAEVENT);
}

/**************** Send email IFTTT ****************/
void sendEmail() {
  requestIFTTT(EMAILEVENT);
}

/**************** Request IFTTT ****************/
void requestIFTTT(String eventName) {
  for (int i = 0; i < 3; i++) {
    // close any connection before sending a new request.
    if (client.connected()) {
      client.stop();
    }
    client.flush();
    // Random request: IFTTT Twitter publish cannot send a duplicate tweet.
    long randomRequest = random(1, 10000);
    // send SSL request
    if (client.connectSSL(HOSTIFTTT, 443)) {
      // Make a HTTP request:
      String toSend = "GET /trigger/";
      toSend += eventName;
      toSend += "/with/key/";
      toSend += IFTTTKEY;
      toSend += "?value1=";
      toSend += String(location.lat, LOC_PRECISSION);
      toSend += "&value2=";
      toSend += String(location.lon, LOC_PRECISSION);
      toSend += "&value3=";
      toSend += randomRequest;
      toSend += " HTTP/1.1\r\n";
      toSend += "Host: maker.ifttt.com\r\n";
      toSend += "Connection: close\r\n\r\n";
      client.print(toSend);
      break;
    } else {
      // if you couldn't make a connection:
    }
  }
  client.flush();
  client.stop();
}

/**************** Get Location info ****************/
bool getLocation() {
  WifiLocation gLocation(GOOGLE_API_KEY);
  location = gLocation.getGeoFromWiFi();
  return (location.accuracy < 100);
}
Conclusions
In this project we wanted to demonstrate that, with a little ingenuity and an Arduino MKR1000, we can track anything.
We have used free services that allow us to prototype and create technology in a simple and economical way.
Although this is just a project for The Arduino Internet of Holiday Things, we hope that we can inspire you to create your own project based on our idea.
We would appreciate any comments and suggestions. Many thanks for your attention and Ho Ho Ho, Merry Christmas!
Schematics
Technote (troubleshooting)
Problem(Abstract)
During an install of RHEL 6 ppc64 onto a Power 7 system, the system experiences the following when booting from the DVD installation media. Note that this may occur on any P7 hardware.
Initalizing network drop monitor service
RAMDISK: incomplete write (4318 != 32768)
write error
List of all partitions:
No filesystem could mount root, tried: iso9660
Kernel panic - not syncing: VFS: Unable to mount root fs on
unknown-block(1,0)
Call Trace:
[c000001ecbe3fc30] [c000000000012eb4] .show_stack+0x74/0x1c0
(unreliable)
[c000001ecbe3fce0] [c0000000005a640c] .panic+0x80/0x1b4
[c000001ecbe3fd70] [c00000000082152c] .mount_block_root+0x2e8/0x324
[c000001ecbe3fe50] [c0000000008217b4] .prepare_namespace+0x1c4/0x218
[c000001ecbe3fee0] [c000000000820574] .kernel_init+0x348/0x374
[c000001ecbe3ff90] [c0000000000323f4] .kernel_thread+0x54/0x70
Rebooting in 180 seconds..
This issue has also been reported installing RHEL 6 onto a p7 system.
Resolving the problem
To resolve this issue:
1) When your system comes up to SMS, type 8 to get to the firmware prompt.
2) Enter the following two commands:
dev nvram
wipe-nvram
3) Reboot and boot from DVD.
{-# OPTIONS_GHC -fglasgow-exts -fallow-undecidable-instances -fparr #-}

module MO.Base (module MO.Base, Invocant, stubInvocant) where

import {-# SOURCE #-} MO.Run
import Data.Maybe
import Data.Typeable
import StringTable.Atom
import MO.Capture
import GHC.PArr
import StringTable.AtomMap as AtomMap

-- Codeable is an abstraction of possible different pieces of code that
-- a method may use as implementation. It's supposed to be used as member
-- of the MethodCompiled structure. A Codeable type needs to have a function
-- "run" that accepts Arguments and returns some Invocant.

-- | open type to represent Code
class Monad m => Codeable m c where
    run :: c -> Arguments m -> m (Invocant m)

-- | stub code which always returns the same
newtype NoCode m = NoCode (Invocant m)

instance (Typeable (NoCode m), Monad m) => Codeable m (NoCode m) where
    run (NoCode obj) _ = return obj

instance Show (NoCode m) where
    show _ = "<NoCode>"

-- | Pure code that works with any monad.
newtype PureCode = PureCode (forall m. (Typeable1 m, Monad m) => Arguments m -> Invocant m)

instance (Typeable1 m, Monad m) => Codeable m PureCode where
    run (PureCode f) a = return (f a)

instance Show PureCode where
    show _ = "<PureCode>"

-- | Real monadic primitive code.
newtype Monad m => HsCode m = HsCode (Arguments m -> m (Invocant m))

instance (Typeable1 m, Monad m) => Codeable m (HsCode m) where
    run (HsCode f) a = f a

instance Show (HsCode m) where
    show _ = "<HsCode>"

-- Arguments represents (surprise) arguments that are passed to methods;
-- right now it is just Pugs' Capture type, but it could be generalized to a
-- class, in case of separating MO "generic" code from Pugs specifics.
type Arguments m = Capt (Invocant m)

-- This Invocant refers to the same concept as in Perl-esque syntax:
-- "foo $moose: $a, $b" which means "$moose.foo($a, $b)".
withInvocant :: (Typeable1 m, Monad m) => Arguments m -> Invocant m -> Arguments m
withInvocant args x = CaptMeth{ c_invocant = x, c_feeds = c_feeds args }

getInvocant :: (Typeable1 m, Monad m) => Arguments m -> Maybe (Invocant m)
getInvocant CaptMeth{ c_invocant = x } = Just x
getInvocant _ = Nothing

namedArg :: (Typeable1 m, Monad m) => Arguments m -> Atom -> Maybe (Invocant m)
namedArg args key = foldlP findArg Nothing (c_feeds args)
  where
    -- Notice that each feed has a Map with the named arguments (given by f_nameds)
    -- and the values are of type '[:a:]' and not 'a'; because of this we get only
    -- the first one. "(!: 0)" means "(!! 0)" in parallel arrays notation.
    -- (is getting only the first one right??)
    findArg Nothing MkFeed{ f_nameds = ns } = fmap (!: 0) (AtomMap.lookup key ns)
    findArg x _ = x
I recently covered ways to convert a string into a byte array with VB.NET. When you need to convert a byte array into a string, you can use either the BitConverter.ToString or the Convert.ToBase64String method. I provide examples that show you how to make the conversion using both methods.
Byte array usage
If you are restoring text stored in binary format, call the GetString method of the appropriate encoding object in the System.Text namespace.
The quickest way to convert a byte array into a string is to use the System.BitConverter class. The class provides methods for converting basic data types into byte arrays and from byte arrays. To use this method, use the overloaded ToString method that accepts a byte array as a parameter. In this case, the string contains each value of the byte array in hexadecimal format separated by a dash. There is no automatic way to reverse the conversion to find out the original byte array using the string. View the example in Listing A.
Another option for converting a byte array into a string is to utilize Base64 encoding through the ToBase64String and FromBase64String methods of the System.Convert class. In Base64 encoding, each sequence of three bytes is converted into a sequence of four characters, and each Base64-encoded character takes one of 64 possible values. View the example in Listing B.
Both options are useful for creating a string representation of binary data. Keep in mind that, in order to retrieve real text information from a byte array, you have to use the correct encoding!
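For comparison, the same two representations are easy to reproduce outside .NET; a purely illustrative Python sketch (the article's own listings use the .NET classes):

```python
import base64

data = b"abc"  # three bytes: 0x61 0x62 0x63

# BitConverter.ToString-style: hexadecimal values separated by dashes
dashed_hex = "-".join("%02X" % b for b in data)
print(dashed_hex)  # 61-62-63

# Convert.ToBase64String-style: every 3 bytes become 4 characters
b64 = base64.b64encode(data).decode("ascii")
print(b64)  # YWJj
```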
How does one use NBIT_CKEY_ACTIVE? I see in the documentation that it's the "Active point of animation path in editor." I couldn't find any examples that use it successfully however. I found it used in this old post on CGSociety by Per-Anders from 2013, but I couldn't get it to work. Here's the code I'm using to test it:
NBIT_CKEY_ACTIVE
import c4d
def main():
tracks = op.GetCTracks()
for track in tracks:
print "*"*100
print track.GetName()
curve = track.GetCurve()
count = curve.GetKeyCount()
for x in range(0,count):
key = curve.GetKey(x)
print "Key %s: NBIT_CKEY_ACTIVE: %s"%(x,key.GetNBit(c4d.NBIT_CKEY_ACTIVE))
if __name__=='__main__':
main()
Thank you.
Hi @blastframe, the NBIT_CKEY_ACTIVE bit is set when the animation path key is selected in the viewport (and not in the Timeline).
Cheers, R
@r_gigante Ahhhhhhhh, okay, I get it now. Thank you | https://plugincafe.maxon.net/topic/12702/how-is-nbit_ckey_active-used | CC-MAIN-2022-40 | refinedweb | 189 | 68.67 |
So guys, I'm doing a project related to telemetry, and I want to make ArduPilot (loaded with ArduPlane 2.73) send the sensor information (height, GPS position, and that kind of thing) through the serial port. I have tried to use ArduStation, but I could not change its firmware to do what I want.
The idea is to read the ArduPilot serial port using an Arduino Uno and then save the data to an SD card in real time. So ArduPilot needs to send data without any user input or anything like that. I've already tried to modify the ArduPlane source code to do that, but I couldn't.
Has someone here done something like that before? I need some advice!
Thanks,
Alessandro
alessandro.ssj@gmail.com
EDIT (30/06/2016): Guys, I've successfully got the Pixhawk communicating with an Arduino through the Pixhawk's UART. So I can send information from the Pixhawk's IMU to an Arduino or any other microcontroller through the serial port. If you want any help, I don't follow this post anymore, so send me an email (alessandro.ssj@gmail.com) and I can help with some details.
EDIT (08/06/2017): Since I've been receiving lots of emails about this issue, I will clarify a little more here.
The communication was from the Arduino to the Pixhawk through serial (the inverse communication is straightforward). In the code shown below, the TELEM2 port is used.
The idea was to read a single byte from the serial port and wait for a flag, the character 'a', sent by an Arduino (I was connecting the Pixhawk with an Arduino). So, when it detects an 'a', it starts to read the rest of the bytes until the flag 'b', which indicates end of transmission, arrives. The code below implements this idea:
int arduino_thread_main(int argc, char *argv[])
{
running = true;
struct arduino_s data;
char buffer[50];
int c; int i;
// Advertise arduino topic
memset(&data, 0 , sizeof(data));
//Open serial communication with Arduino
//ttyACM0=USB cable; ttyS1= TELEM1; ttyS2=TELEM2
//int fd=open("/dev/ttyS2", O_RDWR | 1<<6 | 0); THIS OPEN STATEMENT DIDN't WORK WELL
FILE *fd=fopen("/dev/ttyS2","r"); // Open Serial communication through TELEM 2
orb_advert_t arduino_pub_fd = orb_advertise(ORB_ID(arduino), &data); // Advertise arduino uORB
while(!should_exit){
float rpm_hall=-1.0f; float rpm_motor=-1.0f; i=0;
while ((c=fgetc(fd)) !='\n') {buffer[i++] = c;} // Construct string from serial comm. c = fgetc(fd) do the magic!
buffer[i]='\0'; // Finish the construction of the string
sscanf(buffer,"h%fr%fe",&rpm_hall,&rpm_motor); // Read separated date using the protocol defined as "hDATAFLOATrDATAFLOATe"
data.hall=(double)rpm_hall*(M_PI/30.0); // in rad/s
data.rpm= (double)rpm_motor*(M_PI/30.0); // in rad/s
orb_publish(ORB_ID(arduino),arduino_pub_fd,&data); // Publish information
}
fclose(fd);
running = false;
return 0;
}
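The "h&lt;float&gt;r&lt;float&gt;e" wire format parsed above is easy to prototype off-target before touching the firmware. A rough Python equivalent of just the parsing step (the names are mine, not from the PX4 code), useful for testing frames captured from the serial line:

```python
import math
import re

# One frame looks like "h<rpm_hall>r<rpm_motor>e", as in the sscanf above
FRAME_RE = re.compile(r"h(-?\d+(?:\.\d+)?)r(-?\d+(?:\.\d+)?)e")

def parse_frame(line):
    """Parse one frame; return (hall, motor) in rad/s, or None if malformed."""
    m = FRAME_RE.search(line)
    if not m:
        return None
    rpm_hall, rpm_motor = float(m.group(1)), float(m.group(2))
    # Same conversion as the firmware: rpm * pi / 30 -> rad/s
    return (rpm_hall * math.pi / 30.0, rpm_motor * math.pi / 30.0)

hall, motor = parse_frame("h120.0r60.0e")
print(hall, motor)
```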
Another issue people are asking me, is how to get pixhawk sensors information (like pitch, yaw, altitude and so on) and do something with it (save in a SD card for example).
Years ago I wrote some code to translate the uORB protocol used in the Pixhawk to access the sensor data in a more friendly way. You can find the code here:
Thanks!
Alessandro Soares da Silva Junior - PUC-Rio<\p>
I realize this is an old thread, but maybe someone can help me. I'm also trying to download telemetry data directly from a Pixhawk/MAVLink USB/serial port and dump the data to a file. I have this working with MAVProxy, but the delay and latency make it near useless. This is on a Raspberry Pi Zero with Debian Jessie. I have compiled a couple of C++ codes from examples, but I can't obtain the data I want. Here is how it should work.
1. Connect to PIX/Mavlink via /dev/ttyACM0 port.
2. Download several pieces of telemetry data (GPS, Alt, Speed, Etc) and save them to a local file on the RPI
3. Repeat, Repeat, Repeat...
Near-real-time speed is critical. I could define the telemetry options either in the C++ source or pass them as parameters.
Any help would be GREATLY appreciated.
Thx,
Dave M
Hello Alessandro,
I am working on a similar project. The idea is to control the ArduCopter with an on board BeagleBone. I am trying to figure out how to send commands from one to the other. Did you ever figure out how to send data through serial? And also, how to receive it?
Best,
Hello Gabriel,
The ArduPilot code that can be found on Google Code is, in my opinion, a mess. It would take a long time to find where I should modify it to do the things I wanted. Considering this, I started to study the PX4 code, and in a short period of time (like two days) I was able to make any modification I wanted.
So my advice is, if you really want to communicate with BeagleBone, buy a PX4 (Or Pixhawk), study its code and you will be able to make serial communication easily!
For example, I made the PX4 send the pitch angle, roll angle, yaw angle, height, and groundspeed of the airplane through one of its UARTs at a chosen baud rate, and everything worked fine!
Good Luck, send me a message if you need more help!
Alessandro
Alessandro, I have been studying the PX4 code and haven't made a lot of progress. Can you give me some detail on how you were able to send data through a UART?
It seems all the tutorials and info about accessing the serial ports have been taken down.
Thanks for any help you can provide!
Bill
Hey @Alessandro - would you perhaps still have the code you used to have the PX4 send those values through it's UART? I am trying to do something similar, where I read PWM values that I am trying to print on the UART and read it via a microcontroller. As of now, all I get is "blank"
uint8_t nchannels = hal.rcin->num_channels();
for (uint8_t i=0; i<nchannels; i++)
{
uint16_t v = hal.rcin->read(i);
hal.uartE->printf("Channel: %d Output: %d \n",i, v);
}
The above should send channel data and I should be able to read at the UART, right? But this doesn't do anything.
Hello Alessandro,
Thank you for responding to my post.
Unfortunately, I am working with a team on this project and we are constrained to using the ArduPilot ... I have been looking at the code and it is a mess. The code is really well written, but its a pain to edit. We have managed to add some features but we have not managed to get the Serial to work...
Right now the idea is to use uartB (used for GPS) but we are still thinking about it.
Best,
UartC is available on the apm 2.5. I connected an FTDI cable to the serial pins labeled uart2 and sent print statements on uartC.
Hi Parth,
Could you please explain with an example? I did similar thing but didn't work. First I connected TX2, RX2, +5V and GND pins (PINS 5,6,7 and 8) below reset button of ArduPilot2.5 to my LinuX computer. I used hal.uartC->write(1) inside void loop() function of ArduPlane.pde file. This didn't work :( . Do we need to modify any other file?
Thank you for your help.
Best Regards,
Aak
The syntax is same as regular arduino code...Add hal.uartC->begin(baudrate) to void setup.
Use hal.uartC->println("hello") or whatever you need to print out in the loop block. Don't connect the +5v pin, power is not needed (assuming that you are connected with USB already)
Hi Parth,
As I was trying to connect the UT390B laser rangefinder to the APM, I tried the following code but it does not work:
#include <AP_Common.h>
#include <AP_Math.h>
#include <AP_Param.h>
#include <AP_Progmem.h>
#include <AP_HAL.h>
#include <AP_HAL_AVR.h>
const AP_HAL::HAL& hal = AP_HAL_AVR_APM2;
static uint32_t millis()
{
return hal.scheduler->millis();
}
static uint32_t t1 = 0;
void setup() {
hal.uartC->begin(115200, 256, 256);
hal.uartC->set_blocking_writes(false);
hal.uartC->println("*00084553#");
}
void loop() {
if (hal.uartC->is_initialized() == false) {
hal.console->printf_P(PSTR("uartC is not initialised\r\n"));
}
if (millis() > t1 + 3000) {
hal.console->printf_P(PSTR("Sending restart sequence..\r\n"));
hal.uartC->println("*00084553#");
t1 = millis();
}
hal.scheduler->delay(1);
}
AP_HAL_MAIN();
But the following code in Arduino works:
void setup() {
// put your setup code here, to run once:
Serial2.begin(115200);
}
static unsigned long t1 = 0;
void loop() {
if (millis() > (t1 + 10000)) {
Serial2.write("*00084553#");
t1 = millis();
}
}
Could you please help me to find out why? Thanks. | https://diydrones.com/forum/topics/how-to-get-data-from-ardupilot-through-serial-port | CC-MAIN-2020-40 | refinedweb | 1,460 | 66.54 |
19 September 2012 09:42 [Source: ICIS news]
SINGAPORE (ICIS)--Taiwanese producer China Petrochemical Development Corp (CPDC) aims to start commercial operations at its new 100,000 tonne/year caprolactam (capro) line in Toufen, Miaoli county by the week of 28 September, a company source said on Wednesday.
“The raw material will be fed in from tomorrow,” the source added. “Should there be no quality issues, they should be able to produce on-spec product by next week.”
CPDC, the sole capro producer in ?xml:namespace>
The company’s other 200,000 tonne/year line at Xiaogang is currently operating above 90% capacity, the source added.
The company is aiming to increase domestic supply to key downstream nylon producers, and is also keen to export spot cargoes | http://www.icis.com/Articles/2012/09/19/9596669/taiwans-cpdc-eyes-commercial-ops-at-toufen-capro-line-by-late.html | CC-MAIN-2014-35 | refinedweb | 127 | 51.48 |
kcgi is an open source CGI and FastCGI library for C/C++ web applications. It is minimal, secure, and auditable.
To start, install the library. Then read the deployment and usage guides. Use the GitHub tracker for questions or comments, or find contact information there for direct contact.
Hello, World!as an HTTP response to a CGI request.
[KMIME_TEXT_PLAIN]); khttp_body(&r); khttp_puts(&r, "Hello, world!"); khttp_free(&r); return 0; }
#include <sys/types.h> /* size_t, ssize_t */ #include <stdarg.h> /* va_list */ #include <stddef.h> /* NULL */ #include <stdint.h> /* int64_t */ #include <kcgi.h> int main(void) { struct kreq r; const char *page = "index"; /* * Parse the HTTP environment. * We only know a single page, "index", which is also * the default page if none is supplied. * (We don't validate any input fields.) */ if (khttp_parse(&r, NULL, 0, &page, 1, 0) != KCGI_OK) return 1; /* * Ordinarily, here I'd switch on the method (OPTIONS, etc., * defined in themethodvariable) then switch on which * page was requested (pagevariable). * But for brevity's sake, just output a response: HTTP 200. */ khttp_head(&r, kresps[KRESP_STATUS], "%s", khttps[KHTTP_200]); /* * Show content-type unilaterally as text/plain. * This would usually be set from r.mime. */ khttp_head(&r, kresps[KRESP_CONTENT_TYPE], "%s", kmimetypes[KMIME_TEXT_PLAIN]); /* No more HTTP headers: start the HTTP document body. */ khttp_body(&r); /* * We can put any content below here: JSON, HTML, etc. * Usually we'd switch on our MIME type. * However, we're just going to put the literal string as noted… */ khttp_puts(&r, "Hello, world!"); /* Flush the document and free resources. */ khttp_free(&r); return 0; }
For a fuller example, see sample.c, or jump to the Documentation section. (Want a C++ version? See samplepp.cc.)
kcgi supports many features: auto-compression, handling of all HTTP input operations (query strings, cookies, page bodies, multipart) with validation, authentication, configurable output caching, request debugging, and so on. Its strongest differentiating feature is using sandboxing and process separation for handling the untrusted input path.
First, check if kcgi isn't already packaged for your system, such as for OpenBSD, FreeBSD, Arch Linux, and so on. (If it is, make sure it's up to date!) If so, install using that system.
If not, you'll need a modern UNIX system.
To date, kcgi has been built and run on
GNU/Linux machines
(musl and glibc), BSD
(OpenBSD,
NetBSD,
FreeBSD),
Solaris,
OmniOS, and
Mac OS X
(only Mojave and newer!) on i386, amd64, powerpc, arm64, and sparc64.
It has been deployed under Apache, nginx, and OpenBSD's httpd(8)
(the latter two natively over FastCGI and via the
slowcgi wrapper).
The only hard dependency is BSD make (
bmake on Linux).
If you're running the regression tests (see Testing), you'll need libcurl.
Download kcgi.tgz and verify the archive with kcgi.tgz.sha512.
Configure with
./configure, compile with
make (or
bmake
on Linux systems).
Finally, install the software using
make install.
Optionally override default paths with a
configure.local file (see the
configure script
for details) prior to configuration.
If kcgi doesn't compile, please send me the config.log file and the output of the failed compilation. Along with all of your operating system information of course.
To run bleeding-edge code between releases, the CVS repository is mirrored on GitHub. Installation instructions tracking the repository version may be found on that page.
To compile kcgi applications, use the package configuration. Linking is similarly normative.
% cc `pkg-config --cflags kcgi` -c yourprog.c % cc yourprog.o `pkg-config --libs kcgi`
Well-deployed web servers, such as the default OpenBSD server, by default are deployed within a chroot(2). If this is the case, you'll need to statically link your binary.
% cc -static yourprog.o `pkg-config --static --libs kcgi`
FastCGI applications may either be started directly by the web server (which is popular with Apache) or
externally given a socket and kfcgi(8) (this method is normative for OpenBSD's httpd(8) and
suggested for the security precautions taken by the wrapper).
The kcgi manpages, starting with kcgi(3), are the canonical source of documentation. The following is a list of all manpages:
If it's easier to start by example, you can use kcgi-framework as an initial boilerplate to start your project. The following are introductory materials to the system.
In addition to these resources, the following conference sessions have referenced kcgi.
And the following relate to extending standards:
The bulk of kcgi's CGI handling lies in khttp_parse(3), which fully parses the HTTP request. Application developers must invoke this function before all others. For FastCGI, this function is split between khttp_fcgi_init(3), which initialises context; and khttp_fcgi_parse(3), which receives new parsed requests. In either case, requests must be freed by khttp_free(3).
All functions isolate the parsing and validation of untrusted network data within a sandboxed child process. Sandboxes limit the environment available to a process, so exploitable errors in the parsing process (or validation with third-party libraries) cannot touch the system environment. This parsed data is returned to the parent process over a socket. In the following, the HTTP parser and input validator manage a single HTTP request, while connection delegator accepts new HTTP requests and passes them along.
This method of sandboxing the untrusted parsing process follows OpenSSH, and requires special handling for each operating system:
setrlimit(2)limiting. For the time being, this feature is only available for x86, x86_64, and arm architectures. If you're using another one, please send me your
uname -mand, if you know if it, the correct
AUDIT_ARCH_xxxfound in
/usr/include/linux/audit.h.
pure computationas provided in Mac OS X Leopard and later. This is supplemented by resource limiting via
setrlimit(2).
setrlimit(2).
Since validation occurs within the sandbox, special care must be taken that validation routines don't access the environment (e.g., by opening files, network connections, etc.), as the child might be abruptly killed by the sandbox facility. (Not all sandboxes do this.) If required, this kind of validation can take place after the parse validation sequence.
The connection delegator is similar, but has different sandboxing rules, as it must manage an open socket connection and respond to new requests.
kcgi is shipped with a fully automated testing framework executed with
make regress.
To test your own applications, use the kcgiregress(3) library.
This framework acts as a mini-webserver, listening on a local port, translating an HTTP document into a
minimal CGI request, and passing the request to a kcgi CGI client.
For internal tests, test requests are constructed with libcurl.
The binding local port is fixed: if you plan on running the regression suite, you may need to
tweak its access port.
Another testing framework exists for use with the American
fuzzy lop.
To use this, you'll need to compile the
make afl target with your compiler of choice, e.g.,
make clean, then
make afl CC=afl-gcc.
Then run the
afl-fuzz tool on the
afl-multipart,
afl-plain, and
afl-urlencoded binaries using the test cases (and dictionaries, for the first) provided.
Security comes at a price—but not a stiff price. By design, kcgi incurs overhead in three ways: first, spawning a child to process the untrusted network data; second, enacting the sandbox framework; and third, passing parsed pairs back to the parent context. In the case of running CGI scripts, kcgi performance is bound to the operating system's ability to spawn and reap processes. For FastCGI, the bottleneck becomes the transfer of data. In the following graph, I graph the responsiveness of kcgi against the baseline web-server performance.
This shows the empirical cumulative distribution of a statisically-significant number of page requests
as measured by ab(1) with 10 concurrent
requests.
The CGI line is the CGI sample included in the source;
the FastCGI line is the FastCGI sample;
the CGI (simple) simply emits a 200 HTTP status and
Hello, World; and
the static is a small static file on the web server.
The operating system is Mac OS X 10.7.5
Air laptop (1.86 GHz Intel Core 2 Duo, 2 GB RAM) with the
stock Apache.
The FastCGI server was started using the kfcgi(8) defaults. | https://kristaps.bsd.lv/kcgi/index.html | CC-MAIN-2021-21 | refinedweb | 1,366 | 58.79 |
Created on 2008-08-21 22:30 by dalcinl, last changed 2008-09-02 04:02 by brett.cannon. This issue is now closed.
from warnings import warn
warn("hello world") # -> Success
warn(UserWarning) # -> Segmentation fault
warn(None) # -> Segmentation fault
warn(1) # -> Segmentation fault
Two small clues.
First, a backtrace:
#0 0xb7df102a in strcmp () from /lib/tls/i686/cmov/libc.so.6
#1 0x0809e678 in warn_explicit (category=0x81dd140, message=0xb7ac58f4,
filename=0xb7acced0, lineno=1, module=0xb7f53300,
registry=0xb7ac9e94, sourceline=0x0) at Python/_warnings.c:393
#2 0x0809f1df in do_warn (message=0x81fbd78, category=0x81dd140,
stack_level=1) at Python/_warnings.c:606
#3 0x0809f37d in warnings_warn (self=0xb7aceab4, args=0xb7af0a7c,
kwds=0x0) at Python/_warnings.c:628
#4 0x081624ee in PyCFunction_Call (func=0xb7acace4, arg=0xb7af0a7c,
kw=0x0) at Objects/methodobject.c:84
#5 0x080b3633 in call_function (pp_stack=0xbfd51f44, oparg=1) at
Python/ceval.c:3403
#6 0x080ae776 in PyEval_EvalFrameEx (f=0x82b5e6c, throwflag=0) at
Python/ceval.c:2205
#7 0x080b1ac8 in PyEval_EvalCodeEx (co=0xb7ade988, globals=0xb7f4f5d4,
locals=0xb7f4f5d4, args=0x0, argcount=0, kws=0x0,
kwcount=0, defs=0x0, defcount=0, kwdefs=0x0, closure=0x0) at
Python/ceval.c:2840
#8 0x080a686f in PyEval_EvalCode (co=0xb7ade988, globals=0xb7f4f5d4,
locals=0xb7f4f5d4) at Python/ceval.c:519
#9 0x080df486 in run_mod (mod=0x82ba910, filename=0x81a09e4 "<stdin>",
globals=0xb7f4f5d4, locals=0xb7f4f5d4,
flags=0xbfd52370, arena=0x8216df8) at Python/pythonrun.c:1553
#10 0x080dd67e in PyRun_InteractiveOneFlags (fp=0xb7ec7440,
filename=0x81a09e4 "<stdin>", flags=0xbfd52370)
at Python/pythonrun.c:958
#11 0x080dd1e0 in PyRun_InteractiveLoopFlags (fp=0xb7ec7440,
filename=0x81a09e4 "<stdin>", flags=0xbfd52370)
at Python/pythonrun.c:870
#12 0x080dd038 in PyRun_AnyFileExFlags (fp=0xb7ec7440,
filename=0x81a09e4 "<stdin>", closeit=0, flags=0xbfd52370)
at Python/pythonrun.c:839
#13 0x080ef6ba in Py_Main (argc=1, argv=0xb7f22028) at Modules/main.c:592
#14 0x0805a689 in main (argc=1, argv=0xbfd534c4) at ./Modules/python.c:57
Then, this behavior:
Python 3.0b3+ (py3k:65930M, Aug 21 2008, 21:23:08)
[GCC 4.1.3 20070929 (prerelease) (Ubuntu 4.1.2-16ubuntu2)] on linux2
Type "help", "copyright", "credits" or "license" for more information.
>>> import _warnings
[40709 refs]
>>> _warnings.warn(0)
__main__:1: UserWarning: 0
[40739 refs]
>>> _warnings.warn(12345)
__main__:1: UserWarning: 12345
[40744 refs]
>>> _warnings.warn(AttributeError)
__main__:1: UserWarning: <class 'AttributeError'>
[40750 refs]
>>> import warnings
[41483 refs]
>>> warnings.warn(0)
[41483 refs]
>>> warnings.warn(12345)
[41483 refs]
>>> warnings.warn(10101)
Segmentation fault
That is, _warnings.warn(spam) works OK and avoids the
warnings.warn(spam) crash for values already called by the former.
If you search for _PyUnicode_AsString() in Python/_warnings.c you will
find several places that assume that the proper measures have been taken
to make sure the object is a string. All of those places need to be
fixed so that if a string is not passed in then one is grabbed.
And the reason this turned out as a segfault is for a missing error
return value just before the strcmp() call..
On Fri, Aug 22, 2008 at 8:03 AM, Daniel Diniz <report@bugs.python.org> wrote:
>
> Daniel Diniz <ajaksu@gmail.com> added the comment:
>
>.
>
That's along the lines of what needs to be done (and what I was
planning on doing), although you need to do more error checking on the
return values. Plus the patch I am cooking up adds more checks in the
code for the return value of _PyUnicode_AsString().
The patch doesn't actually bother with a translation as the code causing
issue is only there to prevent infinite recursion. So if the object
being used is not a string, then there is no need to worry as it is not
part of the infinite recursion problem.
I also added a bunch of missing error checks.
Brett, is this patch ready for review?
That's why the keyword is set. =)
On Fri, Aug 22, 2008 at 2:59 PM, Brett Cannon <report@bugs.python.org> wrote:
>
> Brett Cannon <brett@python.org> added the comment:
>
> That's why the keyword is set. =)
Ah. I missed that. :) The patch looks fine.
>
> _______________________________________
> Python tracker <report@bugs.python.org>
> <>
> _______________________________________
>
--
Cheers,
Benjamin Peterson
"There's no place like 127.0.0.1."
Checked in r66140. | https://bugs.python.org/issue3639 | CC-MAIN-2017-51 | refinedweb | 687 | 69.99 |
Recently, I worked on an interesting project called SpeedBoard which is a real-time board for Agile and Scrum retrospectives. It's the kind of tool we use at work after our Scrum Sprint review to easily share our feedback about the last Sprint.
Since it was a very enriching experience, I thought I would write a quick tutorial on how to set up a simple chat with the same technology stack: MongoDB, Express, React, and Node.js, also known as the MERN stack. I am also using Socket.IO as the real-time engine and Material-UI, a UI framework for React based on Material Design.
If you don't want to wait until the end of this tutorial, you can already check a preview of the final result, and also check out the GitHub repository if you want to fork it and start improving it ;)
Prerequisites
In this tutorial, we will use Heroku for hosting our live project and Github for hosting our code and deploying it to Heroku, so make sure you already have an account with them, they both provide a free sign up.
Structure
Before we start, let's have a quick look at the structure of our project. Inside our root folder, we will have 2 subfolders: one called
client which contains the React app and one called
server with our Node.js server:
speedchatapp/
├── client/
└── server/
Let's open our Terminal and create our project folder:
mkdir speedchatapp
cd speedchatapp/
Set up the client
On the client-side, we will use the Create React App (CRA) which provides a very easy way to start building any React SPA.
CRA provides a very simple command to install the app, but first, if you used create-react-app in the past, let's make sure npx will fetch the latest version by uninstalling any global copy:
npm uninstall -g create-react-app
Now, let's create our app in our
client folder with this simple command:
npx create-react-app client
This might take a couple of minutes to install all the dependencies, and once you are done, try:
cd client/
npm start
You should now be able to access your app at http://localhost:3000.
That was quick and simple : ) But still pretty far from our final result! We'll come back a little bit later to our React app once the server-side of our project is ready.
Set up the server
Now that we have the skeleton of our
client ready, let's have a look at the backend side.
First, let's create our
server folder at the root of our project and initialize our
package.json file:
mkdir server
cd server/
npm init
A utility will take you through the configuration of the file; for this tutorial, you can simply press Enter at every prompt.
Now, we will install all the dependencies required for our server (Express, Mongoose and Socket.IO) with the following command:
npm install express mongoose socket.io --save
Then, copy the
.gitignore file from the
client folder to the
server folder to prevent some files and folders to be pushed to our GitHub repository (e.g.
/node_modules folder):
cp ../client/.gitignore ./
We will create the 2 files necessary for our server to work. The first one (Message.js) defines the schema of the documents we will keep in our database. We need three pieces of information: the name of the user who is posting a message in the chat, the content of the message, and a timestamp to know when it was posted.
server/Message.js
const mongoose = require('mongoose');

const messageSchema = new mongoose.Schema({
  content: String,
  name: String,
}, {
  timestamps: true,
});

module.exports = mongoose.model('Message', messageSchema);
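A quick note on the `timestamps: true` option above: Mongoose uses it to add `createdAt` and `updatedAt` fields to every saved document automatically, and the server will later sort on `createdAt` to fetch the last messages. A sketch of the resulting document shape (the `_id` value below is a placeholder, not real database output):

```javascript
// Approximate shape of a stored message document.
const exampleMessage = {
  _id: '5e8f8f8f8f8f8f8f8f8f8f8f', // assigned by MongoDB
  name: 'Alice',
  content: 'Hello!',
  createdAt: new Date(),           // added by { timestamps: true }
  updatedAt: new Date(),           // added by { timestamps: true }
};

console.log(Object.keys(exampleMessage).join(','));
// _id,name,content,createdAt,updatedAt
```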
The second one (index.js) is our main file, I won't go too much into details because that would make this tutorial a bit too long, but feel free to ask any question in the comments, I'll be glad to answer them or improve the comments directly in the code if necessary.
server/index.js
const express = require('express');
const app = express();
const http = require('http').Server(app);
const path = require('path');
const io = require('socket.io')(http);
const uri = process.env.MONGODB_URI;
const port = process.env.PORT || 5000;
const Message = require('./Message');
const mongoose = require('mongoose');

mongoose.connect(uri, {
  useUnifiedTopology: true,
  useNewUrlParser: true,
});

app.use(express.static(path.join(__dirname, '..', 'client', 'build')));

io.on('connection', (socket) => {
  // Get the last 10 messages from the database.
  Message.find().sort({createdAt: -1}).limit(10).exec((err, messages) => {
    if (err) return console.error(err);

    // Send the last messages to the user.
    socket.emit('init', messages);
  });

  // Listen to connected users for a new message.
  socket.on('message', (msg) => {
    // Create a message with the content and the name of the user.
    const message = new Message({
      content: msg.content,
      name: msg.name,
    });

    // Save the message to the database.
    message.save((err) => {
      if (err) return console.error(err);
    });

    // Notify all other users about a new message.
    socket.broadcast.emit('push', msg);
  });
});

http.listen(port, () => {
  console.log('listening on *:' + port);
});
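One detail in the file above worth calling out: `socket.emit` (used for `'init'`) sends an event only to the client that just connected, while `socket.broadcast.emit` (used for `'push'`) sends it to every connected client except the sender. A toy model, with no real sockets involved, just to show who receives what:

```javascript
// Toy model of the two emit flavours used in index.js.
function makeHub() {
  const clients = [];
  return {
    connect() {
      const socket = {
        inbox: [],
        // socket.emit: only this client receives the event.
        emit(event, data) {
          socket.inbox.push(event);
        },
        // socket.broadcast.emit: every *other* client receives it.
        broadcast: {
          emit(event, data) {
            clients
              .filter((c) => c !== socket)
              .forEach((c) => c.inbox.push(event));
          },
        },
      };
      clients.push(socket);
      return socket;
    },
  };
}

const hub = makeHub();
const alice = hub.connect();
const bob = hub.connect();

alice.emit('init', []);                          // only alice gets 'init'
alice.broadcast.emit('push', { content: 'hi' }); // only bob gets 'push'

console.log(alice.inbox.join(',')); // init
console.log(bob.inbox.join(','));   // push
```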
The structure of your project should now look like this:
speedchatapp/
├── client/
│   └── (Several files and folders)
└── server/
    ├── node_modules/
    ├── .gitignore
    ├── index.js
    ├── Message.js
    ├── package-lock.json (auto-generated)
    └── package.json
Before coming back to our React app to finish our project, let's set up our Heroku hosting and link it to our Github repository to make sure the deployment works fine.
Set up our Heroku hosting
Let's download and install the Heroku CLI to set up everything from our Terminal.
Once downloaded and installed, let's go back to our Terminal and login to our Heroku account:
heroku login
It will open a new tab in your browser and once you are logged in, you can close the browser tab and go back to your Terminal.
Now let's create our new app that will host our project:
heroku create
It will automatically generate an identifier with a URL where you can access your app, it should look like this:
You can rename your app if you want something a little bit easier to remember, you can then use it for the rest of this tutorial:
Alright, now we need our MongoDB database to store the chat messages from the users. Let's add the mongolab addon to our app:
heroku addons:create mongolab --app speedchatapp
I used
speedchatapp in the previous command because I renamed my application but you should use the one provided when you created it if you didn't rename it, for example,
sleepy-meadow-81798.
Once created, it will show you the name of a config variable in green, i.e. MONGODB_URI. Now let's get the connection URI of our newly created database:
heroku config:get MONGODB_URI
You should see something like this:
mongodb://heroku_123abc:abc123@ds141188.mlab.com:41188/heroku_123abc
Copy this URI, and create a file at the root of your project called
.env with the following content
[VARIABLE_IN_GREEN]=[URI]. It should look like this:
MONGODB_URI=mongodb://heroku_123abc:abc123@ds141188.mlab.com:41188/heroku_123abc
Let's copy the .gitignore one more time and append the .env file to it, to avoid pushing the credentials of our database to GitHub:
cp server/.gitignore ./
echo '.env' >> .gitignore
During the deployment of our app, we need to tell Heroku how to start our server. It can be done by using a Procfile that we will put at the root of our project. So let's create it and add the command line that will start our server:
echo 'web: node server/index.js' > Procfile
Now let's initialize another
package.json at the root of our project. Same as before, don't worry about all the options for now; just press Enter at all prompts:
npm init
One last thing we want to do here is to install the npm package called Concurrently that will allow us to run both the server and the client in a single command line during our development mode:
npm install --save-dev concurrently
And finally, in our newly created
package.json at the root of the project, we will add 2 lines in the
scripts section:
"scripts": { "dev": "concurrently --kill-others \"heroku local\" \"npm run start --prefix ./client\"", "postinstall": "npm install --prefix ./server && npm install --prefix ./client && npm run build --prefix ./client", }
The
postinstall command, as you can guess, will be executed after Heroku has finished running the
npm install command at the root of our folder. It's telling Heroku to also run the
npm install command inside our
client and
server folder and will also build our React app for production.
Now, it's time to test it, go to the root of your project and type:
npm run dev
This will launch the server and our React app in development mode, and it should open a window in your browser with the previous landing page of our React app.
In your terminal, you should see something like this:
> concurrently --kill-others "heroku local" "npm run start --prefix ./client"
[1]
[1] > react-scripts start
[1]
[0] [OKAY] Loaded ENV .env File as KEY=VALUE Format
[0] 12:16:15 PM web.1 | listening on *:5000
[1] Starting the development server...
[1]
[1] Compiled successfully!
[1]
[1] You can now view client in the browser.
[1]
[1] Local:
[1] On Your Network:
[1]
[1] Note that the development build is not optimized.
[1] To create a production build, use npm run build.
Note: we are using the same database for both Dev and Live mode. If you want to use a different database, you can always create another one in Heroku as we have seen before and update your .env file with the credentials of the new database, so it won't interfere with the one in production.
Set up GitHub and link to Heroku
Now, we are going to create a new repository on GitHub, and we are going to connect it to Heroku so that every time we merge a Pull Request into the master branch, it will automatically be deployed to Heroku.
Let's create our repository on GitHub. Go to:
Write down the repository URL that we will use in the next step. Back to our Terminal, in the root folder of our project:
# Initialize the root folder as a Git repository
git init

# Add all the files for the initial commit
git add .

# Commit staged files
git commit -m "Initial commit"

# Set the GitHub remote repository
git remote add origin <repository url>

# Push the local changes to GitHub
git push origin master
Now our code is on GitHub, let's link this repository to our Heroku app.
From the Heroku UI, select your app and click on the
Deploy tab. In the
Deployment method, click on
Github, type your repository name and click on
Connect:
Also, make sure that the "Enable Automatic Deploys" on the
master branch is activated:
It should now look like this:
Now let's trigger a first manual deployment to check that everything is fine. Click on the
Deploy Branch and wait until you see you see
Your app was successfully deployed.
Finally, after clicking on the
Open App button at the top right of the page, you should see the React app on your Heroku hosting.
From now on, after pushing any update to your GitHub repository, you should see the deployment triggered automatically in your Heroku UI:
Finishing the client
Now that the architecture of our project is ready, let's finish our client React app.
The first thing we need here is to install our frontend dependencies in the client folder: the Socket.IO client, Material-UI core, and Material-UI icons:
cd client/
npm install socket.io-client @material-ui/core @material-ui/icons --save
Now in the
client/package.json, add the following
proxy field at the end of the file:
"proxy": ""
It will tell the development server to proxy any unknown requests to your server in development. Check the official documentation for more information.
Next, we'll create a
config.js file to tell our app to switch endpoints in case we are on our local machine or live hosting:
client/src/config.js
import pkg from '../package.json';

export default {
  development: { endpoint: pkg.proxy },
  production: { endpoint: window.location.hostname }
}
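CRA sets `process.env.NODE_ENV` to `development` under `npm start` and to `production` in a built app, and the React component will later pick the endpoint with `config[process.env.NODE_ENV].endpoint`. A standalone sketch of that selection (the endpoint values below are placeholders):

```javascript
// Placeholder endpoints: the real ones come from package.json's "proxy"
// field in development and from window.location.hostname in production.
const config = {
  development: { endpoint: 'http://localhost:5000' },
  production: { endpoint: 'speedchatapp.herokuapp.com' },
};

// CRA sets NODE_ENV for us; hardcoded here so the sketch is deterministic.
const env = 'development';

console.log(config[env].endpoint); // http://localhost:5000
```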
Okay now let's start our local development environment from our root folder:
npm run dev
Last steps
For the last step, either create or update each file below manually or go directly to the GitHub repository to check out the project.
Replace
client/src/App.css:
body {
  background: #f5f5f5;
  padding: 16px;
}

#chat {
  max-height: calc(100vh - 128px);
  overflow: scroll;
  padding: 16px;
}

.name {
  color: rgba(0, 0, 0, 0.54);
}

.content {
  margin-bottom: 8px;
}
Replace
client/src/App.js:
import React from 'react';
import config from './config';
import io from 'socket.io-client';
import Paper from '@material-ui/core/Paper';
import Typography from '@material-ui/core/Typography';
import BottomBar from './BottomBar';
import './App.css';

class App extends React.Component {
  constructor(props) {
    super(props);
    this.state = {
      chat: [],
      content: '',
      name: '',
    };
  }

  componentDidMount() {
    this.socket = io(config[process.env.NODE_ENV].endpoint);

    // Load the last 10 messages in the window.
    this.socket.on('init', (msg) => {
      let msgReversed = msg.reverse();
      this.setState((state) => ({
        chat: [...state.chat, ...msgReversed],
      }), this.scrollToBottom);
    });

    // Update the chat if a new message is broadcasted.
    this.socket.on('push', (msg) => {
      this.setState((state) => ({
        chat: [...state.chat, msg],
      }), this.scrollToBottom);
    });
  }

  // Save the message the user is typing in the input field.
  handleContent(event) {
    this.setState({
      content: event.target.value,
    });
  }

  // Save the name the user is typing in the input field.
  handleName(event) {
    this.setState({
      name: event.target.value,
    });
  }

  handleSubmit(event) {
    // Prevent the form from reloading the current page.
    event.preventDefault();

    // Send the new message to the server.
    this.socket.emit('message', {
      name: this.state.name,
      content: this.state.content,
    });

    this.setState((state) => {
      // Update the chat with the user's message and clear the input.
      return {
        chat: [...state.chat, {
          name: state.name,
          content: state.content,
        }],
        content: '',
      };
    }, this.scrollToBottom);
  }

  // Always make sure the window is scrolled down to the last message.
  scrollToBottom() {
    const chat = document.getElementById('chat');
    chat.scrollTop = chat.scrollHeight;
  }

  render() {
    return (
      <div className="App">
        <Paper id="chat" elevation={3}>
          {this.state.chat.map((el, index) => {
            return (
              <div key={index}>
                <Typography variant="caption" className="name">
                  {el.name}
                </Typography>
                <Typography variant="body1" className="content">
                  {el.content}
                </Typography>
              </div>
            );
          })}
        </Paper>
        <BottomBar
          content={this.state.content}
          handleContent={this.handleContent.bind(this)}
          handleName={this.handleName.bind(this)}
          handleSubmit={this.handleSubmit.bind(this)}
          name={this.state.name}
        />
      </div>
    );
  }
};

export default App;
Create
client/src/BottomBar.js:
import React from 'react';
import { fade, makeStyles } from '@material-ui/core/styles';
import AppBar from '@material-ui/core/AppBar';
import InputBase from '@material-ui/core/InputBase';
import Toolbar from '@material-ui/core/Toolbar';
import ChatIcon from '@material-ui/icons/Chat';
import FaceIcon from '@material-ui/icons/Face';

const useStyles = makeStyles(theme => ({
  appBar: {
    bottom: 0,
    top: 'auto',
  },
  inputContainer: {
    backgroundColor: fade(theme.palette.common.white, 0.15),
    '&:hover': {
      backgroundColor: fade(theme.palette.common.white, 0.25),
    },
    borderRadius: theme.shape.borderRadius,
    marginLeft: theme.spacing(1),
    position: 'relative',
    width: '100%',
  },
  icon: {
    width: theme.spacing(7),
    height: '100%',
    position: 'absolute',
    pointerEvents: 'none',
    display: 'flex',
    alignItems: 'center',
    justifyContent: 'center',
  },
  inputRoot: {
    color: 'inherit',
  },
  inputInput: {
    padding: theme.spacing(1, 1, 1, 7),
    width: '100%',
  },
}));

export default function BottomBar(props) {
  const classes = useStyles();

  return (
    <AppBar position="fixed" className={classes.appBar}>
      <Toolbar>
        <div className={classes.inputContainer} style={{maxWidth: '200px'}}>
          <div className={classes.icon}>
            <FaceIcon />
          </div>
          <InputBase
            onChange={props.handleName}
            value={props.name}
            placeholder="Name"
            classes={{
              root: classes.inputRoot,
              input: classes.inputInput,
            }}
            inputProps={{ 'aria-label': 'name' }}
          />
        </div>
        <div className={classes.inputContainer}>
          <form onSubmit={props.handleSubmit}>
            <div className={classes.icon}>
              <ChatIcon />
            </div>
            <InputBase
              onChange={props.handleContent}
              value={props.content}
              placeholder="Type your message..."
              classes={{
                root: classes.inputRoot,
                input: classes.inputInput,
              }}
              inputProps={{ 'aria-label': 'content' }}
            />
          </form>
        </div>
      </Toolbar>
    </AppBar>
  );
}
Every time you update the code, you should see the project at automatically reload with the last changes.
Finally, let's push our latest update to GitHub to trigger a new deployment on our live project:
git add . git commit -m "Final update" git push origin master
Et voilà, Bob's your uncle! Our chat is now finished and ready:
If you have any question, feel free to ask in the comments, I'll be glad to answer it and improve this tutorial. And feel free to fork the project to improve it ;)
Discussion (30)
I'm unable to deploy the master branch. I'm getting an error in the build.
npm ERR! code ENOENT
npm ERR! syscall open
npm ERR! path /tmp/build_4120397c_/client/package.json
npm ERR! errno -2
npm ERR! enoent ENOENT: no such file or directory, open '/tmp/build_4120397c_/client/package.json'
npm ERR! enoent This is related to npm not being able to find a file.
npm ERR! enoent
-----> Build failed
! Push rejected, failed to compile Node.js app.
! Push failed
Hi , I faced the same issue , but in my case there was a .git folder present in my client folder which was preventing the deploy . just delete that .git folder and reinitialise your repository and the build will automatically start on heroku.
Hello, I have the same kind of issue first the same as syedabra003 but I added the node version to my package.json and then the error switched now to this one. I'm at the early step of the guide, when trying manual deploy with Heroku, I started from scratch two times but still the same result, 'npm run dev' runs fine on local so if any clues ? Thanks !
Hi, how do you reproduce the error exactly? Which command do you try to execute? Thanks
Hey,
I followed every step in the post. This output is occurred when I entered 'npm run dev' in terminal.
In case anybody else runs into the same issue, I think the client's package "name" needs to be "client", in case you copy in your own react app... Also, Procfile should not have ' ' around the 'web: ... ' part. Not 100% if that's what solved it for me, but oh well.
Hi Drybone,
Yes, the Procfile should not include the single quote, I am using macOS and the command I wrote in the article is not adding the quote. Maybe it does under another OS? Anyway, the file should be like this:
github.com/armelpingault/speedchat...
And what do you mean exactly by
the client's package "name" needs to be "client"?
I am not sure I understand :)
yes removing the single quotes solved this issue for me.
In server/Message.js file:
const messageSchema = new mongoose.Schema({
content: String,
name: String,
}, {
timestamps: true, -----> this line
});
Is timestamp true by default?
I tried _id.getTimestamp() on a MongoDB _id in which "timestamp: true" is not passed as a parameter, but is still returning the timestamp.
So, is it truly necessary or does it have any other use other than storing the time of creation?
timestamp : true, creates a createdAt and updatedAt field while your inserting documents in the document itself, by default it is not added.
The latter getTimestamp() is a function which finds created date. So there is a difference.
I think you interpreted my doubt in the wrong way.
I didn't pass "timestamps: true" as a parameter. And still, it returned the createdAt field when I tried "_id.getTimestamp()". Then what is the use of passing "timestamps: true" as a parameter?
Lets take a sample model for a signup in mongoose -->
var userSchema = new Schema({
password : String,
fullName : String,
userName : {
type : String,
unique : true
}
})
This piece of code will create a mongodb document of this format -->
{
"id" : ObjectId("5eac7f0101dce40f15a97e8d"),
"userName" : "hv98",
"fullName" : "asd",
"password" : "asd",
"_v" : 0
}
Notice this doesn't have the createdAt and updatedAt fields
Now a sample model with the timestamp true field -->
var imageSchema = new Schema({
username : String,
description : String,
imagePath : {
type : String
},
likes : [String],
nsfw : {
type : Boolean,
default : false
}
},{
timestamps : true
})
A document from this model would look like this -->
"id" : ObjectId("5eb02f999a15002d41f83e14"),
"likes" : [
"hv98"
],
"nsfw" : false,
"username" : "hv98",
"description" : "d",
"imagePath" : "1588604825052IMG_3265.JPG",
{
"_id" : ObjectId("5eb1581ff810f83199fca925"),
"username" : "hv98",
"comment" : "dd",
"updatedAt" : ISODate("2020-05-05T12:12:15.736Z"),
"createdAt" : ISODate("2020-05-05T12:12:15.736Z")
}
],
"createdAt" : ISODate("2020-05-04T15:07:05.068Z"),
"updatedAt" : ISODate("2020-05-05T12:20:37.408Z"),
"_v" : 0
}
Now if you notice this document has a field called createdAt and updatedAt which was not the case in the earlier one
So when you use _id.getTimestamp() you get the timestamp but it is not a field which is already present in the document but something which the function does and if you have the timestamp : true then this is a field in the document and doesn't require an extra function to be called.
I hope this can settle the difference.
Edit -- **
**One of the uses of the createdAt field is displaying the documents in ascending or descending order.
eg code -->
Image.find({}).sort({ createdAt: -1 }).exec(function(err, docs) {
if(err) console.log(err);
res.json(docs);
});
This returns all the documents and sort them in ascending order that is the latest doc is displayed first and sends it to your client.
Amazing explanation Harsh. This cleared all my doubts.
Thanks for the reply Harsh ;)
Hi I have an issue to share with you guys. I got an issue if I wrap the with . You will have your code inside setState run twice. The possible way to fix this is to move the logic outside of setState. I have fix this two setState in App.js.
in componentDidMount -> I moved the msg.reverse() outside setState
and in handleSubmit -> I moved the this.socket.emit function call outside. preventing from emit the message twice
Hope it can help. Thank you Armel.
Hi Yodi, thanks a lot, I have updated the source code on Github and in the article ;)
Can you please explain why you use socket.broadcast.emit for your 'push' event? It seems like socket.emit would work just fine but it doesn't. I've read this cheat sheet and it doesn't seem to explain why it wouldn't work:
socket.io/docs/v3/emit-cheatsheet/...
Hi Jeff, you might be right, I didn't test it with socket.emit, but it could be a mistake on my side ;)
Thanks for the guide. I needed to install 'dotenv' and add require('dotenv').config(); to the top of index.js so that I could run it locally. I also needed to add this since I was using socket.io v3.0+:
const io = require('socket.io')(http, {
cors: {
origin: "localhost:3000",
methods: ["GET", "POST"]
}
});
To avoid the CORS errors
socket.io/docs/v3/migrating-from-2...
I tried this on my localhost but here I got
"GET localhost:5000/socket.io/?EIO=4&tr... net::ERR_FAILED"
"Access to XMLHttpRequest at 'localhost:5000/socket.io/?EIO=4&tr...' from origin 'localhost:3000' has been blocked by CORS policy: No 'Access-Control-Allow-Origin' header is present on the requested resource."
If you're using socket.io v3.0+ you need to add this to index.js in the server (replace old code with this):
const io = require('socket.io')(http, {
cors: {
origin: "localhost:3000",
methods: ["GET", "POST"]
}
});
socket.io/docs/v3/migrating-from-2...
After using that package.json file will be created I heard but I am unable to create if you an idea or suggestion where I could do a mistake
{
“name”: “test”,
“version”: “1.0.0”,
“description”: “”,
“main”: “index.js”,
“scripts”: {
“test”: “echo \”Error: no test specified\” && exit 1"
},
“author”: “”,
“license”: “ISC”
}
CETPA- app directory is now set
hey , mlab is discontinued and im completely new to heroku . which addon should be used now?
Hi Prachita, you can find a free development solution on mongodb.com/, this is what I am using right now ;)
Why do you need a Procfile in your file structure?
inside Procfile, web: node server/index.js.
Is it only for development? Does this file use when deploying the app?
Hi, it's used by Heroku: devcenter.heroku.com/articles/proc...
Is there a reason you decided to let people enter a name? Anyone can type someone else's name and then type.
Hi Siddhant, well, like any other website, you can always use someone else name, and the point here is to show a tutorial about the MERN stack, not really about verifying the identity of a user. But you can always implement your own solution to verify your users' identity.
changing the speechat app into a component. Is there a simple solution to change file structure? that way i can use the chat in a seperate app?
Yes, it depends exactly how you want to integrate it into your app. It's a bit hard to tell you exactly how to do it without an example though :) | https://practicaldev-herokuapp-com.global.ssl.fastly.net/armelpingault/how-to-create-a-simple-and-beautiful-chat-with-mongodb-express-react-and-node-js-mern-stack-29l6 | CC-MAIN-2021-10 | refinedweb | 4,130 | 66.23 |
Python – the Handy defaultdict
Python – the Handy defaultdict
The collections module has a pretty handy tool called defaultdict. Allow me to show you an example of how it's used and what it's useful for...
Join the DZone community and get the full member experience.Join For Free
The collections module has a pretty handy tool called defaultdict. The defaultdict is a subclass of Python’s dict that accepts a default_factory as its primary argument. The default_factory is usually a Python type, such as int or list, but you can also use a function or a lambda too. Let’s start by creating a regular Python dictionary that counts the number of times each word is used in a sentence:
sentence = "The red for jumped over the fence and ran to the zoo for food" words = sentence.split(' ') reg_dict = {} for word in words: if word in reg_dict: reg_dict[word] += 1 else: reg_dict[word] = 1 print(reg_dict)
If you run this code, you should see output that is similar to the following:
{'The': 1, 'and': 1, 'fence': 1, 'food': 1, 'for': 2, 'jumped': 1, 'over': 1, 'ran': 1, 'red': 1, 'the': 2, 'to': 1, 'zoo': 1}
Now, let’s try doing the same thing with defaultdict!
from collections import defaultdict sentence = "The red for jumped over the fence and ran to the zoo for food" words = sentence.split(' ') d = defaultdict(int) for word in words: d[word] += 1 print(d)
You will notice right away that the code is much simpler. The defaultdict will automatically assign zero as the value to any key it doesn’t already have in it. We add one so it makes more sense and it will also increment if the word appears multiple times in the sentence.
defaultdict(<class 'int'>, {'The': 1, 'and': 1, 'fence': 1, 'food': 1, 'for': 2, 'jumped': 1, 'over': 1, 'ran': 1, 'red': 1, 'the': 2, 'to': 1, 'zoo': 1})
Now, let’s try using a Python list type as our default factory. We’ll start off with a regular dictionary first, as before.
my_list = [(1234, 100.23), (345, 10.45), (1234, 75.00), (345, 222.66), (678, 300.25), (1234, 35.67)] reg_dict = {} for acct_num, value in my_list: if acct_num in reg_dict: reg_dict[acct_num].append(value) else: reg_dict[acct_num] = [value] print(reg_dict)
This example is based on some code I wrote a few years ago. Basically, I was reading a file line by line and needed to grab the account number and the payment amount and keep track of them. Then at the end, I would sum up each account. We’re skipping the summing part here. If you run this code, you should get some output similar to the following:
{345: [10.45, 222.66], 678: [300.25], 1234: [100.23, 75.0, 35.67]}
Now, let’s reimplement this code using defaultdict:
from collections import defaultdict my_list = [(1234, 100.23), (345, 10.45), (1234, 75.00), (345, 222.66), (678, 300.25), (1234, 35.67)] d = defaultdict(list) for acct_num, value in my_list: d[acct_num].append(value) print(d)
Once again, this cuts out the if/else conditional logic and makes the code easier to follow. Here’s the output from the code above:
defaultdict(<class 'list'>, {345: [10.45, 222.66], 678: [300.25], 1234: [100.23, 75.0, 35.67]})
This is some pretty cool stuff! Let’s go ahead and try using a lambda too as our default_factory!
>>> from collections import defaultdict >>> animal = defaultdict(lambda: "Monkey") >>> animal['Sam'] = 'Tiger' >>> print animal['Nick'] Monkey >>> animal defaultdict(<function <lambda> at 0x7f32f26da8c0>, {'Nick': 'Monkey', 'Sam': 'Tiger'})
Here, we create a defaultdict that will assign 'Monkey' as the default value to any key. The first key we set to 'Tiger', then the next key we don’t set at all. If you print the second key, you will see that it got assigned 'Monkey'. In case you haven’t noticed yet, it’s basically impossible to cause a KeyError to happen as long as you set the default_factory to something that makes sense. The documentation does mention that if you happen to set the default_factory to None, then you will receive a KeyError. Let’s see how that works:
>>> from collections import defaultdict >>> x = defaultdict(None) >>> x['Mike'] Traceback (most recent call last): Python Shell, prompt 41, line 1 KeyError: 'Mike'
In this case, we just created a very broken defaultdict. It can no longer assign a default to our key, so it throws a KeyError instead. Of course, since it is a subclass of dict, we can just set the key to some value and it will work. But, that kind of defeats the purpose of the defaultdict.
Wrapping Up
Now you know how to use the handy defaultdict type from Python’s collection module. You can use it for much more than just assigning default values as you’ve just seen. I hope you will find some fun uses for this in your own code.
Published at DZone with permission of Mike Driscoll , DZone MVB. See the original article here.
Opinions expressed by DZone contributors are their own.
{{ parent.title || parent.header.title}}
{{ parent.tldr }}
{{ parent.linkDescription }}{{ parent.urlSource.name }} | https://dzone.com/articles/python-the-handy-defaultdict?fromrel=true | CC-MAIN-2019-18 | refinedweb | 863 | 74.19 |
updated copyright years: : save-mem-dict { addr1 u -- addr2 u } 22: here { addr2 } 23: u allot 24: addr1 addr2 u move 25: addr2 u ; 26: 27: : delete-prefix ( c-addr1 u1 c-addr2 u2 -- c-addr3 u3 ) 28: \ if c-addr2 u2 is a prefix of c-addr1 u1, delete it 29: 2over 2over string-prefix? if 30: nip /string 31: else 32: 2drop 33: endif ; 34: 35: : update-image-included-files ( -- ) 36: included-files 2@ { addr cnt } 37: image-included-files 2@ { old-addr old-cnt } 38: align here { new-addr } 39: cnt 2* cells allot 40: new-addr cnt image-included-files 2! 41: old-addr new-addr old-cnt 2* cells move 42: cnt old-cnt 43: U+DO 44: addr i 2* cells + 2@ 45: s" GFORTHDESTDIR" getenv delete-prefix save-mem-dict 46: new-addr i 2* cells + 2! 47: LOOP 48: maxalign ; 49: 50: : dump-fi ( addr u -- ) 51: w/o bin create-file throw >r 52: update-image-included-files 53: update-image-order 54: here forthstart - forthstart 2 cells + ! 55: forthstart 56: begin \ search for start of file ("#! " at a multiple of 8) 57: 8 - 58: dup 3 s" #! " str= 59: until ( imagestart ) 60: here over - r@ write-file throw 61: r> close-file throw ; 62: 63: : savesystem ( "name" -- ) \ gforth 64: name dump-fi ; | https://www.complang.tuwien.ac.at/cvsweb/cgi-bin/cvsweb/gforth/savesys.fs?hideattic=0;f=h;only_with_tag=MAIN;ln=1;rev=1.10 | CC-MAIN-2021-49 | refinedweb | 222 | 53.65 |
Intro: Halloween Dropping Spider
My Halloween project consisted of a dropping spider triggered by a PIR motion sensor mounted on a Jackolantern and controlled by an arduino MCU. The motion sensor triggered a dropping spider, lights, sounds, low laying fog and finally a tweet with a picture attached ().
Step 1: Parts List
- Toy recorder
-
- Fake spider
- Lights and accesories
- Incandescent black light, strobe light, black light bulbs.
Step 2: Setup
It's all pretty basic. The arduino controlled the PIR motion sensor, the servos for dropping spider reel, Jackolantern LED lights, toy with scary sound, and the X10 CM17A (you can control as many x10 devices as you want)..
Step 3: Arduino Sketch
I used the 1K resistors on the led1Pin and led2Pin.
I used the 10K resistor on speakerPin going to 2N2222 transistor base, ground to the emitter. Then the emiter went to one side of the toy switch and the collector went to the other. This worked the 2N2222 transistor as a switch.
Look at the comments for the arduino pin wiring.
#include <X10Firecracker.h>
#include <Servo.h>
Servo myservo; // New instance of Servo.h
int rtsPin = 2; // RTS line for C17A - DB9 pin 7
int dtrPin = 3; // DTR line for C17A - DB9 pin 4
// Connect DB9 pin 5 to ground.
int servoPin = 5; // Servo used to lift the reel
int pirPin = 8;
int led1Pin = 10; // Left led
int led2Pin = 11; // Right led
int speakerPin = 12; // Piezo buzzer speaker
int bitDelay = 1; // mS delay between bits (1 mS OK)
int ledStatus = 0;
int calibrationTime = 30;
long unsigned int lowIn;
long unsigned int pause = 5000;
boolean lockLow = true;
boolean takeLowTime;
int booCounter = 1;
void setup(){
Serial.begin(9600); // Start serial communication at 9600 baud rate
pinMode(led1Pin, OUTPUT); // Set led1Pin digital pin to output
pinMode(led2Pin, OUTPUT); // Set led1Pin digital pin to output
pinMode(speakerPin, OUTPUT);// Set speakerPin digital pin to output
pinMode(servoPin, OUTPUT); // Set led1Pin digital pin to output
myservo.attach(7); // Atach servo on pin 7 for continous rotation servo
X10.init(rtsPin, dtrPin, bitDelay); // Initialize X10 C17A
pinMode(pirPin, INPUT);
digitalWrite(pirPin, LOW);
//give the sensor some time to calibrate
Serial.print("calibrating sensor ");
for(int i = 0; i < calibrationTime; i++){
Serial.print(".");
delay(1000);
}
Serial.println(" done");
Serial.println("SENSOR ACTIVE");
delay(50);
myservo.write(140);
}
void loop(){
if(digitalRead(pirPin) == HIGH){
Serial.print("[[[get|]]]"); // send serial message to iobridge.
digitalWrite(led1Pin, HIGH); //the led visualizes the sensors output pin state
digitalWrite(led2Pin, HIGH); //the led visualizes the sensors output pin state
if(lockLow){
// makes sure we wait for a transition to LOW before any further output is made:
lockLow = false;
// Release the reel by lifting servo.
myservo.write(140);
// Turn on toy with sound
digitalWrite(speakerPin, HIGH);
delay(100);
digitalWrite(speakerPin, LOW);
delay(7000);
myservo.write(65);
// send x10 commands to trun off/on lights
X10.sendCmd( hcC, 1, cmdOn );
X10.sendCmd( hcC, 3, cmdOn );
X10.sendCmd( hcC, 2, cmdOff );
int var = 0;
// Activate the continous rotation servo.
while(var < 800){
digitalWrite(servoPin,HIGH);
delayMicroseconds(1200); // 1.5ms
digitalWrite(servoPin,LOW);
delay(20); // 20ms
var++;
}
delay(50);
}
takeLowTime = true;
}
if(digitalRead(pirPin) == LOW){
if (ledStatus == 0){
digitalWrite(led1Pin, HIGH);
digitalWrite(led2Pin, LOW);
ledStatus = 1;
delay(100);
}
else{
digitalWrite(led1Pin, LOW);
digitalWrite(led2Pin, HIGH);
ledStatus = 0;
delay;
// Send x10 commands
X10.sendCmd( hcC, 1, cmdOff );
X10.sendCmd( hcC, 3, cmdOff );
X10.sendCmd( hcC, 2, cmdOn );
}
}
}
Step 4: Spider Reel
I end up using an VHS tape as a reel. I had to modify one servo to have continuous rotation. I used this guide to do so. The second servo just did the lift part.
Step 5: IoBridge Monitor
This is the bash script I used to trigger a sound as well as send a twitpic.
I used my mac os x Apache 2 server. I had to give write permissions
Step 6: Fog Machine X10 Control
I got this fog machine that comes with manual fog release switch.
I just soldered the 125VAC/10A DPDT Plug-In Relay to the switch and connected to an X10 appliance module.
Step 7: Fog Chiller
I made this low laying fog cooler following this instructable.
Step 8: Raw Video
This video just shows the basic stuff without the sounds, fog and lights.
Third Prize in the
Halloween Contest
7 Discussions
7 years ago on Introduction
you need a warning sign that says warning spiders dropping(like a ped xing sign)
8 years ago on Step 4
could we have a little more information on how the servo turns the video tape? Is there a magnet glued inside the tape? or is it by pure friction?
Reply 7 years ago on Introduction
Is pure friction. I added a metal square bracket that fits right in the VHS tape.
8 years ago on Introduction
Step 7's link to the 'Fog Chiller' instructable is dead. Nice project though. Im thinking im going to do the dropping spider this year.
Reply 7 years ago on Introduction
I guess that Instructable no longer exist :( here a link to a similar one:
8 years ago on Step 3
As of Nov 6, there's a bug in displaying certain kinds of content, including code and some kinds of ASCII art. We're working on fixing this. You don't need to do anything; the text hasn't been deleted, it's just not displaying properly.
Sorry about that and we'll have it back as soon as we can!
8 years ago on Introduction
I feel pretty sure I have trick or treated at your house once before. How very odd | https://www.instructables.com/id/Halloween-Dropping-Spider/ | CC-MAIN-2018-43 | refinedweb | 929 | 64.41 |
.
#include <sys/socket.h> ... bind (sd, (struct sockaddr *) &addr, length);
#include <sys/un.h> ... bind (sd, (struct sockaddr_un *) &addr, length);
#include <netinet/in.h> ... bind (sd, (struct sockaddr_in *) &addr, length);
In the UNIX domain, binding a name creates a named socket in the file system. Use unlink() or rm () to remove the socket.:
A SOCK_STREAM socket is discarded by calling close()..
level = SOL_SOCKET), in which case the socket option name must be specified. To manipulate options at any other level the protocol number of the desired protocol controlling the option of interest must be specified (see getprotoent() in getprotobyname()).
These two programs show how you can establish a socket connection using the above functions.
#include <sys/types.h> #include <sys/socket.h> #include <sys/un.h> #include <stdio.h> #define NSTRS 3 /* no. of strings */ #define ADDRESS "mysocket" /* addr to connect */ /* * Strings we send to the client. */ char *strs[NSTRS] = { "This is the first string from the server.\n", "This is the second string from the server.\n", "This is the third string from the server.\n" }; main() { char c; FILE *fp; int fromlen; register int i, s, ns, len; struct sockaddr_un saun, fsaun; /* * Get a socket to work with. This socket will * be in the UNIX domain, and will be a * stream socket. */ if ((s = socket(AF_UNIX, SOCK_STREAM, 0)) < 0) { perror("server: socket"); exit(1); } /* * Create the address we will be binding to. */ saun.sun_family = AF_UNIX; strcpy(saun.sun_path, ADDRESS); /* * Try to bind the address to the socket. We * unlink the name first so that the bind won't * fail. * * The third argument indicates the "length" of * the structure, not just the length of the * socket name. */ unlink(ADDRESS); len = sizeof(saun.sun_family) + strlen(saun.sun_path); if (bind(s, &saun, len) < 0) { perror("server: bind"); exit(1); } /* * Listen on the socket. */ if (listen(s, 5) < 0) { perror("server: listen"); exit(1); } /* * Accept connections. When we accept one, ns * will be connected to the client. fsaun will * contain the address of the client. */ if ((ns = accept(s, &fsaun, &fromlen)) < 0) { perror("server: accept"); exit(1); } /* * We'll use stdio for reading the socket. */ fp = fdopen(ns, "r"); /* * First we send some strings to the client. */ for (i = 0; i < NSTRS; i++) send(ns, strs[i], strlen(strs[i]), 0); /* * Then we read some strings from the client and * print them out. */ for (i = 0; i < NSTRS; i++) { while ((c = fgetc(fp)) != EOF) { putchar(c); if (c == '\n') break; } } /* * We can simply use close() to terminate the * connection, since we're done with both sides. */ close(s); exit(0); }
#include <sys/types.h> #include <sys/socket.h> #include <sys/un.h> #include <stdio.h> #define NSTRS 3 /* no. of strings */ #define ADDRESS "mysocket" /* addr to connect */ /* * Strings we send to the server. */ char *strs[NSTRS] = { "This is the first string from the client.\n", "This is the second string from the client.\n", "This is the third string from the client.\n" }; main() { char c; FILE *fp; register int i, s, len; struct sockaddr_un saun; /* * Get a socket to work with. This socket will * be in the UNIX domain, and will be a * stream socket. */ if ((s = socket(AF_UNIX, SOCK_STREAM, 0)) < 0) { perror("client: socket"); exit(1); } /* * Create the address we will be connecting to. */ saun.sun_family = AF_UNIX; strcpy(saun.sun_path, ADDRESS); /* * Try to connect to the address. For this to * succeed, the server must already have bound * this address, and must have issued a listen() * request. * * The third argument indicates the "length" of * the structure, not just the length of the * socket name. */ len = sizeof(saun.sun_family) + strlen(saun.sun_path); if (connect(s, &saun, len) < 0) { perror("client: connect"); exit(1); } /* * We'll use stdio for reading * the socket. */ fp = fdopen(s, "r"); /* * First we read some strings from the server * and print them out. */ for (i = 0; i < NSTRS; i++) { while ((c = fgetc(fp)) != EOF) { putchar(c); if (c == '\n') break; } } /* * Now we send some strings to the server. */ for (i = 0; i < NSTRS; i++) send(s, strs[i], strlen(strs[i]), 0); /* * We can simply use close() to terminate the * connection, since we're done with both sides. */ close(s); exit(0); }
Exercise 12776
Configure the above socket_server.c and socket_client.c programs for you system and compile and run them. You will need to set up socket ADDRESS definition. | http://www.cs.cf.ac.uk/Dave/C/node28.html | CC-MAIN-2015-27 | refinedweb | 722 | 77.23 |
Asked by:
FTDI driver installation in ICE
Question
All replies
Download,
extract gubbins,
put in suitable place in out-of-box-drivers,
import a pmq (or manually add) with the FTDI driver in.
Sorry, no FTDI here, they are really hard to find in Japan (everyone loves those horrible prolific chips) so I can't prove it, but it should be trivial.
=^x^=
- Please do the following steps:
- Named a folder called "FTDI" folder in Out-of-Box Drivers.
- Download the FTDI driver and put all unziped files in the FTDI folder
- Expand the Out-of-Box Drivers components to expose the FTDI folder and right click it. then choose Insert Driver Path to Pass 2 offlineServicing.
When system boots up, you will find the FTDI driver will automatically be installed.
I must be missing something. I followed the steps above to create an FTDI directory in my Out-of-Box drivers portion of distribution share. All the files are unzipped and copied directly under \FTDI
After installing the image using IBW on the target device, the FTDI does not get installed.
I end up resolving the issues by hand by copying the driver files to a thumb drive and running "updating driver" for "USB Serial Port" from Device Manager on the target device.
Does anyone have a notion of what I may be doing wrong?
Embedded Sr Software Engineer
I was able to pre-install the latest FTDI UART drivers using the dpinst.exe program and a RunSynchronuous (RS) command. The "CDM v2.10.00 WHQL Certified.exe" driver package from FTDI includes the dpinst.exe program in both x86 and amd64 versions. A separate utility, dp-chooser.exe, figures out what platform you are running. This is how the FTDI installer works natively. You can pick which dpinst program version based upon the type of platform you are installing to. You will need to extract the FTDI driver .exe file to get at the components.
I created an $OEM$ Folder at c:\Drivers\ftdi and copied all of the FTDI driver install files here. I then created an RS command with a "path = c:\Drivers\ftdi\dpinst-x86.exe /q" to perform a quiet install. The RS command is defined in Deployment and runs in pass P4.
I can plug in an FTDI USB UART device after the OS installation and the device is enumerated correctly and the FTDI driver loads. May not be the most elegant solution, but it works.
Glenn | https://social.msdn.microsoft.com/Forums/en-US/01e7c3eb-e27f-4952-a0d4-dac9e2ed5448/ftdi-driver-installation-in-ice?forum=quebeccomponentsforum | CC-MAIN-2020-50 | refinedweb | 414 | 66.33 |
Beyond Java/Contenders
From WikiContent
It was my first Class IV river, and I approached the infamous Five Falls. In the typically tame Ouachita mountain range, the Cassatot—Indian for Skull Crusher—was serious. In all honesty, I wasn't ready for the river. Unseen gremlins sent massive jets and waves of water shooting through the waterfalls and toyed with me, smashing my boat against rocks, turning me around, and flipping me over at will. In this chapter, I'll touch on the major contenders and some also-rans.
The Primary Contenders
So far, I've taken an in-depth look at one language and two application development models. I just don't have the time or will to do a comprehensive treatment of languages, but this book wouldn't be complete without at least mentioning some of the major alternatives. I'll take a longer look at the ones I see as most promising, and then mention a few that I see as less likely.
I've got a few things working against me. I like short books, so there's not enough time to do a remotely comprehensive treatment. Even if I were inclined to do so, my practical experience is limited to some Ruby, a little Smalltalk, and a few lines of Lisp in college. I'm just one Java developer, prejudging the overall landscape based on my limited experience. In my favor are my broad and diverse network, an excellent set of reviewers, good access to corporate opinions at major vendors and customers, and a strong track record of predicting successful technologies.
Instead of picking a winner, I'd just like to lay out the factors in favor of a language, and those against. In such a short treatment of this problem, I'm not going to be able to do any remotely complete treatments of any given language, but based on Java's history and this community, I should be able to give you a good sense of what's important.
Ruby
Of all the languages generating a buzz in the Java space, Ruby comes up the most frequently. The Java community invests passion in equal parts venom and bliss into the raging Java versus Ruby on Rails debate. This fervor interests me because Ruby, and Rails, get plenty of exposure within the Java community where more mature object-oriented languages like Python and Smalltalk do not. Exposure can translate to more exposure and more users. Developed in 1995, Ruby is relatively mature in calendar years, but it gained popularity first in Japan, and the worldwide community is just now starting to grow. Among the most promising contenders, Ruby has the interesting combination of being relatively mature and simultaneously undiscovered by the Java masses.
In favor
While Ruby doesn't have the support of something like Java, it does have pretty good commercial backing in Japan. It's got a healthy community, and awareness in the Java community. It's also got a good virtual machine. But the beauty of Ruby is primarily in the language. Ruby also tends to solve a few important problems very well:
- Ruby makes metaprogramming feel natural. Reflection is easy, and you can move and change methods quickly. Ruby's modules let you mix in important capabilities without changing any source code.
- Rails, the flagship Ruby framework, makes it easy to build web sites based on relational databases. In the past decade, no other application has been more important.
- Web-based development with other innovative approaches is easy. Ruby has at least three exploratory projects related to continuation servers.
Ruby is extremely dynamic and extensible. You can literally hook into Ruby everywhere. You can replace the methods of a whole class or a single instance at runtime. Ruby developers often introduce methods that themselves introduce other methods and behavior. The net effect is a single hook that lets you add significant capabilities to a class or instance with very little syntax.
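As a small sketch of that flexibility (the class and method names here are invented for illustration), you can reopen any class at runtime, or redefine behavior on a single instance without touching the rest:

```ruby
# Reopen the built-in String class and add a brand-new method.
class String
  def shout
    upcase + "!"
  end
end

# Give one instance its own version of the method
# (a singleton method); other strings are unaffected.
greeting = "hello"
def greeting.shout
  "(whispers) #{self}"
end

puts "ruby".shout    # prints RUBY!
puts greeting.shout  # prints (whispers) hello
```

Because every class stays open like this, a framework can graft new methods onto your classes at load time instead of resorting to code generation or byte code tricks.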
In my opinion, metaprogramming in some form will increasingly define modern programming. That's already happening in Java, with persistence engines like Hibernate, programming hooks like interceptors, programming models like aspect-oriented programming, and language extensions like annotations. To do metaprogramming effectively, you need to be able to extend a language to fit seamlessly within a domain. Languages that make this easy will move faster than languages that don't. Java limits the ways that you can extend a class, makes you work hard to do reflection, and pushes you toward unnatural techniques like byte code enhancement, code generation, and dynamic proxies. On the other hand, Ruby handles metaprogramming with ease. For example, Active Record, the persistence framework at the heart of Rails, defines belongs_to and has_many methods describing database relationships. Each method adds additional Ruby behavior and attributes to the decorated class. At the most basic level, the Ruby language itself uses metaprogramming to describe attributes. attr_accessor :name is shorthand for this:
def name=(value)
  @name = value
end

def name
  @name
end
You get a syntax with less repetition, and the language developers did not have to work very hard to give it to you. Of course, Java also does metaprogramming. It just doesn't do it very well.
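To see how little machinery such a macro needs, here is a hedged sketch that rebuilds the idea under a made-up name, my_accessor. Rails' belongs_to and has_many work on roughly the same principle: a class method that runs at class-definition time and defines other methods:

```ruby
# A hand-rolled cousin of attr_accessor (my_accessor is an invented name).
# Calling it inside a class body defines the reader and writer on the fly.
class Module
  def my_accessor(name)
    define_method(name) do
      instance_variable_get("@#{name}")
    end
    define_method("#{name}=") do |value|
      instance_variable_set("@#{name}", value)
    end
  end
end

class Person
  my_accessor :name
end

person = Person.new
person.name = "Matz"
puts person.name    # => Matz
```

Inside my_accessor, self is the class being defined, so define_method adds instance methods to that class; that is the whole trick behind these declarative-looking macros.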
Ruby interests me for several other reasons, too. Ruby is a chameleon with enough theoretical headroom to grow beyond Rails with ease, and a simple enough syntax to excite beginners and educators. Ruby will let you do functional programming, or play with continuations. You can write full web-based applications, or slip into scripting for rudimentary text processing. Ruby gives you a language that's theoretically pure, and practical.
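As a small illustration of that functional side, plain Ruby blocks already support a map/filter/fold style (variable names invented for the sketch):

```ruby
# Blocks make a functional style feel ordinary: no explicit loops,
# no mutation of the source collection.
squares = (1..5).map { |n| n * n }                 # [1, 4, 9, 16, 25]
evens   = squares.select { |n| n.even? }           # [4, 16]
total   = squares.inject(0) { |sum, n| sum + n }   # 55

puts total   # => 55
```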
Ruby might not have the extensive libraries of Java, but it's closing the gap rapidly. It's also worth mentioning that Ruby is doing so with a fraction of the developers, because Ruby is just so productive. As Java moves more and more toward metaprogramming, this productivity gap will increase.
Against
The biggest strike against Ruby right now is the lack of a strong project that lets Ruby run on the JVM. The JRuby project's vision is greater than a simple port to the JVM. So far, the project has had several stops and starts. It's not far enough along to, for example, run Ruby on Rails. Most in the Ruby community don't see the political importance of a language that runs on the JVM, but interest and participation in the project may be picking up. JRuby seeks to let you use Java classes with Ruby idioms. For example, you'll be able to use Ruby code blocks with Java collections. If Microsoft is able to woo the Ruby founders over to .NET's CLR, or if the JRuby project starts picking up momentum, you'll see one of the biggest strikes against Ruby go away. Still, the lack of a credible version that runs on a widely deployed virtual machine, be it Microsoft's or Java's, is a major strike against Ruby. To be fair, the JRuby project in the months just before publication has made incredible strides. It now passes over 90% of the test cases for the basic Ruby platform. When it reaches Version 1.0 and can run Ruby on Rails suitably, Ruby will become a much stronger contender. Any language that embraces and extends Java will be in a much stronger political position.
Also, Ruby does not have the excellent commercial backing of some of the other alternatives. For example, Google uses Python extensively. Though Ruby is gaining traction in Japan, and also at places like Amazon.com, it's still a relative unknown. You can't yet hire Ruby programmers in numbers, and your training options are limited. If the Rails project hits a critical mass, that will change in a hurry.
Overall
Major factors, including a comparative lack of libraries and the absence of a credible JVM implementation, argue against Ruby, but it's still a primary contender because of a possible catalyst in Rails, economic justification in productivity, and the database and web libraries that make it practical for a good set of problems in the enterprise. The language is theoretically pure and strong enough to last. You can integrate Java applications through web services and communication protocols, or C applications through a native interface. It has a virtual machine, and dialects for all major operating systems. If something challenges Java soon, I think Ruby is the most likely candidate.
Python
If ever you are looking for a test case for the requirement of a catalyst, look no further than Python. It has just about everything we're looking for—a good metamodel, a clean and readable syntax, dynamic typing, flexibility, and power. Python is also pretty natural for Java programmers. Here's a Python example from python.org:
def invert(table):
    index = { }                 # empty dictionary
    for key in table:
        value = table[key]
        if value not in index:
            index[value] = [ ]  # empty list
        index[value].append(key)
    return index
You'll notice a couple of striking things about Python right off the bat. First, unlike Java, you don't have to have a full class definition. Python is equally at home as a procedural language or an object-oriented one. Second, you don't see any syntax to end a block of code because whitespace matters. Indentation determines code grouping. Like many great programming languages, Python holds appeal for both beginners and advanced programmers. There's much to like.
In favor
Python has many of the same advantages as Ruby. It's dynamically typed, object-oriented, concise, and friendlier to applications than Java. It's easy to read, very consistent, and free. You can find interesting free libraries to do everything from web development to ORM. Python has the advantages of a productive applications language, and relatively numerous libraries. You can run it on Java's virtual machine in an environment called Jython.
Python has an extensive vibrant community. You can find support, hire developers, and get consulting. The open source libraries are numerous, but nowhere near the extent of Java's. Though overall growth has been sporadic, Python has gained limited traction in spots, in flagship accounts like Google.
Against
While Python has a few good web development frameworks, it doesn't yet have a Java-killer like Rails. I'm already seeing a few Rails clones emerge, like Subway, but none of them has the marketing punch behind Ruby on Rails. In fact, the primary strike against Python is the lack of a catalyst of any kind. The Python community is full of technical vision, but the marketing vision has so far been lacking.
Several influential Python bloggers have recognized the Ruby buzz in the Java community, and they make the point that Python doesn't yet have that compelling framework that might convert a Java developer. Java consultant Stuart Halloway moved to Python for better productivity, but he believes the Python community does not actively court the Java community. Many of them believe that Java is irrelevant.
A few minor technical details hold back Python. Some don't like the idea that whitespace is significant. That turns off some Java developers who like to condense repetitive Java constructs, such as default constructors or accessors, like this:
public String getName() {return name;}
public void setName(String name) {this.name=name;}
Overzealous enforcement of anything leads to problems with programmers, and whitespace is no different. When you dogmatically enforce whitespace, you also limit your expressiveness. For example, you might type:
if (character == eol) { line = file.next(); count++; }
because it expresses a single coherent thought as a sentence. Whitespace alone isn't the problem; it's the dogmatic enforcement of endless subjects like this one that rub some developers the wrong way. The overriding Python philosophy says there should be one obvious way to do something, and the language designers often go to great lengths to maintain those conventions, sometimes sacrificing flexibility to do so. The hope is that consistency will override any disadvantages. In the past, these kinds of attitudes have limited the flexibility of a language. Unless the language designers have perfect imagination, it's often best to let a language evolve in several different ways at once. The Python leadership does have a reputation as being somewhat frosty and dogmatic on these types of issues.
You can do metaprogramming in Python, with method or function pointers and using reflection, as well as other techniques. Those that have experience in both Python and Ruby seem to think that metaprogramming is more natural in Ruby. You can work with objects or not, which is a double-edged sword. Some (like the founder of Ruby) say Python might not be object-oriented enough.
Overall
Python has most of the tangible benefits you'd expect in a dynamic language, but it lacks the intangibles. New languages either pop when they're discovered, or they don't pop at all. Python never popped at all. Python is a nonentity in the Java community. That's a shame, because Jython makes it a viable political option when languages like Ruby aren't even considered. Python proponents looking to displace Java can argue that using Python amounts to a different syntax and some different libraries, and the rest of the infrastructure remains unchanged, but the often negative Java sentiment within the Python community works against Jython. Most Python developers don't understand that Java, too, is a powerful language, based on its extensive community, which leads to more libraries and massive commercial support.
With the emergence of some kind of killer app, Python could well emerge as a Java killer. Without it, Java developers think they already know what they need to know about Python, so there's no real reason to give it a second look.
Groovy
Groovy is a new dynamic scripting language. It's built to run on the JVM, and it's backed by the JCP with a JSR. It's still young, and it seems to be having problems getting to a solid, stable release.
Groovy is particularly interesting because it has none of the fundamental problems with marketing and acceptance in the Java community that the other languages have. Groovy's problem has been the execution: the speed and the implementation. So far, Groovy has lacked the sound, technical underpinnings of the other languages in this chapter, as well as a visionary to both innovate and see inventions through to a sound, stable conclusion.
In favor
I want to like Groovy. I really do. It has the marketing support, hype, and attention in the Java community. It runs in the virtual machine, ties in well to the Java language, and has political backing from Sun. James Strachan, a hero of sorts within the Java community, is the primary father, bringing an instant fanfare and credibility to the project. With a formal JSR, it's usually easier to introduce Groovy into a company as a scripting language than some other dynamic language. The syntax, though inconsistent, is terse, and the Groovy JSR supports many of the important features that dynamic languages should, at least in letter.
Against
The problem is that Groovy is just so hard to like. To this point, Groovy has been quirky, unpredictable, and full of bugs. Many features, introduced in very early versions of Groovy, remain uncompleted, and early shortcuts led to an unsound grammar. Early versions of Groovy used a hand-generated parser rather than a parser generator, such as ANTLR. After the syntax was belatedly retrofitted to ANTLR, the syntax was set in many ways, and the grammar was unwieldy.
Today, the fledgling language continues to struggle. People leading the project seem to be more interested in introducing new ideas than finishing old ones. Blogger Mike Spille was a Groovy insider who worked on the language, and later abandoned it due to significant problems with the language, technical vision, and stability. He pointed out major holes in the language and syntax around closures (a kind of code block), and you can also find a later heated debate between two of the early Groovy contributors on TheServerSide.com.
It seems like each major beta release breaks existing Groovy applications. Worse, the first major Groovy specification request broke existing applications. That's not good. Many of the core Groovy developers also seem to be leaving the original JSR team.
Overall
With a formal JSR backing it, Groovy is politically in a good place to succeed. After all, you could argue that EJB succeeded based on the reputations of the supporters, despite significant technical limitations. Groovy has some energy and hype, but a few false starts seem to be stalling the momentum. I'll undoubtedly get flamed for saying so, but right now, Groovy is much too young and too unstable to deserve serious consideration for any production application, let alone standardization.
That Groovy is buggy and unstable as a beta doesn't trouble me so much, though you'd expect core language features and syntax to be settled very early, and basic features like closures still don't work. I'm most concerned with the overall process. The community process standardized the Groovy language before it was mature, or even stabilized. To move forward in a productive way, Groovy must first solidify the major feature set, then recover some lost momentum, and then prove itself in some commercial niche before it will be considered as a significant candidate to replace Java anywhere. Until then, it's merely an experiment. I hope it succeeds, but I don't think it will. It simply has too far to go.
.NET
.NET is the only contender I've mentioned as a credible successor to Java that isn't itself a programming language. .NET is Microsoft's latest development platform, deserving special mention because it has a massive library, and a language-agnostic engine called the Common Language Runtime (CLR) that sits on top. If Microsoft makes .NET successful, and truly language-neutral, it could serve as a launching pad of sorts for many languages. Right now, like the JVM, the CLR has some technical issues to overcome before it can fully support dynamic languages like Ruby, but Microsoft is committed to doing so.
Language options
At some level, the programming libraries underneath .NET are far more important than the language. Their usage models frequently dictate application structure, often more than the choice of programming language. Still, Microsoft offers several programming languages, targeted at vastly different communities.
Visual Basic for .NET
Microsoft has a real problem on its hands with Visual Basic programmers. It seems many of those hundreds of thousands of active developers just don't like .NET, and they're looking for alternatives. The .NET framework changed the programming model for Visual Basic. So far, most of them either are actively deciding to pursue alternatives, or are passively waiting to upgrade. Either way, Microsoft loses. As a result, it looks like Visual Basic is in trouble.
In public, Java and .NET developers don't mix, but each community often reluctantly admits the strengths of the other. While married to a platform, Java developers have often stolen secretive longing looks at Visual Basic's productivity and user interface development framework. Visual Basic users secretly returned the flirtations, admiring Java's structure, if not productivity. I'm making an educated guess that Microsoft thought it could sneak in some more structure, believing that the BASIC syntax would trump the unfamiliar frameworks underneath. They were wrong.
Microsoft is making some moves toward satisfying the Visual Basic community. Some plans seem to favor a Visual Basic classic edition, which looks and acts more like the Visual Basic of old. To me, that move smacks of new Coke and Coca-Cola Classic, a public relations disaster.
C#
C# (pronounced see sharp) is a programming language that fills the role of Java for the .NET platform. There's not much to say about C# in a book called Beyond Java, because it's built to be similar to Java. You'll see a few minor exceptions, like reliance on unchecked exceptions rather than checked exceptions, and some syntactic sugar. Many of the recent changes in Java, like annotations and autoboxing, were introduced to keep up with .NET. For the most part, though, those looking to trade in Java and simultaneously lose their problems will find a whole new stack of problems, with a similar size and shape. C# is merely Java's evil twin.
Still, Microsoft seems willing to break from old versions of C# with a new language, under development, called C Omega. This language would potentially make some significant strides forward, and possibly even break compatibility with C#. Such a language could potentially offer the features of much more dynamic languages, with the commercial backing of Microsoft, and the CLR as a portable virtual machine. It bears watching. Still, it's proprietary, and many won't give it a serious try for that reason alone.
Other languages on the CLR
What's intriguing about .NET is not the Microsoft languages. It's the promise of open source languages on the CLR. Right now, since most of Microsoft's energy is undoubtedly focused on Visual Basic, C++, and C#, you're not going to see a library that's built to take advantage of important concepts like code blocks and continuations. Still, Microsoft actively courts insiders in the Ruby and Python communities, so you could see credible implementations of those languages soon.
A weakness and a strength
.NET and the CLR have one major problem: Microsoft. Sometimes its weight and muscle work in your favor, and sometimes they don't. It's not likely that the CLR will ever run as well on other platforms (say, Linux) as it does on Windows. With Microsoft's heavily proprietary stance and a complete lack of portability, it's tough to see the Java community embracing .NET. You may be surprised that I don't think Microsoft's posture will remain so pervasively proprietary, especially on the server side.
I've said before that market leaders want to be proprietary. All others need open standards to compete. Microsoft is simultaneously the market leader for client-side operating systems, and lumped in with everyone else for Internet and enterprise development. Proprietary frameworks make sense on the client, where Microsoft has had a near-monopoly for a long time now. They make a little less sense on the server side, where Microsoft has been unable to crack the market for medium and large systems. In time, I believe that Microsoft will recognize this reality and jump on the open source software bandwagon. I'm not the only one who thinks so. I sit on the expert panel of NoFluffJustStuff, one of the most successful and influential Java conferences outside of JavaOne. Stuart Halloway, one of the most respected Java consultants in areas such as metaprogramming and reflection, feels strongly that Microsoft will be the biggest open source vendor in the world, and Dave Thomas seems to agree.
If Microsoft does happen to move toward open source software in a credible way, and the Java community recognizes this, Microsoft will open the door to Java on the CLR, and more importantly, to the languages beyond.
Minor Contenders
Now, it's time to put on an asbestos suit and my +4 plate mail. I debated whether to include any sections on Perl, Lisp, PHP, or Smalltalk. They're fantastic languages in their own right. I just don't think they're next.
If you're deeply religious about any of these languages, you can just read these one-sentence summaries, and skip to the next section: Perl's too loose and too messy, PHP is too close to the HTML, Lisp is not accessible, and Smalltalk wasn't Java.
If you already feel slighted and you must read on—if you're a language cultist and I've mentioned your pet language in also-rans, or worse, didn't mention your pet language at all—go ahead and fire up your Gmail client and your thesaurus, and drop me a nasty note. Ted Neward reviewed this book, so I can take a few more euphemisms for the word sucks. Just keep this in mind: I'm not saying that your language isn't good, or popular. I'm just saying 10 years from now, we probably won't look back at any of these languages as the Java killer.
PHP
PHP is an open source scripting language that's been gathering momentum since the early 2000s. It's designed to be embedded directly into HTML. It's very easy to quickly develop simple web applications in PHP, but those applications typically have little back-end structure. For these reasons, it's not really targeting the same niche as Java applications, though it's sometimes been pressed into service in much the same way. Here is "Hello, World" in PHP:
<html>
  <head>
    <title>Hello, World</title>
  </head>
  <body>
    <?php echo '<p>Hello World</p>'; ?>
  </body>
</html>
Web programmers recognize this as an HTML scripting language. The code is processed on the server side, so pure HTML can be sent down to the client. It actually handles this kind of scripting pretty well, but it's purely a tag language. PHP's problem is the structure behind the view. It's possible to use PHP for layers behind the view, but it's awkward and cumbersome in that role.
PHP is going to make some serious noise as a pure web-based scripting language, though. In one of the strangest moves in 2005, IBM announced support for PHP. This move undoubtedly targeted the small and medium-size businesses that tend to embrace PHP. IBM can now theoretically sell them software and services to round out their implementations. PHP seems to be a natural language for those Visual Basic users who don't want to make the move to .NET. Like Visual Basic, it will be pressed into service in places where it doesn't fit as developers search for simplicity in the wrong places.
With the most basic Google skills, you can find dozens of papers that attempt to compare Java and PHP. You'll tend to find two types of comments. The PHP camp says that Java isn't productive enough, and the Java camp says that PHP isn't structured enough. I tend to agree with both of them. The primary danger with PHP for small applications is that they can grow into big PHP applications, and you're left without the structure that will let you easily maintain and extend your web applications.
Perl
Perl is a very popular language for programmers who look for raw first-cut efficiency. Perl was quite popular for shell scripts, before simpler alternatives were available. In terms of productivity, Perl has many of the characteristics of other highly productive languages. It's very expressive, terse, and dynamically typed. It gives you freedom to do what you want to do, and has a rapid feedback loop. Paul Graham calls it a great language for "hacking," or rapid experimental programming. Much of the Internet is powered by CGI Perl scripts.
Perl does have a downside. When you look at overall productivity of a language, you've also got to take things like maintenance and readability into account. Perl tends to rate very poorly among experts on a readability scale. As with Java, much of Perl's problem is cultural. Some Perl programmers would rather chop off their little finger than type four extra characters, whether the characters improve readability or not. After all, programs that were hard to write should be hard to read. Other Perl problems relate to the language itself. Perl's object orientation is obviously bolted on, and Perl has a secret handshake of sorts, in the form of many cryptic syntactic shortcuts that only the mother of Perl could love. A whole lot of us at one time or another have had some sort of love/hate relationship with Perl. It's interesting to talk about, but it's pretty much the antithesis of Java, and it's likely not going to make a dent.
Smalltalk
Smalltalk is a beautiful language invented way before its time. Smalltalk and Lisp are probably the two languages that share the most with Ruby. Smart developers used Smalltalk to build successful object-oriented applications long before Java was even a twinkle in Gosling's eye. And not-so-smart developers used Smalltalk to build some of the ugliest object-oriented code ever written. In truth, for the most part, in the mid- and late 1970s, we just didn't have the wisdom or the processing power for OOP yet, and we didn't have features like just-in-time compilers.
In Chapter 8, you saw the elegance of the Smalltalk language. It's object-oriented through and through, and the syntax is remarkably consistent. Smalltalk's syntax probably seemed strange to the masses of programmers who grew up coding COBOL, BASIC, Pascal, C, or C++. Most of the businesses I know of that actually tried Smalltalk were able to get their applications out in time; they just never were able to integrate those applications with the rest of the world.
Smalltalk never was able to lure C and C++ developers away, because it was too alien and had the perception of being too slow. As the small Smalltalk community waited for objects to emerge, Java's founders aggressively grabbed the C++ community by the throat, forced it to come along with C++ syntax and usage models, and offered solutions to solve the most pressing problems the C developers encountered. As we showed, Java was effectively a compromise between perfect OO and the C++ community. Later, IBM made a move to buy OTI, a maker of Smalltalk virtual machines. In one last push for Smalltalk, IBM built a common virtual machine into an IDE called Visual Age with the hopes that the common JVM could lend credibility to Smalltalk. It was too little, too late. We were too content in our newfound freedom, safely and freshly away from all things C++, in the arms of Java.
It's hard to imagine Smalltalk rising up from 30 years of obscurity to dominate. It's probably not going to happen. Still, you can find a small but active community of Smalltalk developers. Alan Kay's team, later at Disney, built Squeak, a Smalltalk dialect and implementation focusing on multimedia. A handful of other dialects are also still around.
In the end, Smalltalk may yet make an impact on development, but as the proving ground for ideas like continuation servers. You'll find evidence of Smalltalk's object model and syntax everywhere. Most notably, Ruby liberally borrows code blocks and idioms like returning self. I think continuation servers will ultimately play a role in web development. They just make too much sense, are too natural, and are too compelling. Smalltalk is where all the continuation research is happening.
Lisp
Lisp is an extremely powerful language that excels in its strange but pure syntax, abstract modeling, and raw efficiency. In Lisp, everything is a list, including Lisp programs. Metaprogramming in Lisp feels natural, and is quite popular. Important ideas like aspect-oriented programming and continuation servers started in Lisp. Several dialects like Dylan and Scheme appear periodically, but none has achieved much success in the commercial mainstream, beyond a macro language for Emacs. Still, start-ups often use Lisp because once you learn it, you can be incredibly productive. Some very successful programmers like Paul Graham (author of Hackers & Painters) believe Lisp is the most expressive programming language, and they could be right.
Lisp's community has always been made up of intelligent developers, and it's still popular among academics. In fact, some of the best programming universities, like MIT, emphasize Lisp early, to get students to quickly think in the abstract, and to expose them to functional techniques.
Maybe all languages will once again return to Lisp, but I don't think that Lisp itself is the ultimate answer. It's just too alien, and it takes too much time and effort to learn.
Functional Languages
It's probably a bit too early to be talking about functional languages, because we seem to be moving toward object-oriented languages instead. Still, functional programming provides a higher abstraction and very good productivity. It's possible that some functional language could explode, with the right killer app.
Haskell and Erlang are two of a family of programming languages called functional languages. Functions are the focus of functional languages. I use the word function in the pure mathematical sense:
- Functions have no side effects. This oddity takes some getting used to for most procedural programmers, but also has significant benefits.
- Functions return values.
- You can use the return value of a function anywhere you can use the returned type.
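These properties are easy to demonstrate in Ruby, the example language used earlier in the chapter; both function names below are invented for the sketch:

```ruby
# A pure function: the result depends only on the argument,
# and calling it changes nothing else.
def double(x)
  x * 2
end

# An impure version: it also mutates external state (a side effect),
# so two calls with the same argument are not interchangeable.
$calls = 0
def double_and_count(x)
  $calls += 1
  x * 2
end

puts double(21)             # => 42
puts double(double(3))      # => 12 (a return value used anywhere a value fits)
puts double_and_count(21)   # => 42, but $calls quietly changed
```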
You can do functional programming in languages like Ruby and Lisp, but for research or purity, often it's better to use a purer language. Here's a Haskell example, which computes the factorial of a number:
fact 0 = 1
fact n = n * fact (n - 1)
Then, as expected, you can compute the value like this:
fact 10
Here's a Fibonacci sequence (where each number is the sum of the previous two):
fib 0 = 0
fib 1 = 1
fib n = fib (n-1) + fib (n-2)
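As noted above, you can write the same functions in a language like Ruby; comparing the two shows why the text favors purer languages for this style, since the Haskell pattern matching states each case more directly. A hedged Ruby restatement:

```ruby
# The factorial and Fibonacci definitions, restated with Ruby conditionals.
def fact(n)
  n.zero? ? 1 : n * fact(n - 1)
end

def fib(n)
  n < 2 ? n : fib(n - 1) + fib(n - 2)
end

puts fact(10)   # => 3628800
puts fib(10)    # => 55
```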
Functional languages let you work at a higher level of abstraction. Haskell has good traction in research and academic communities, and seems to be gaining a small, vibrant commercial community. It's easy to teach, and as such, it could provide a doorway into functional programming, much like Pascal provided a doorway to procedural languages.
You can see the power of functional programming in the Erlang language. Developed at Ericsson, Erlang focuses mainly on concurrency. Erlang lets you easily create and use threads, and communicate between them. Erlang also improves distributed computing, because the location of threads is transparent—a thread might be in the same process as another, or on a different machine. It's productive, dynamically typed, garbage collected, and very small. There's been a recent spike of interest in Erlang for applications that need excellent support for concurrency and distribution. It's used in production at some high-profile sites. At this point, Erlang is still in its infancy as a general-purpose language. Users tend to use it in conjunction with C (for better performance), and it doesn't have any real user interface library. Still, Erlang is powerful in its niche, and it could make an impact in the intermediate future, directly or as a derivative.
The Next Big Thing
Of course, the whole premise of this book is arrogant beyond belief. I'm making an incredible number of assumptions and drawing some aggressive conclusions based on little more than a couple of dozen interviews, a keen sense of intuition, and a few massive piles of circumstantial evidence.
Java may need nothing more than a little overhaul. Maybe the problem is in the massively complex libraries, and a few rewrites with some tweaks of the language would extend Java's leadership for 10 more years. Maybe the community's culture doesn't help define our libraries. The driving vendors may do an about-face and focus more on simplifying the 80% path instead of building yet another XML-obsessed framework. The JCP could suddenly start supporting the best existing frameworks based on experience instead of standardizing a good idea that was born in a committee.
Maybe Dion Almaer is right, and the big companies that drive this industry are not remotely interested in moving away from Java, and we'll all be saddled with Java for the foreseeable future.
Maybe Jason Hunter is right, and the next big thing won't be a programming language at all. Maybe Java's all we'll ever need, and we'll use that foundation to move up the abstraction ladder. Maybe Glenn and David are both right and there won't be one next big thing, but lots of next little things, and both metaprogramming and continuations will play a significant role.
I don't know the ultimate answers, so I've leaned on my mentors and peers. The interviews in this book are the opinions of some of the people I respect the most. It's been an honor to share these few pages with them. I'm not ready to say that Java's dead, or that Ruby is next, or that continuation servers will reign supreme. I just know:
- I'm hurting right now, and my customers are, too. It's getting harder and harder to teach my customers to satisfy themselves with Java.
- Certain things, like baby-sitting a relational database with a web-based UI, should be easier in Java, after nearly 10 years of effort, but they're still cumbersome.
- The same people that dozed in conversations about other languages two years ago seem to be paying attention now. My "Beyond Java" talks, at Java conferences, are continually packed.
As for me, my eyes are wide open. I've seen what the alternatives can do. In particular, Ruby on Rails lets me build reliable code fast, and put it in front of my customer with more confidence and frequency. I didn't actively seek an alternative—on the contrary, with four Java books out and a reputation in the Java space, I've got every reason to maintain the status quo. I did find that some of the alternatives are compelling, and make for a smooth transition.
A Charge to You
If you're a Java developer and this message is troubling you, that's natural. You've got good reasons to feel threatened with this challenge of your world view. You may feel even more unsettled when someone challenges the foundation of your livelihood. I'd encourage you to put this book down and do some research of your own.
Look around. When James Duncan Davidson did, he found a language that responded to his needs for low-level user interface development. Stuart Halloway found a language that let his start-up move at the speed of his ideas. Dave Thomas found the foundation for an increasingly important publishing series. Glenn Vanderburg found languages friendlier to his beloved metaprogramming techniques.
If you decide to expand your horizons beyond Java, you may find that I'm right, and some of the alternatives I've explored here, or even some I didn't, unleash you. You'll be surfing the next wave that propels us forward.
If I'm wrong, Java will still be there for you; heck, even COBOL is still there for you. But to you, it won't be the same Java. Other languages will expand your horizons to other approaches, just as a wave of Java developers will bring our unique view of the world with us. If you spend some time in Smalltalk, you'll probably use Java's reflection more, you'll look for more opportunities to invert control by simulating code blocks, and you may well tone down your use of XML. (OK, I may have pushed things too far with that one.) If you explore continuation servers, you may look for a way to simulate that programming style in Java. If you explore Rails, you'll likely learn to pay more attention to defaults and convention. Hibernate, Spring, Struts, servlets, collections, and the JDO could all use these techniques.
Pick up your eyes by learning a language. Expand your mind to something a little more powerful, and a lot more dynamic. Warp your perspective to functional programming or continuations. Annoy your friends with a contrarian's view. Tell them that you don't think the world's flat. There's a whole universe out there, beyond Java. | http://commons.oreilly.com/wiki/index.php/Beyond_Java/Contenders | CC-MAIN-2016-26 | refinedweb | 6,672 | 63.49 |
How can I increase the resolution in a plot saved by
png?
How can I increase the resolution in a plot saved by
Please provide more information about which graphics package you are using.
I’m using the
Plots package with the
pyplot backend.
You can set the size when plotting, e.g.:
x = 1:12; y = randn(12); plot(x,y, size=(800,500)) savefig("filename.png")
Unfortunately, this is only a partial solution. The issue is that when you resize the plot to be larger, all the text stays the same size.
Now I’m looking into how to resize the text…
Looking into it further, it seems like resizing the plots actually seems to break the functionality of
heatmap:
I do not know if there is a way to resize all fonts at once, but you cant set the font size attributes:
titlefontsize, tickfontsize, legendfontsize, guidefontsize
Plots.jl’s heatmap functionality has never been reliable for me with non-default parameters. You may want to try PyPlot.jl for heatmaps–the syntax isn’t as pretty as with Plots, but there’s always a way to get the output you’re looking for. You could also try saving the plot as a vectorized pdf (you’ll need
import Plots.pdf), then converting to png using some external tool. | https://discourse.julialang.org/t/png-resolution/24330 | CC-MAIN-2022-21 | refinedweb | 221 | 64.3 |
#include <sys/pccard.h> int32_t csx_Parse_CISTPL_BATTERY(client_handle_t ch, tuple_t *tu, cistpl_battery_t *cb);_battery_t structure which contains the parsed CISTPL_BATTERY tuple information upon return from this function.
This function parses the Battery Replacement Date tuple, CISTPL_BATTERY, into a form usable by PC Card drivers.
The CISTPL_BATTERY tuple is an optional tuple which shall be present only in PC Cards with battery-backed storage. It indicates the date on which the battery was replaced, and the date on which the battery is expected to need replacement. Only one CISTPL_BATTERY tuple is allowed per PC Card.
The structure members of cistpl_battery_t are:
uint32_t rday; /* date battery last replaced */ uint32_t xday; /* date battery due for replacement */
The fields are defined as follows:
This field indicates the date on which the battery was last replaced.
This field indicates the date on which the battery should be replaced. | https://docs.oracle.com/cd/E36784_01/html/E36886/csx-parse-cistpl-battery-9f.html | CC-MAIN-2021-21 | refinedweb | 142 | 51.58 |
I know how to use return to pass back a value and/or a pointer but would like to do both.
Any examples or help would be greatly appreciated.Any examples or help would be greatly appreciated.Code:
//EX: Not sure I declared ptrStrgPlus4 correctly to have it be
// a pointer to strg+4 (ie '.') on return from the getBoth function
int num; char strg[5]="123.456"; char *ptrStrgPlus4;
x = getBoth(10, strg, *ptrStrgPlus4);
// Would like to return the num*10, plus a pointer to the '.'
int function getBoth(num, string, *stringPtr) {
// I know the next line doesn't work
*stringPtr = string+4; // Return pointer to 4th char in the string
return (num*10);
} | http://cboard.cprogramming.com/c-programming/143561-how-do-you-return-value-pointer-function-printable-thread.html | CC-MAIN-2014-35 | refinedweb | 115 | 73.58 |
Prev
Java Set Experts Index
Headers
Your browser does not support iframes.
Re: Random weighted selection...
From:
Tom Anderson <twic@urchin.earth.li>
Newsgroups:
comp.lang.java.programmer
Date:
Fri, 29 May 2009 00:34:16 +0100
Message-ID:
<alpine.DEB.1.10.0905281552540.23744@urchin.earth.li>
This message is in MIME format. The first part should be readable text,
while the remaining parts are likely unreadable without MIME-aware tools.
---910079544-1363231449-1243525163=:23744
Content-Type: TEXT/PLAIN; CHARSET=ISO-8859-15; FORMAT=flowed
Content-Transfer-Encoding: 8BIT
Content-ID: <alpine.DEB.1.10.0905290026521.14054@urchin.earth.li>
On Wed, 27 May 2009, B?gus Excepti?n wrote:
On May 10, 9:27?am, Tom Anderson <t...@urchin.earth.li> wrote:
interface Job extends Runnable {
? ? ? ? public int priority();
}
Then:
Job pickJob(List<Job> jobs, Random rnd) {
? ? ? ? int totalPriority = 0;
? ? ? ? for (Job job: jobs) totalPriority += job.priority();
? ? ? ? int index = rnd.nextInt(totalPriority);
? ? ? ? for (Job job: jobs) {
? ? ? ? ? ? ? ? index -= job.priority();
? ? ? ? ? ? ? ? if (index < 0) return job;
? ? ? ? }
? ? ? ? assert false; return null; // should be unreachable!
}
This seems to be what Seamus suggested, no?
Yes.
class QueuedJob implements Comparable<QueuedJob> {
? ? ? ? public final Job job;
? ? ? ? public final int serialNumber;
? ? ? ? public QueuedJob(Job job, ?int serialNumber) {
? ? ? ? ? ? ? ? this.job = job
? ? ? ? ? ? ? ? this.serialNumber = serialNumber;
? ? ? ? }
? ? ? ? public int compareTo(QueuedJob that) {
? ? ? ? ? ? ? ? int difference = -compare(this.job.priority(), that.job.priority());
? ? ? ? ? ? ? ? if (difference == 0) difference = compare(this.serialNumber, that.serialNumber);
? ? ? ? ? ? ? ? return difference;
? ? ? ? }
? ? ? ? private int compare(Integer a, Integer b) {
? ? ? ? ? ? ? ? return a.compareto(b);
? ? ? ? }
? ? ? ? public boolean equals(Object obj) {
? ? ? ? ? ? ? ? if ((obj == null) || !(obj instanceof QueuedJob)) return false;
? ? ? ? ? ? ? ? QueuedJob that = (QueuedJob)obj;
? ? ? ? ? ? ? ? return (this.job.equals(that.job)) && (this.serialNumber == that.serialNumber);
? ? ? ? }
}
class JobQueue {
? ? ? ? private SortedSet<QueuedJob> qjobs = new TreeSet<QueuedJob>();
? ? ? ? private int counter = 0;
? ? ? ? private Random rnd = new Random();
? ? ? ? private float p = 1.0 / 3.0; // or whatever
? ? ? ? public void enqueue(Job job) {
? ? ? ? ? ? ? ? qjobs.add(new QueuedJob(job, counter));
? ? ? ? ? ? ? ? ++counter; // ignore obvious problem with wrapping
? ? ? ? }
? ? ? ? public Job pop() {
? ? ? ? ? ? ? ? Iterator<QueuedJob> it = qjobs.iterator();
? ? ? ? ? ? ? ? while (it.hasNext()) {
? ? ? ? ? ? ? ? ? ? ? ? if (rnd.nextFloat() <= p) return pop(it);
? ? ? ? ? ? ? ? }
? ? ? ? ? ? ? ? // might not picky any of the above, so:
? ? ? ? ? ? ? ? return pop(qjobs.iterator());
? ? ? ? }
? ? ? ? private Job pop(Iterator<QueuedJob> it) {
? ? ? ? ? ? ? ? Job job = it.next().job;
? ? ? ? ? ? ? ? it.remove();
? ? ? ? ? ? ? ? remove job;
? ? ? ? }
}
Tom, is there an alternative to the costly and time consuming
iteration?
Yes - if you're talking about implementing the same probabilistic
mechanism as in the code immediately above. Instead of a TreeSet, use an
ArrayList, and keep it sorted, so that you have O(1) access to any
element. You then think about the maths: in the above scheme, every
element has a probability of being picked, and the probabilities for all
elements plus the possibility of none being picked sum to one (of course).
That is, there's a cumulative distribution over the options. You just have
to figure out what the function is, reverse it, then you can inject a
random number between 0 and 1, and get out an item, with exactly the same
probability distribution as with the iterative approach.
Having done a bit of scribbling on the back of an index card and a python
prompt, i think the magic function (where p is your probability of picking
at every step, and x is a random number from a uniform distribution
between 0 and 1) is:
index(x) = ceil(log_p(x))
And if index(x) >= number of items, then take it as 0.
Bear in mind that:
log_a(b) = log(b) / log(a)
Java doesn't have a function to do logarithms to an arbitrary base, but
that identity will let you compute them.
This was not part of what I had recalled in the initial post in this
thread, but...
What is the jobs were in an ArrayList, or other Collection(), and the
basic parameters were known about the list/collection/array.
You know what is easy/fast to learn:
The total # of elements
The highest priority
The lowest priority
How many elements will be selected (pulled out of the array)
Why couldn't you do _one_ iteration, and make your choice as to
whether to include an item at the same time that item comes up in the
iteration?
If you need to pick several items, rather than just one, then yes, you
could do them all at once.
Would sorting/ordering even be necessary? Without testing, that would
seem to be the least task intensive way to make a selection- even if not
a random weighted one.
If you're following the first algorithm, as at the top of this post, the
jobs don't need to be sorted. You do need to iterate over them. You can
pick several jobs at once (although you have to be a bit careful not to
pick the same job more than once).
You could also do something clever with enfilade trees that would let you
do an O(log n) lookup instead of an O(n) iteration, but that's a whole
other story.
tom
--
China Mieville has shown us how to be a good socialist and a bad science
fiction writer. -- The Times
---910079544-1363231449-1243525163=:23744--
Generated by PreciseInfo ™
"We know the powers that are defyikng) | https://preciseinfo.org/Convert/Articles_Java/Set_Experts/Java-Set-Experts-090529023416.html | CC-MAIN-2022-27 | refinedweb | 854 | 60.21 |
How to Write Command Line Programs Using Multiple Libraries in Swift
Writing a command line program in Swift is not hard. As like any other programs in Swift, you just turn on Xcode, code some, and run it. Then it’s done.
But, if you want to use some libraries in the program, now it becomes a trouble. Because Xcode 7 and 8 does not provide static library target for Swift yet. Which means, your command-line program executable will require separately packaged dynamic library, and that sucks.
Anyway, Swift team is providing another approach to write command-line programs. It’s “Swift Package Manager”. (or SwiftPM in short).
I have never before used a package manager to develop a multi-componented program. Becuse I have been using the IDEs only in my entire programming life. So at this time, I decided to try this package manager based programming.
See this to see how to use SwiftPM for single package (with no library) executable.
*In this article, I assume you’re familiar with basic shell commands.medium.com
To try multicomponent program, I set up some directories.
mkdir app2
mkdir lib3
Now you have these directories.
./
./app2
./lib3
Initialize executable package.
cd app2
swift package init --type executable
cd ..
And library packages.
cd lib3
swift package init --type library
cd ..
Here’s pitfall. A Swift package must be a Git repository. More specifically, it must a tagged-commit. But SwiftPM doesn’t do this automatically, so you need to make it into a Git repo, commit, and tag a version.
git init
git add . -A
git commit -a -m "Ready."
git tag 0.0.0
Configuring Dependencies
Go to application directory and edit `Package.swift`.
cd app2
vi Package.swift
Put this code in the file.
import PackageDescription
let package = Package(
name: "app2",
targets: [],
dependencies: [
.Package(url: "./../lib1", majorVersion: 0),
]
)
Notice that
url field must be a URL to a Git repository. It is supposed and should be a URL like “”, but for now, I used local file path to simplify trial procedure.
Also let’s edit source code.
vi Sources/main.swift
Put this code in the file.
@testable import lib3
print(lib3())
Generated library project contains
lib3 type, but it’s internal access only. So I used
@testable keyword to use them for now to get it.
Ok. Now load the dependencies.
swift package update
And build and run.
swift build
.build/debug/app2
You will see something like this.
lib3(text: “Hello, World!”)
Anyway, this code works only in debug build because I used
@testable keyword for quick try. If you build this project in release mode, this won’t work. Let’s fix this now.
First, go back to library directory, and update library code.
cd ..
cd lib3
vi Sources/lib3.swift
To expose the
lib3 type
publicly.
public struct lib3 {
public var text = "Hello, World!"
public init() {}
}
Don’t forget to add a
public initializer.
Add, commit and tag the commit.
git add . -A
git commit -a -m "Expose 'lib3' publicly."
git tag 0.0.1
Notice on the new tag name. It’s revision number has been increased. So SwiftPM will pull this version automatically.
Move to app directory, and update code.
cd ..
cd app2
vi Sources/main.swift
It should become like this.
import lib3
print(lib3())
Update dependencies, and build again.
cd ..
cd app2
swift package update
swift build
.build/debug/app2
Of course, now it will work in release build.
swift build --configuration release
.build/release/app2
Done. | https://medium.com/@eonil/how-to-write-command-line-programs-using-multiple-libraries-in-swift-302b26af97b | CC-MAIN-2017-43 | refinedweb | 587 | 70.8 |
Summary
Groups features based on feature attributes and optional spatial or temporal constraints.
Learn more about how Grouping Analysis works
Illustration
Usage
This tool produces an output feature class with the fields used in the analysis plus a new integer field named SS_GROUP. Default rendering is based on the SS_GROUP field and shows you which group each feature falls into. If you indicate that you want three groups, for example, each record will contain a 1, 2, or 3 for the SS_GROUP field. When No spatial constraint is selected for the Spatial Constraints parameter, the output feature class will also contain a new binary field called SS_SEED. The SS_SEED field indicates which features were used as starting points to grow groups. The number of nonzero values in the SS_SEED field will match the value you entered for the Number of Groups parameter.
This tool will optionally create a PDF report file when you specify a path for the Output Report File parameter. This report contains a variety of tables and graphs to help you understand the characteristics of the groups identified. The path to the PDF report will be included with the messages summarizing the tool execution parameters. Clicking on that path will pop open the report file. You may access the messages by hovering over the progress bar, clicking on the pop-out button, or expanding the messages section in the Geoprocessing pane. You may also access the messages for a previous run of Grouping Analysis via the Geoprocessing.
The Unique ID Field provides a way for you to link records in the Output Feature Class back to data in the original input feature class. Consequently, the Unique ID Field values must be unique for every feature and typically should be a permanent field that remains with the feature class. If you don't have a Unique ID Field in your dataset, you can easily create one by adding a new integer field to your feature class table and calculating the field values to be equal to the FID/OID field. You cannot use the FID/OID field directly for the Unique ID Field parameter.
The Analysis Fields should be numeric and should contain a variety of values. Fields with no variation (that is, the same value for every record) will be dropped from the analysis but will be included in the Output Feature Class. Categorical fields may be used with the Grouping Analysis tool if they are represented as dummy variables (a value of one for all features in a category and zeros for all other features).
The Grouping Analysis tool will construct groups with or without space or time constraints. For some applications you may not want to impose contiguity or other proximity requirements on the groups created. In those cases, you will set the Spatial Constraints parameter to No spatial constraint.
For some analyses, you will want groups to be spatially contiguous. The contiguity options are enabled for polygon feature classes and indicate features can only be part of the same group if they share an edge (Contiguity edges only) or if they share either an edge or a vertex (Contiguity edges corners) with another member of the group.
The Delaunay triangulation and K nearest neighbors options are appropriate for point or polygon features when you want to ensure all group members are proximal. These options indicate that a feature will only be included in a group if at least one other feature is a natural neighbor (Delaunay triangulation) or a K nearest neighbor. K is the number of neighbors to consider and is specified using the Number of Neighbors parameter.
In order to create groups with both space and time constraints, use the Generate Spatial Weights Matrix tool to first create a spatial weights matrix file (.swm) defining the space-time relationships among your features. Next run Grouping Analysis, setting the Spatial Constraints parameter to Get spatial weights from file and the Spatial Weights Matrix File parameter to the SWM file you created.
In order to create three-dimensional groups that take into consideration the z-values of your features, use the Generate Spatial Weights Matrix tool with the Use Z values parameter checked on to first create a spatial weights matrix file (.swm) defining the 3D relationships among your features. Next, run Grouping Analysis, setting the Spatial Constraints parameter to Get spatial weights from file and the Spatial Weights Matrix File parameter to the SWM file you created.
Additional Spatial Constraints, such as fixed distance, may be imposed by using the Generate Spatial Weights Matrix tool to first create an SWM file and then providing the path to that file for the Spatial Weights Matrix File parameter.
Defining a spatial constraint ensures compact, contiguous, or proximal groups. Including spatial variables in your list of Analysis Fields can also encourage these group attributes. Examples of spatial variables would be distance to freeway on-ramps, accessibility to job openings, proximity to shopping opportunities, measures of connectivity, and even coordinates (X, Y). Including variables representing time, day of the week, or temporal distance can encourage temporal compactness among group members.
When there is a distinct spatial pattern to your features (an example would be three separate, spatially distinct clusters), it can complicate the spatially constrained grouping algorithm. Consequently, the grouping algorithm first determines if there are any disconnected groups. If the number of disconnected groups is larger than the Number of Groups specified, the tool cannot solve and will fail with an appropriate error message. If the number of disconnected groups is exactly the same as the Number of Groups specified, the spatial configuration of the features alone determines group results, as shown in (A) below. If the Number of Groups specified is larger than the number of disconnected groups, grouping begins with the disconnected groups already determined. For example, if there are three disconnected groups and the Number of Groups specified is 4, one of the three groups will be divided to create a fourth group, as shown in (B) below.
In some cases, the Grouping Analysis tool will not be able to meet the spatial constraints imposed, and some features will not be included with any group (the SS_GROUP value will be -9999 with hollow rendering). This happens if there are features with no neighbors. To avoid this, use K nearest neighbors, which ensures all features have neighbors. Increasing the Number of Neighbors parameter will help resolve issues with disconnected groups.
While there is a tendency to want to include as many Analysis Fields as possible, for this tool, it works best to start with a single variable and build. Results are much easier to interpret with fewer analysis fields. It is also easier to determine which variables are the best discriminators when there are fewer fields.
When you select No spatial constraint for the Spatial Constraints parameter, you have three options for the Initialization Method: Find seed locations, Get seeds from field, and Use random seeds. Seeds are the features used to grow individual groups. If, for example, you enter a 3 for the Number of Groups parameter, the analysis will begin with three seed features. The default option, Find seed locations, randomly selects the first seed and makes sure that the subsequent seeds selected represent features that are far away from each other in data space. Selecting initial seeds that capture different areas of data space improves performance. Sometimes you know that specific features reflect distinct characteristics that you want represented by different groups. In that case, create a seed field to identify those distinctive features. The seed field you create should have zeros for all but the initial seed features; the initial seed features should have a value of 1. You will then select Get seeds from field for the Initialization Method parameter. If you are interested in doing some kind of sensitivity analysis to see which features are always found in the same group, you might select the Use random seeds option for the Initialization Method parameter. For this option, all of the seed features are randomly selected.
Any values of 1 in the Initialization Field will be interpreted as a seed. If there are more seed features than Number of Groups, the seed features will be randomly selected from those identified by the Initialization Field. If there are fewer seed features than specified by Number of Groups, the additional seed features will be selected so they are far away (in data space) from those identified by the Initialization Field.
Sometimes you know the Number of Groups most appropriate for your data. In the case that you don't, however, you may have to try different numbers of groups, noting which values provide the best group differentiation. When you check the Evaluate Optimal Number of Groups parameter, a pseudo F-statistic will be computed for grouping solutions with 2 through 15 groups. If no other criteria guide your choice for Number of Groups, use a number associated with one of the largest pseudo F-statistic values. The largest F-statistic values indicate solutions that perform best at maximizing both within-group similarities and between-group differences. When you specify an optional Output Report File, that PDF report will include a graph showing the F-statistic values for solutions with 2 through 15 groups.
Regardless of the Number of Groups you specify, the tool will stop if division into additional groups becomes arbitrary. Suppose, for example, that your data consists of three spatially clustered polygons and a single analysis field. If all the features in a cluster have the same analysis field value, it becomes arbitrary how any one of the individual clusters is divided after three groups have been created. If you specify more than three groups in this situation, the tool will still only create three groups. As long as at least one of the analysis fields in a group has some variation of values, division into additional groups can continue.
Groups will not be divided further if there is no variation in the analysis field values.
When you include a spatial or space-time constraint in your analysis, the pseudo F-Statistics are comparable (as long as the Input Features and Analysis Fields don't change). Consequently, you can use the F-Statistic values to determine not only optimal Number of Groups but also to help you make choices about the most effective Spatial Constraints option, Distance Method, and Number of Neighbors.
The K-Means algorithm used to partition features into groups when No spatial constraint is selected for the Spatial Constraints parameter and Find seed locations or Use random seeds is selected for the Initialization Method incorporates heuristics and may return a different result each time you run the tool (even using the same data and the same tool parameters). This is because there is a random component to finding the initial seed features used to grow the groups.
When a spatial constraint is imposed, there is no random component to the algorithm, so a single pseudo F-Statistic can be computed for groups 2 through 15, and the highest F-Statistic values can be used to determine the optimal Number of Groups for your analysis. Because the No spatial constraint option is a heuristic solution, however, determining the optimal number of groups is more involved. The F-Statistic may be different each time the tool is run, due to different initial seed features. When a distinct pattern exists in your data, however, solutions from one run to the next will be more consistent. Consequently, to help determine the optimal number of groups when the No spatial constraint option is selected, the tool solves the grouping analysis 10 times for 2, 3, 4, and up to 15 groups. Information about the distribution of these 10 solutions is then reported (min, max, mean, and median) to help you determine an optimal number of groups for your analysis.
The Grouping Analysis tool returns three derived output values for potential use in custom models and scripts. These are the pseudo F-Statistic for the Number of Groups (Output_FStat), the largest pseudo F-Statistic for groups 2 through 15 (Max_FStat), and the number of groups associated with the largest pseudo F-Statistic value (Max_FStat_Group). When you do not elect to Evaluate Optimal Number of Groups, all of the derived output variables are set to None.
The group number assigned to a set of features may change from one run to the next. For example, suppose you partition features into two groups based on an income variable. The first time you run the analysis you might see the high income features labeled as group 2 and the low income features labeled as group 1; the second time you run the same analysis, the high income features might be labeled as group 1. You might also see that some of the middle income features switch group membership from one run to another when No spatial constraint is specified.
While you can select to create a very large number of different groups, in most scenarios you will likely be partitioning features into just a few groups. Because the graphs and maps become difficult to interpret with lots of groups, no report is created when you enter a value larger than 15 for the Number of Groups parameter or select more than 15 Analysis Fields. You can increase this limitation on the maximum number of groups, however.
On machines configured with the ArcGIS language packages for Arabic and other right-to-left languages, you might notice missing text or formatting problems in the PDF Output Report File. These problems are addressed in this article.
For more information about the Output Report File, see Learn more about how Grouping Analysis works.
Syntax
GroupingAnalysis_stats (Input_Features, Unique_ID_Field, Output_Feature_Class, Number_of_Groups, Analysis_Fields, Spatial_Constraints, {Distance_Method}, {Number_of_Neighbors}, {Weights_Matrix_File}, {Initialization_Method}, {Initialization_Field}, {Output_Report_File}, {Evaluate_Optimal_Number_of_Groups})
Code sample
GroupingAnalysis example 1 (Python window)
The following Python window script demonstrates how to use the GroupingAnalysis tool.
import arcpy import arcpy.stats as SS arcpy.env.workspace = r"C:\GA" SS.GroupingAnalysis("Dist_Vandalism.shp", "TARGET_FID", "outGSF.shp", "4", "Join_Count;TOTPOP_CY;VACANT_CY;UNEMP_CY", "NO_SPATIAL_CONSRAINT", "EUCLIDEAN", "", "", "FIND_SEED_LOCATIONS", "", "outGSF.pdf", "DO_NOT_EVALUATE")
GroupingAnalysis example 2 (stand-alone script)
The following stand-alone Python script demonstrates how to use the GroupingAnalysis tool.
# Grouping Analysis of Vandalism data in a metropolitan area # using the Grouping Analysis Tool # Import system modules import arcpy, os import arcpy.stats as SS # Set geoprocessor object property to overwrite existing output, by default arcpy.gp.overwriteOutput = True try: # Set the current workspace (to avoid having to specify the full path to # the feature classes each time) arcpy.env.workspace = r"C:\GA" # Join the 911 Call Point feature class to the Block Group Polygon feature class # Process: Spatial Join fieldMappings = arcpy.FieldMappings() fieldMappings.addTable("ReportingDistricts.shp") fieldMappings.addTable("Vandalism2006.shp") sj = arcpy.SpatialJoin_analysis("ReportingDistricts.shp", "Vandalism2006.shp", "Dist_Vand.shp", "JOIN_ONE_TO_ONE", "KEEP_ALL", fieldMappings, "COMPLETELY_CONTAINS", "", "") # Use Grouping Analysis tool to create groups based on different variables or analysis fields # Process: Group Similar Features ga = SS.GroupingAnalysis("Dist_Vand.shp", "TARGET_FID", "outGSF.shp", "4", "Join_Count;TOTPOP_CY;VACANT_CY;UNEMP_CY", "NO_SPATIAL_CONSRAINT", "EUCLIDEAN", "", "", "FIND_SEED_LOCATIONS", "", "outGSF.pdf", "DO_NOT_EVALUATE") # Use Summary Statistic tool to get the Mean of variables used to group # Process: Summary Statistics SumStat = arcpy.Statistics_analysis("outGSF.shp", "outSS", "Join_Count MEAN; \ VACANT_CY MEAN;TOTPOP_CY MEAN;UNEMP_CY MEAN", "GSF_GROUP") except: # If an error occurred when running the tool, print out the error message. print(arcpy.GetMessages())
Environments
Licensing information
- ArcGIS Desktop Basic: Yes
- ArcGIS Desktop Standard: Yes
- ArcGIS Desktop Advanced: Yes | http://pro.arcgis.com/en/pro-app/tool-reference/spatial-statistics/grouping-analysis.htm | CC-MAIN-2017-34 | refinedweb | 2,583 | 50.06 |
LONDON (ICIS)--Artlant PTA has officially started producing purified terephthalic acid (PTA) in Sines, Portugal.
“Artlant PTA started running last week and it is already supplying its range of clients,” said the Portuguese-based PTA producer formerly known as Artenius Sines.
There are unverified reports that, having started up earlier in March, the 700,000 tonne/year plant encountered technical problems that brought the unit back down again.
Market players say that Artlant PTA is still in search of customers, but this will depend on how many are already under contract with the company.
“Traders say that [Artlant PTA] is desperately looking for customers,” one source said.
The start-up comes at a time when demand for downstream polyethylene terephthalate (PET) normally increases because of the peak bottling season but, this year, March has been a particularly slow month for sales.
“Everyone is struggling with very low demand,” one buyer of PET said, echoing comments made by other players.
PET prices have begun falling following a month of slow off-take. PTA prices have been following the bullish trend of upstream paraxylene (PX).
March prices are likely to increase by around €24/tonne ($32/tonne) from February’s €983–1,008/tonne FD (free delivered) NWE (northwest Europe).
“[Artlant PTA] could not have picked a worse time to come onstream,” a PET customer said.
With PX supply already tight, and no spare material available on the spot market, most players in the European PX industry expect availability to tighten even further as Artlant PTA comes onstream.
Notional PX prices are being talked at $1,520-1,545/tonne FOB (free on board)
One trader said the Artlant PTA start-up would increase PX consumption.
“I believe that a start up of Artlant would certainly have an effect in the market, pushing it up, maybe not in one go but certainly this will start to excite demand.”
However, others said there would be no immediate effect on the PX market as the Sines plant will initially have all the feedstock it requires.
“I don’t think it’ll mean a lot,” another trader said. “For the first weeks and months they will have stock, they have contracts in place.”
The trader added that it could result in fewer volumes of PX being exported to
The PTA unit had been expected to have saleable product between the end of January and the beginning of February, but this was postponed to early March.
Artlant began testing the plant’s installed equipment in the summer of 2011. However since then, sources have said the plant has experienced difficulties starting up.
ICIS reported in February last year that the facility’s completion was originally scheduled for the fourth quarter of 2011.
Artlant PTA said its key market is Europe, but added that its target regions also include Africa, the CIS and Baltic States, the Middle East,
“With this capacity, Artlant PTA will become the second largest European producer, with the most recent and modern industrial unit in the whole of Europe.”
Spanish industrial plastics and packaging group La Seda de Barcelona (LSB) began building the Sines PTA plant in March 2008 with an initial investment of more than €400m ($541m), including €100m of aid from EU funds.
As part of its restructuring, LSB sold 59% of its stake in Artlant to three Portuguese investment funds in September 2010.
PTA is used as the feedstock in the production of polyester polymers, commonly designated PET and mainly used in the manufacture of packaging for the food industry and in the manufacture of polyester fibres for the textiles market.
Additional reporting by Helena Strathearn
($1 = €0.75)
Follow Caroline Murray on Twitter | http://www.icis.com/resources/news/2012/03/30/9546180/portugal-s-artlant-pta-starts-up-in-lacklustre-market/ | CC-MAIN-2014-42 | refinedweb | 613 | 55.98 |
Interview Questions from C#, JavaScript and SQL
Dear Friends,
Recently I attended an interview and got some interesting questions. I'm sharing them here; please reply with the correct answers.
In C#
1. Consider the following classes:
public class ClassA
{
private int _value;
public ClassA()
{
}
public ClassA(int value)
{
_value = value;
}
public int Value
{
get { return _value; }
protected set { _value = value; }
}
}
public class ClassB : ClassA
{
protected ClassB(int value) : base(value * 10)
{ }
}
public class ClassC : ClassB
{
}
Show how many different ways you can create a new instance of each of the classes. (e.g. "StringBuilder sb = new StringBuilder();" is one way to instantiate a StringBuilder class object instance.) Also, after each object instantiation, what will be the value returned by the "Value" property of that object instance?
-----------------------------
2. Let's say that you have a doubly-linked list (a list in which each item has reference to the previous as well as the next item in the list) where each node is represented with the following class:
public class ListItemNode
{
object Value;
ListItemNode NextItem;
ListItemNode PreviousItem;
}
The DoubleLinkedList class has a member variable named "HeadNode" of type ListItemNode. This head node is the first item in the list.
Write a method for the DoubleLinkedList class to reverse the order of the items in the list.
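One way to approach this (sketched in Python rather than the requested C#; the class and member names mirror the question's, and `append`/`to_list` are helper methods added only to make the sketch self-checking): walk the list once, swapping each node's next/previous references, and make the last node visited the new head.

```python
class ListItemNode:
    def __init__(self, value):
        self.value = value
        self.next_item = None
        self.previous_item = None

class DoubleLinkedList:
    def __init__(self):
        self.head_node = None

    def append(self, value):
        # Helper for demonstration: add a node at the tail.
        node = ListItemNode(value)
        if self.head_node is None:
            self.head_node = node
            return
        cur = self.head_node
        while cur.next_item is not None:
            cur = cur.next_item
        cur.next_item = node
        node.previous_item = cur

    def reverse(self):
        # Walk the list, swapping each node's next/previous pointers.
        # The last node visited becomes the new head.
        cur = self.head_node
        new_head = None
        while cur is not None:
            cur.next_item, cur.previous_item = cur.previous_item, cur.next_item
            new_head = cur
            cur = cur.previous_item  # previous_item now holds the old next
        self.head_node = new_head

    def to_list(self):
        # Helper for demonstration: collect values head-to-tail.
        out, cur = [], self.head_node
        while cur is not None:
            out.append(cur.value)
            cur = cur.next_item
        return out
```

The C# version is a direct translation: the swap inside the loop becomes a three-line exchange via a temporary `ListItemNode` variable.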
-------------------------------------------------
3. Consider the following two arrays:
string[] arrStrings = new string[] { "Bal", "Yuriy", "Ken", "Apple", "Ken" };
int[] arrNumbers = new int[] { 11, 2, 25, 34, 66, 25, 0, 0, 3, 4, 2, 89 };
Write a single method which can be passed either one of these arrays and returns the list of unique items in that array. You cannot use LINQ to solve this problem.
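A common approach, sketched in Python (the C# analogue would be a single generic method using a `HashSet<T>`, which is not LINQ): track already-seen items in a set while preserving first-seen order.

```python
def unique_items(items):
    """Return the unique items of `items`, preserving first-seen order.
    Works for any hashable element type (strings, ints, ...)."""
    seen = set()
    result = []
    for item in items:
        if item not in seen:
            seen.add(item)
            result.append(item)
    return result

print(unique_items(["Bal", "Yuriy", "Ken", "Apple", "Ken"]))
# → ['Bal', 'Yuriy', 'Ken', 'Apple']
```

Because the membership test against the set is O(1) on average, the whole method is O(n), versus O(n²) for the naive nested-loop version.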
-----------------------------------------------
4. Let's say you are developing an API which draws a diagram on a screen canvas. The diagram is composed of different types of shapes – circles, rectangles, lines, etc. Each shape knows how to draw itself. Multiple shapes placed at different locations within the canvas area make up a diagram image. Define an object/class model (set of classes) which can be used to represent the shapes (just target circle, rectangle, and line shape types) and the diagram class. Note that this class model should be extensible so that other types of shapes (like pentagon, star, etc.) can be supported later on without having to change the diagram class. The canvas class (which you don't have to write) will use the diagram object to draw shapes that make up the image. You don't need to write the body of the methods; just the skeleton API design is sufficient. This should demonstrate common OOP design skills.
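One possible skeleton for such a model, illustrated in Python (in C#, `Shape` would be an abstract base class or interface exposing a `Draw` method). The `draw` bodies return strings here only so the sketch is runnable; all names are illustrative, not prescribed by the question.

```python
from abc import ABC, abstractmethod

class Shape(ABC):
    """Base class: every shape knows its position and how to draw itself."""
    def __init__(self, x, y):
        self.x = x
        self.y = y

    @abstractmethod
    def draw(self):
        """Emit drawing commands for this shape."""

class Circle(Shape):
    def __init__(self, x, y, radius):
        super().__init__(x, y)
        self.radius = radius

    def draw(self):
        return f"circle({self.x},{self.y},r={self.radius})"

class Rectangle(Shape):
    def __init__(self, x, y, width, height):
        super().__init__(x, y)
        self.width = width
        self.height = height

    def draw(self):
        return f"rect({self.x},{self.y},{self.width}x{self.height})"

class Line(Shape):
    def __init__(self, x, y, x2, y2):
        super().__init__(x, y)
        self.x2 = x2
        self.y2 = y2

    def draw(self):
        return f"line({self.x},{self.y})-({self.x2},{self.y2})"

class Diagram:
    """Holds any mix of shapes. It never needs to change when new shape
    types (pentagon, star, ...) are added, because it depends only on
    the Shape interface, not on concrete classes."""
    def __init__(self):
        self.shapes = []

    def add(self, shape):
        self.shapes.append(shape)

    def draw(self):
        return [s.draw() for s in self.shapes]
```

The key design point is the Open/Closed Principle: extensibility comes from `Diagram` iterating over the abstract `Shape` type and delegating drawing to each instance.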
In JavaScript
1. Write a JavaScript/jQuery statement to retrieve the value of a DIV element with ID "FullName" and an INPUT field with the ID "FirstName".
----------------------------
2. Write a JavaScript/jQuery statement to update the value of a DIV element with ID "FullName" and an INPUT field with the ID "FirstName".
-----------------------------------
3. Write a JavaScript/jquery statement to retrieve the 3rd button object on the page.
--------------------------------------
In SQL
1. Consider the following database table:
Promotions(CustomerID, PromotionCode)
This table has a unique index based on CustomerID and PromotionCode columns.
The table contains records for the customers who received any promotions. Let's say two of the promotions are named "PromA" and "PromB".
Write the code for the following SQL queries:
a) Build the query that will show unique Customer IDs of customers who received
'PromA', but didn't receive 'PromB'.
b) Build the query that will show the number of customers who received each
Code stored in the table. Display Promotion Code and number of customers who
received that promotion. However do not display Promotion Codes that were sent to
small number of customers (less than 30).
c) Will the following query utilize the index mentioned above?
SELECT CustomerID FROM Promotions WHERE PromotionCode = 'PromA'
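Here is a hedged sketch of queries (a) and (b), demonstrated with Python's built-in sqlite3 on toy data. The threshold from (b) is lowered from 30 to 2 so the small sample returns rows; exact syntax may vary slightly between database engines.

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE Promotions (CustomerID INTEGER, PromotionCode TEXT, "
            "UNIQUE (CustomerID, PromotionCode))")
con.executemany("INSERT INTO Promotions VALUES (?, ?)",
                [(1, "PromA"), (1, "PromB"), (2, "PromA"),
                 (3, "PromB"), (4, "PromA")])

# a) Unique customers who received 'PromA' but not 'PromB'
query_a = """
    SELECT DISTINCT p.CustomerID
    FROM Promotions p
    WHERE p.PromotionCode = 'PromA'
      AND NOT EXISTS (SELECT 1 FROM Promotions q
                      WHERE q.CustomerID = p.CustomerID
                        AND q.PromotionCode = 'PromB')
    ORDER BY p.CustomerID
"""
print([row[0] for row in con.execute(query_a)])  # [2, 4]

# b) Promotion codes with their customer counts, skipping small promotions
# (threshold lowered to 2 here so the toy data returns rows)
query_b = """
    SELECT PromotionCode, COUNT(DISTINCT CustomerID) AS NumCustomers
    FROM Promotions
    GROUP BY PromotionCode
    HAVING COUNT(DISTINCT CustomerID) >= 2
    ORDER BY PromotionCode
"""
print(list(con.execute(query_b)))  # [('PromA', 3), ('PromB', 2)]
```

Regarding (c): since the unique index described starts with CustomerID, a filter on PromotionCode alone cannot seek on the leading column. Depending on the engine, it may still scan the index (which covers both columns) rather than the base table, but it generally will not use the index as an efficient seek.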
-------------------------------------
2. Write the code that will give a flat hike to your employees using the following criteria:
EmployeeSalary(EID, Name, Salary)
Salary between 30000 and 40000 -- 5000 hike
Salary between 40000 and 55000 -- 7000 hike
Salary between 55000 and 65000 -- 9000 hike
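One way to express the hike as a single UPDATE with a CASE expression, demonstrated here via Python's sqlite3 with hypothetical sample rows. Note that the question's bands overlap at 40000 and 55000 (BETWEEN is inclusive); since CASE branches are evaluated top-down, those boundary salaries fall into the first matching band.

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE EmployeeSalary (EID INTEGER, Name TEXT, Salary INTEGER)")
con.executemany("INSERT INTO EmployeeSalary VALUES (?, ?, ?)",
                [(1, "Ann", 32000), (2, "Bob", 40000),
                 (3, "Cal", 60000), (4, "Dee", 70000)])

# CASE branches are evaluated top-down, so the inclusive boundary values
# (40000, 55000) get the hike of the first band that matches.
con.execute("""
    UPDATE EmployeeSalary
    SET Salary = Salary + CASE
        WHEN Salary BETWEEN 30000 AND 40000 THEN 5000
        WHEN Salary BETWEEN 40000 AND 55000 THEN 7000
        WHEN Salary BETWEEN 55000 AND 65000 THEN 9000
        ELSE 0
    END
""")
print(list(con.execute("SELECT Name, Salary FROM EmployeeSalary ORDER BY EID")))
# [('Ann', 37000), ('Bob', 45000), ('Cal', 69000), ('Dee', 70000)]
```

The same UPDATE statement works essentially unchanged on SQL Server or MySQL; only the surrounding harness is Python-specific.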
-------------------------------------------
Thanks in advance :) | http://www.dotnetspider.com/forum/346350-Interview-Questions-from-C-JavaScript-and-SQL.aspx | CC-MAIN-2019-13 | refinedweb | 681 | 68.6 |
Any of you who follow the forum will know that Caz software have put a good chunk of work into Buddy over the last year.
Caz are using SqlBuddy internally for their own needs and in return are incorporating bug fixes back into the code base. Cheers guys :-)
I've decided that a new release is way way overdue, hence 0.0.70. This release includes many bug fixes and a few enhancements.
I can only apologise for not doing this sooner; I'm contracting now and time for such hobby projects is rather short. It's great that others are willing to contribute to the project, and I hope we can get more folk on board to push it forward.
Yep, after a long summer we've decided to put a few hours in and do a new release! See the release notes for changes. Admittedly, it's Sunday evening and I've rushed the release a little, so let me know if there are any problems.
I'd like to also use this opportunity to welcome Thomas Hansen to the team. Thomas is hoping to use Buddy at his company, and is therefore making some much needed bug fixes and improvements!
I've had a number of enquiries about the Buddy project being dead. This isn't the case!!! Unfortunately work has paused for summer, because I decided to see as much of the Sun, my girlfriend, and the world as possible! However, I am now working on Buddy again.
There's one catch though... I'm actually putting effort into a re-write that supports PostgreSQL, SQL Server 2000, Access 2000 (and next MySQL!). I haven't made this Open Source yet on here, because I want to get it to a stage where it's ready for group development, and at least matches the current Buddy on features.
As promised, we're releasing more regularly. Buddy is undergoing some drastic changes, more specifically:
- New editor! The kind chaps from SharpDevelop have allowed us to use their amazing text editor. We've only scratched the surface with it, but soon we'll have many funky features in addition to the syntax colouring.
- SqlHelper changes - this now has icons, and is slightly more intelligent when offering help. It's being re-written, so doesn't function as smoothly as it should.
Well, it's been too long since our last release (12 December). We've decided to try and release every 2 weeks. This means that releases will be smaller, but at least you get the new features as-and-when they arrive.
The new Buddy is looking fairly promising. We've added:
- a basic database explorer
- some help
- basic schema reporting
- lots more!
So check it out, and provide feedback. We're still on the lookout for people, so please feel free to jump on board if you're interested. We're currently interested in:
Yep, we thought it was time for another release. This program is still in its early days, but we're making some good progress.
For those who use SqlServer or MSDE, this should show some potential of being a useful tool.
So come and check out the new release; we've got the foundations of the following features in:
- Syntax Highlighting ( basic, but ok )
- New VS.NET menus
- A File Explorer
- Open/Save Sql Files
- A Scratchpad
Well, Jonathan and I have taken the project a few more steps further in this release - version 0.0.6 Alpha.
SqlBuddy now allows you to make multiple selects, edit your results, copy them as HTML or XML. The SqlHelper popups are getting better too. For example, you can now select multiple items in the list.
We feel that some of the conceptual issues such as the sql-history are much better now. I had a lot of re-wiring to do on that one!
I decided that the code was too complicated, so I have refactored it a little to make it better. If anyone wants to get involved, it should be easier now! Improvements include:
- Is one VS project file rather than two.
- Old, redundant c# classes removed
- Many classes renamed to make more sense.
- Confusing namespaces renamed.
- helpful Readme.txt files throughout.
The CVS tree for SqlBuddy is now set up correctly! Or at least much better! Whoopie - it wasn't easy for a newbie like myself.
Having learnt a little more about CVS (luckily work granted me 2 days of research), I have managed to configure it so that it doesn't screw up the VS.NET project file. So, you should be able to download and compile without too much messing around (although I suspect the references may need pointing to the bin folder).
Talk:Wiki organisation
Discuss Wiki organisation here:
Page Name conventions/guidelines
This looks like a great place to develop some Page Name conventions/guidelines. I've linked to it from the relevant sections on the talk pages for both the Cleanup and Guidelines pages. I've also added it to the category to give it a bit more exposure, it previously only being found upon a search or read of the initial author's talk page.
Martin Renvoize 10:56, 7 December 2009 (UTC)
Copy of my suggestions from other talk pages.
A Wiki page naming policy really needs to be created and mentioned somewhere, probably within this guidelines page. It would be nice to all be singing from the same hymn sheet.
What types of pages are on the Wiki and what types of pages do we want? Martin Renvoize 12:48, 3 December 2009 (UTC)
My thoughts so far, but please add/subtract to the list;
- MapProject - All pages related to actually mapping! i.e Places, Types (i.e Cycle Network, Countryside, Walking Networks etc), etc. A description of the task, status and persons involved should go here along with links to "Sub Projects" if necessary.
- Parties are somewhat related to this.
- WikiProject - All pages related directly to Wiki maintenance. i.e Wiki Clean Up, Wiki Suggestions, Wiki Notices, Template Suggestions. etc. The technical Wiki Stuff ONLY.
- OSMProject - Those pages relating directly to OSM, i.e an introduction page, the beginners guides, tagging information, tag proposals, api's and developer links, press initiative. Anything directly tied to the actual OSM concept and technology, NOT just loosely tied by the actual data.
Finally
- Portal Pages. Like a giant introduction page for each page type with the category list included. I have no idea what the best way to implement portals is, but I think many people don't really understand the category system. I think that having the "top level" category for each page type as much more of a page would be more intuitive than having a separate page linked to from the category. Does that make sense? An example is the WikiProject England page. I've converted it into a "Portal" as such, but it would work much better if the overall information was within the "Category" page with a nice auto-updating list of related pages (i.e. the categories).
I think the easiest way to implement this would be to create some templates/guidelines specific to each page type.
- Are you suggesting page name prefixes? or just clarity of the type of page we're dealing with?
- I've said before, I don't really like the prefix "WikiProject", and would support getting rid of it (although getting rid of it would be a massive hassle), so that for example WikiProject United Kingdom becomes United Kingdom. Easier and more elegant for linking to, and consistent with other pages such as Cambridge which seem to work just fine without any messy prefix.
- For the same reason I don't like other ideas for page name prefixes. They detract from the simplicity of the wiki, making it harder and more wordy to link to pages. It's an attempt to create order and hierarchy within page names, when order and hierarchy should be evident in the linking structure, and the categories. It solves a problem which doesn't really exist. People worry about page name clashes, but if somebody wants to create a piece of OSM software called 'Cambridge', then we simply name their wiki page "Cambridge software" and put a disambiguation link at the top of the place page. ...or if the software begins to rival the place in terms of significance to OSMers, we might decide to move the current page to "Cambridge (City)" (Hypothetical example. Calling software "Cambridge" would clearly be stupid!) The point is we don't need prefixes or sub pages or any of that page naming cruft.
- Clarity of the type of page we're dealing with, would be a good thing, and I like the idea of having page structure examples for people to follow.
- I find category pages (actual pages in the 'Category' namespace) to be much easier to understand if they contain no more than a few sentences at the top of the page, followed by the auto-generated list. But I wonder if you can "transclude" category lists. not sure.
- -- Harry Wood 12:21, 8 December 2009 (UTC)
Thanks for the reply Harry, it is very enlightening. I agree, I don't like the naming "cruft", and wasn't suggesting more prefixes (the idea did initially start as prefixes, but no longer), just clarity of the type of page we're dealing with! I'd love to get rid of all the WikiProject prefixes, 907 to go and counting!
Structured examples were exactly what I was getting at; partly for people seeking advice on how to create a page, but also for those cleaning up the Wiki, as something to loosely conform to. For example, I've looked at a number of pages and thought, "That needs a cleanup", but then struggled to work out what should go where and whether the page should be merged into others, or split into separate bits.
As for my idea of what a portal should be. I agree, would be better to "transclude" the category for this aim. Still a "work in progress" in my head for now.
Martin Renvoize 23:08, 8 December 2009 (UTC)
Moved from Talk:Wiki_guidelines:
- How about instead of changing "WikiProject" to "MapProject" for mapping projects, we change instead to "Mapping"? So instead of "WikiProject England" or "MapProject England", why not the short and catchy "Mapping England"? I admit that MappingDC provided the inspiration. :-) --seav 11:04, 19 January 2010 (UTC)
- I think this has already been kinda answered above. So far the consensus seems to point to removal of prefixes, but a clearer set of templates on what each page type is used for? --Martin Renvoize 11:25, 19 January 2010 (UTC)
- Well that's my preferred page naming approach. I'd prefer the page to be called "England" rather than "WikiProject England" or "Mapping England", BUT...
- The hassle involved in trying to shunt all the links over to a more elegant short name... It just makes me recoil in horror to think about it. It's not something I want to attempt, and I would not recommend anyone else attempting it either because if it were half finished, or done badly it will just annoy a lot of people, for a benefit which is rather subtle. I don't like the prefix, but there are bigger problems to worry about. Maybe this kind of mass moving/relinking task could work as a Wiki Cleanup Drive activity.
- -- Harry Wood 17:35, 27 August 2010 (BST)
Groups?
Should there be a template for groups, perhaps a redirect to a Map Features section or a Category? I couldn't help noticing that height=* is in group Descriptions and not group Properties, which might or might not be blamed for its lack of appearance in Map Features. --goldfndr 10:24, 15 December 2009 (UTC)
Hmm, good point, although I'm not actually sure what a group is, being fairly new here. Enlighten me as to how you think it fits into the structure. Is it a mapping data type (like tags, keys, nodes, relations, etc.) or a wiki-only data type (i.e. categories; I can't think of another)?--Martin Renvoize 18:18, 15 December 2009 (UTC)
Categories for places
Why do we have auto-generated categories for Towns in Some Area, Cities in Some Area etc. and not just Category:Some Area with all those towns/cities/villages/suburbs inside? -- Zverik 17:48, 27 August 2010 (BST)
The various wiki purposes
This text used to be at the top of the WikiProject Cleanup page. I think it got "cleaned up", but it reflects how I see wiki organisation:
"
We are using the wiki for various purposes:
- PR material - presenting us to the public (introductions and promotional material)
- Help information - to help newbies use our data, and/or get involved in the project.
- Manuals/HOWTos - Information going into some detail on using OSM software
- Development Information - detailed technical information for developing
- Community at Locations - events lists, and links to local community contact channels, photos etc
- Mapping status at locations - keeping track of mapping progress for a specific location and coordinating mapping efforts
These are not clear categories. A trail of links could easily cut across from PR Material to Help information to Manuals to Development pages. Indeed one page may be aiming to fulfil several wiki purposes. This is not necessarily a bad thing, and certainly with a web structured wiki it is somewhat inevitable, but it does present a problem when trying to get a clear view of the overall structure.
"
Basically I don't see the organisation as a very rigid thing, and I think any attempt to make it so may be doomed to failure anyway. The trick is to try and find a way of being both flexible and well organised.
-- Harry Wood 12:21, 8 December 2009 (UTC)
- I don't understand the "Information on Locations" purpose. Can you give an example (page)? --Cantho (talk) 09:18, 27 January 2014 (UTC)
- OK, actually I've just changed "Information on Locations" to "Community at Locations", which hopefully is clearer. I was actually originally thinking of a slightly different purpose, which a lot of "city pages" are tackling at the moment, but ... it's a bit out-dated and pointless.
- Way back in 2006 people were creating wiki pages about different cities purely to give us a place to put a sample rendering image. This was before openstreetmap had a map display! There's obviously less need for that nowadays, but still quite a lot of wiki pages which do nothing but show a map (at least we're mostly showing auto-updated maps nowadays) and say things like "X is a city in Y". I've heard it argued that this might be useful in itself, as a google rankings trick, but mostly I'd say that kind of generic information is a bit useless nowadays except as a placeholder for "Community at locations" and "Mapping status at locations"
- ...and actually "Community at locations" is far more important than "Mapping status at locations". I haven't really seen very successful examples of mapping status information being kept up to date in a way which helps the community much, except perhaps in German cities. The best purpose a wiki page about a city can fulfil is stuff about how to contact the community, and possibly a space for organising events (but like the mapping status info, events info is only as good as the people willing to maintain it). Increasingly I'm thinking these wiki pages should be trimmed down to just link to facebook pages / twitter accounts / wherever local communities are actually congregating.
- -- Harry Wood (talk) 18:02, 2 September 2014 (UTC)
- I'm not sure if there are cities with a living OSM community and an empty template wiki page for the city. Is that what you're aiming at? --Cantho (talk) 15:16, 14 September 2014 (UTC)
- There's a clean-up target there for sure. Empty template pages, or otherwise poorly maintained wiki pages. There's plenty of examples of quite lively local communities at city or even country level, for which the wiki totally fails to reflect what's going on. Failed over-ambitious attempts at doing "Mapping status at locations" and not enough "Community at Locations". e.g. the page has some crappy old mapping status tracking table which hasn't been updated since 2008, meanwhile there's a lively community gathered in a facebook group or having regular events with meetup.com. Clearly the most useful thing for the wiki to do in that case is have a big fat prominent link to help people find the active community. And people need to have the confidence to blow away old status information if it's not useful. I'm not just talking about for obscure 3rd-world places. A lot of wiki pages on states and cities of the U.S. are in need of a tidy up. -- Harry Wood (talk) 16:23, 12 May 2016 (UTC)
- Cleanup is a huge task which requires sorting and collecting many pages so that they can be effectively found when searched for, correctly categorized (in all languages), with languages sorted correctly and navigable in a way that is similar and unified across languages, with correct naming conventions.
- Cleaning up old pages is difficult when they are in fact hard to find and not categorized at all, or simply categorized the wrong way.
- Even in the English-only part of the wiki, categories need cleanup and consistent organization. And many pages have been created with extremely poor linking, and absolutely no care at all about how they could be translated. And then left unmaintained as is. Lots of things are then spread everywhere in different states, and this wiki has become over time extremely difficult to navigate when we really search for something: what we find (sometimes) has never been updated or even corrected since the day it was written. And most writers don't care at all about the orthography and terminology (complicating the task when searching for things).
- The cleanup project is huge, but even before attempting to clean up the content (possibly merging duplicate pages), we need to categorize it, collecting all we can find into coherent sets of related information: it is this task that further helps correcting the rest within the pages themselves.
- I've spent countless hours over several years trying to reunite the categories and correctly separate and organize the languages. I've created (or corrected) various templates to help resolve these issues, notably because this wiki is and should remain international and open to all active languages of the world, and as easy to use for English native speakers in the UK as for users in other parts of the world: we need more people involved locally, working more easily in their language. We can't stay with only a few people in the UK trying to do something in other parts of the world where they've never travelled, and for which all they have is a few approximately translated resources and anonymous satellite imagery. This work consists of lots of small incremental changes, lots of searches, resolving red links in many places, fixing broken redirects, fixing the orthography (notably in page names), fixing some layout issues (for languages written in RTL scripts), and building or improving reusable templates that will help translators do their work with less effort.
- But this work is rarely appreciated. Many people (even translators) don't realize all what has been made to facilitate their work, or don't understand why some things are done in one way rather than another. Or don't understand why a page is incrementally updated multiple times (sometimes in a short time, because further updates or corrections require first performing searches or checking other pages).
- Thankfully, MediaWiki provides us with useful Special:Pages. But there are still tricks to do in the MediaWiki syntax (notably because this wiki is not Wikipedia and does not support some extensions found in other international wikis like Wikimedia Commons). For a long time this wiki used an outdated version of MediaWiki, and we still have limitations in terms of capabilities (this wiki runs on a much smaller server than Wikimedia Commons). There's no support for Lua module scripting, no support for Wikidata or any external database.
- Things can be very complex and cannot be corrected without many intermediate steps (if we don't want to break many pages at the same time). We need a gradual upgrade before we can simplify what is no longer needed. — Verdy_p (talk) 17:14, 12 May 2016 (UTC)
- You really cannot divorce organising wiki pages from their creation. This wiki isn’t and can’t be anyone’s private playpen; it is a community resource to promote and help develop OpenStreetMap, otherwise it loses its point and may even deserve to be shut down. Ghost town categories that people don’t want to populate because of a hostile atmosphere are as bad as the ghost town status pages Harry was talking about above. Bear in mind that without large numbers of motivated contributors there will be no multilingual pages here; I reserve judgement on whether or not that is a good thing.--Andrew (talk) 21:53, 19 May 2016 (UTC)
- What are you talking about? Which kind of "divorce"? What are the "ghost towns"? And how are they related to multilingual contents? The "hostile atmosphere" was also not spoken about. I really don't understand any meaningful word in your remark.
- All I did personally was help collect the many pages that were extremely badly linked, difficult to search for, and then difficult to translate in a useful way, as they were largely disorganized. This is part of the general cleanup that this wiki needs. Progress is slow, yes, because it is incremental. But the more we progress, the more there are new useful (and more accurate) contributions, not just in English but in many more languages. And there are more pairs of languages being worked on to extend the collaboration.
- Look more precisely: I've never changed the general organisation which was already present, but I unified things so that they offer the same compatible framework across languages, with the same tools offered to them as much as possible. Sometimes I've found duplicates that were maintained separately instead of being maintained only once. Everywhere I made navigation possible in one click; even if a translation was missing in a given language, it was easier to return to one's preferred language.
- Also there are tons of bad assumptions in tricky details of some templates. As much as possible I avoided breaking things by maintaining maximum compatibility during the transition, using transitory parameters in templates to help make the transition (until the rest is cleaned up and these compatibility things can be removed when no longer needed).
- But yes, this is a difficult task if we don't want to break everything and completely mix up the contents. Most of the work is to help organize things. I did not mean that any existing content had to be deleted; in fact almost everything is kept, including for historical reasons (to know from where we started, but also because there are remaining things that people started to use in the past, and they don't know where others have progressed to make things better). OSM is a worldwide community, not just a British one. We are building a map of the world, and the best maps will come with data from local people where they live. These people must be able to recruit locally and have supporting data in their language.
- For many years, this wiki has suffered from many people trying to do similar things but in different ways, and over time this has accumulated a lot of contradicting practices. But consider all I've done over the past months: the wiki has largely been enhanced to be more usable and easier to search and navigate. I've also documented many things that were not documented, and corrected many true errors left all around (creating many broken links, and broken HTML that caused very bad layout or incompatibilities across browsers, notably for mobile users). But this task will never end, and now there's a good stable base for the most important things.
- But still not all is perfect, notably the "Map Features" page, which has grown too much and has become impossible to maintain. We still need more documentation for more tags, but they won't fit in the single Map Features page. That's why we must prepare the way to split it (we have no other choice). But we cannot split it into convenient subsets without first categorizing the content correctly (and accurately) into subgroups, and performing cleanup between some category topics that are mostly the same or not grouped in a meaningful way. How do you expect us to do that without many incremental steps and searches?
- Note that I am not concentrating my efforts only to English, as much as possible I do that across all languages at once, so they all benefit the same general framework for navigation (actual translation of content is not my direct priority but it's easier to do when things have been prepared for that).
- I've not deleted any content, I will not correct linguistic errors in many languages, native speakers will do that better or faster than me. Just consider from where we started months ago, and when I initiated the Languages navigation bar which works everywhere, even if it has some limitations for languages other than those in the "top-50". I know that you want to change some things there, but these changes are in fact minor (as long as you don't break them more). When I started there was content usable only in English and German, but they were done compeltely differently. There were some other contents in Russian. 7 languages had their own dedicated namespaces but working differently and without working links between each other. Outside these 7 languages it was completely impossible to navigate. Even English native speakers did not know where was the content in other languages, even if they could read part of it.
- Now people may want to register themselves or not in this wiki, this is their choice, and there's a place for that with Users categories if they want to connect each other. Nobody is required to register. But nobody should be required to use only the English language. — Verdy_p (talk) 07:17, 20 May 2016 (UTC)
Consolidate all the "wiki-fiddling" pages together?
Currently we have Wiki organisation, Wiki guidelines, WikiProject Cleanup, and Wiki Cleanup Drive. Can we consolidate all of these things into 1 or 2 pages? Maybe one for what things should be (organisation/guidelines) and another for coordinating efforts to bring the pages to what they should be (cleanup). We all know that the wiki is a mess. --seav 04:43, 25 February 2010 (UTC)
- Well it started with just WikiProject Cleanup. I think Peter originally decided to split them out. We now have enough content on Wiki guidelines, that its quite useful to have it as a separate page. -- Harry Wood 19:40, 4 October 2010 (BST)
Community pages
They are pages, mainly Project_of_the_week & Community Updates and also HOT, and, why not, of the Wikiteam, for community animation.
I have started creataing templates for the community Updates, helping publication. I had made templates using mediawiki tricks for the Haiti Project (see the kind of feed WikiProject_Haiti/News)
It is possible to increase the reuse the content of those pages ( and maybe of tag pages...) in other pages by using whole content templates that can render the page differently, to make a summary of the Community updates, of the Project of the Week, of the HOT news appear in the main page (like the image of the week), to get the main info of a tag anywhere...
I put an example on my sandbox : User:FrViPofm/Bac_à_sable. Yes the layout can be improuved :-)
FrViPofm 11:05, 10 February 2011 (UTC)
Clean up the wiki
Here is my suggestion in order to clean up this wiki, which seems to be a little messed up:
- Namespace: some namespace should be added: "Key", "Tag", "Relation", "Project" (or "Wikiproject"?). No page should have a ":" in the middle of the name (ex: "France:Paris" (and Paris exists too!))
- In the namespace 0 ("{{ns:0}}:" returns ":") should be teh place for cities and in the namespace ("{{ns:4}}:" returns "Wiki:") should be the place for wiki organization (so this page should be moved there together with translation page))
- Translations: Prefix should be avoided (otherwise you have to edit wiki files in order to add every new language) in order to prefere subpages (like in Meta Wikimedia): NO "IT:Main Page", YES "Main Page/it". In this case it is possible to manage translation in a simpler way with the template:Languages. In order to permit the traslation of the title a new feature should be added in MediaWiki:Common.js in order to permit translation of the title with a parameter in Languages template
- Subpages: someone should enable the subpage option in order to add this feature to namespace 0
- cities: they should be collected in categories by land (Cities in Italy) or - better - by other kind of division (City in province of Milan). Every page rilated to a city (example, public transport) should be a subpage of the city (NO "Milan metropolitan", YES "Milan/Public transports/Metropolitan").
Other feature I will add later :) --★ → Airon 90 16:31, 9 November 2012 (UTC)
- So number3, move every page which has a language prefix, to have a language postfix (and subpage) instead. That's a colossal change. Maybe it would be an improvement with some technical advantages, but I don't see how it's ever going to happen without creating a massive amount of disruption.
- If we do number 1 at the moment, add a 'key' namespace, then we'd have to multiply up and have a 'DE:key' namespace too, and every other combination. This would be horrible, so we can't really do number 1 without first doing number 3.
- I don't really understand number 2, but I think you're saying we should have an 'OpenStreetMap:' namespace to keep all meta-pages. That's a very wikipedia concept. It's not so important to us that we keep meta content separate from content. The key thing to realise with this is, this wiki is "merely" the documentation for the OpenStreetMap project. The wiki is not the project. The wiki serves the project. We don't have the same clear separation of 'articles' versus other things like you see on wikipedia. Nevertheless there may be a case for moving some meta things into a different namespace.... but same problem above applies.
- Number 4. This is already enabled. To be honest I wish it wasn't because in my opinion people get quite carried away with using subpages to try to group everything hierarchically, instead of just making sure things are cross-linked sensibly. See my subpages rant here.
- Number 5 Categorisation is a good idea. I believe we are already do a bit of that e.g. Category:Cities in Italy. In England there's some hierarchy happening: Category:Regions in England. Can't say I use them myself, but it's there for the classification pedants. Again this is not wikipedia, and the aim is not to create a city page for every city in the world. These pages are about coordinating a community of mappers. Subpages may be a good idea for the case you've given, or a transport network in a city.
- Overall you seem to be proposing some pretty sweeping changes here. I tend to think some restructuring might be good, but for some ideas the pay-off in terms of some minor technical advantages, would not be worth the turmoil and broken links. You're new to the OpenStreetMap wiki (welcome!), so I would suggest to you that you start a little less ambitiously. There's plenty of other clean-up work to do, including some important changes across lots of pages (Wiki maintenance tasks) Also remember that the wiki is not the project. You've just fired talk page messages at 18 different OpenStreetMappers, but you'll discover a fair proportion of these people don't care that much about discussing wiki cleanup. They're big OpenStreetMappers, but not big OSM wiki editors. Doing some OSM mapping is a great way to gain an understanding of what's really important in OpenStreetMap, and this will help guide you towards the most important aspects of wiki cleanup.
- -- Harry Wood 14:00, 12 November 2012 (UTC)
- Disruption can be avoided by using bots, which can move pages and find links to change. It's not so much difficult but our code is not simple to write.
- You're right about the first paragraph (1 implies 3 already done).
- About the second paragraph: wiki organisation is not a page of namespace 0 but it should be a page of the namespace 4 (the correct link should be "OpenStreetMap:Wiki organization" or "Project:Wiki organization" too), Milan is a page of namespace 0.
- In conclusion, I don't own a tracking device, so I can't help the project, but I love to much wikis so I give a help there.
- I know that I wrote to all the sysops but I think this is an important thread to be discussed. --★ → Airon 90 14:46, 13 November 2012 (UTC)
- As for 1/2 I don't the content of this wiki well enough to judge whether the current fake namespaces are really a bad thing, nor what the boundaries of content/project namespaces should be; 4 also depends on them because currently subpages are surely needed in main namespace.
- However, 3) is the wrong way to address that problem (it's the old-old-superold way Meta and other wikis did it), so in the end you depend on Talk:Wiki_Translation#Translate_extension which is the correct solution (and the main cleanup problem of this wiki). --Nemo 10:30, 17 November 2012 (UTC)
Key sites: Proposal to restructure the main page, Contribute map data (as an example of a navigation page) and Proposal for a navigation concept.
Inspired by a presentation about the redesign of openstreetmap.org I propose to organise the navigation of this wiki by use cases. This is already partially done as the main page clusters some entry points to the content by groups like "Contributing" or "Developers". But, for example, "Software" (in the portal block on the current main page) groups features, not use cases. This group of entry points dont't help satisfying one same intention of a user. Secondly I would like to consequently group related entry points together. "Related entry points" in this meaning refers to a relation between use cases. A good counter-example is the very left column of this wiki. It clusters some maybe "most important" links. But this doesn't help at all to orient people. Navigation should make content accessible, of course, but it also needs to help people to orient oneself. This leads to my third intention: I would like to lighten the main page and consequently introduce navigation pages. "Hiding" content a bit more doesn't disturb too much the accessibility, if you have underlying navigation pages leading to all the content. On the other hand, it helps a lot in orienting people and give them a fast overview of what they can find in this wiki. Finally, I would like to document this navigation concept. Thus, people extending the navigation can better understand the main idea and keep a coherent navigation.
As a first step I would like to
- Link an existing/ create a new navigation page for every group of use cases. These navigation sites share a common layout, which makes them easily distinguishable from content sites.
- Clean up the main page. See my Proposal to restructure the main page.
- Create a new page explaining this navigation concept.
--Cantho (talk) 17:06, 26 January 2014 (UTC)
- You should link this proposal also at Talk:Main_Page since this one seems to be involved. I have only quickly skimmed this lot of text. Two weeks? That is too short, in my opinion. You will not get that much comments in this short amount of time. Not everybody has the next two weeks free. ;-) --Aseerel4c26 (talk) 13:04, 28 January 2014 (UTC)
- Yeah, it was pretty much too long :) I shortened it (and removed also the proposal with the two weeks). --Cantho (talk) 19:58, 28 January 2014 (UTC)
- Created Proposal to restructure the main page and shortened the text here a bit more. I also started to (re-)write the navigation pages. See the result at Using Openstreetmap. --Cantho (talk) 08:07, 2 February 2014 (UTC)
- In my opinion, the main problem with your proposed main page is that it's too long and has too much text to read. Currently the navigation fits on a screen without scrolling. I'm worried that many users will give up instead of carefully reading through the entire list.
- There's also some problematic details such as lumping together technical details for developers with really basic "user manuals", and I also think some of the links are just not important enough to warrant inclusion on the main page. But it makes only sense to talk about details when the basic concept is nailed down, and right now it just seems too verbose to be an effective navigation. --Tordanik 17:24, 7 February 2014 (UTC)
- I shortened it heavily. Now the software development of and with OSM is left together. For each group of use cases there are two links to major content and a last link to a corresponding navigation page. Some groups have a bigger link as entry point for newbies. I will continue work on the navigation pages. Please add more criticism if you have! --Cantho (talk) 06:51, 8 February 2014 (UTC)
- This looks pretty good now, I could see that as a working main page replacement. Some feedback:
- The "Use OpenStreetMap" section has web services and GPS devices, but desktop software and mobile apps are missing. I suggest to add (perhaps instead of the third item?) a link to Software.
- Added them, removed the second item (links to education and research). The idea is to have two bullets with links to major use cases and a third one linking to a navigation page, which covers all the rest. Thus the main page stays short, but the major content stays accessible. --Cantho (talk) 05:11, 10 February 2014 (UTC)
- The icon selection seems somewhat odd based on what I associate with them. In particular, swapping the "use" and "develop" icons would fit better imo.
- For the links from the current main page that you chose to not include (Notes, Imports, Help), I'd be interested in your reasons for the decision.
- Basically, for me an attractive main page must not include douzens of links. This is dounting to newbies and people who dont have an overview of the wiki's content. To give an idea where to start, two major use cases are bulletet for each group of use cases. To keep the other major content accessible, there are the "more..." links at the end of each group. They lead to overview pages (I call them navigation pages). I will try to find a good layout (fitting to the main page layout) which makes them identifiable as navigation pages and fast to use.
- Concerning the links you mentioned: Notes and Imports are a way to contribute geodata, thus I will write them in the corresponding navigation page. I thought the map features are much more important and the Mapping projects are more worth to attract people to them. The help page I just don't find much helpful :) It is too broad, guiding to map features, wikipedia, the develop page... If you need help on, for example, contributing map data, the beginners guide is linked right next to the heading on the main page, you don't need a help page for that. What I think would be useful instead is a link "new to this wiki" or "how to use this wiki", explaining amongst others the search box in the top right. But I don't stick to the selected links! For me it's important to lighten the main page and have a clear structure with a concept behind. Thanks for your feedback so far! --Cantho (talk) 05:11, 10 February 2014 (UTC)
- That's all I can think of right now. --Tordanik 13:01, 8 February 2014 (UTC)
I wrote an example of a navigation page. --Cantho (talk) 07:28, 10 February 2014 (UTC)
- I was not studying it very intensively as I am working on something else now, but I can say that I like to graphic layout. I believe it is much nicer then it was before. I like it so far. Chrabros (talk) 10:15, 8 April 2014 (UTC)
- There is a substantial difference in the length of the left and right column, this makes the end of the right column lost by first visitors, therefor move the news box back to the left column (as on the existing Main Page) and re-instroduce the Portals box also in the left column will balance out the two columns. I like the simplification and the icons of the new navigations/links section. Corresponding navigation pages need to be made before this can be put onto the live Main Page --Skippern (talk) 17:14, 8 April 2014 (UTC)
- Thanks for your feedback! I skipped the Portals box to reduce overloading of the main page and to move a lot of direct content links to the navigation pages. I will ensure that all important content stays accessible (I will provide a list with all main page links I removed and where to access them in the new layout).
- Concerning the column lengths, I agree. But I would like to keep the boxes on the right, to keep the main entry points in the middle column and the additional stuff in the right column separated. I propose to reduce the visibility of events and news (on the main page) to three items each. This will also solve the length issue. What do you think? --Cantho (talk) 06:32, 9 April 2014 (UTC)
- I have mixed opinions. I'm not a fan of the proposed main page as it changes things for very little benefit. The current page is well categorised (view, contribute, develop, ...) and the proposed format reorganises this but little more. I also don't like how the beginners guide is not obvious and that the Portals and Meta Info boxes have been removed. I am however a fan of the Contribute page but my concern is that you'll end up with a lot of duplication - for example, you have a link to legal stuff on the contribute page, but this would probably also go on the about page. I'm not sure what you'd put on the proposed About and More Help page. The problems with the wiki are far beyond navigation ones. We have a tendency for pages to be too long, out of date and overly complex. --RobJN (talk) 22:11, 8 April 2014 (UTC)
- Thanks for your feedback! I don't think that the current main page is well organized. It structures the entry points somehow, leading first to a overloaded main page with dozens of links and second to partly feature oriented navigation like the "software" group in the Portals block. The benefit of my proposal is not to give yet another arrangement of all the links with some more icons, but to propose a concept on which all future decisions about navigation can be based. The main idea is to design navigation through use cases (and only through use cases). Another important aspect is to lighten the main page. Thus the portals and meta info boxes are removed and underlying navigation pages are established. This hides content a little bit more without bothering content accessibility.
- The beginners guide is intentionally only obvious to people who want to contribute map data and look at the corresponding block.
- There will definitely be some duplication as you described. But I don't see a problem with multiple navigation paths to desired content. That's different from content duplication. --Cantho (talk) 06:32, 9 April 2014 (UTC)
- Regarding the main page: The main article links (as they would be called in the normal pages) are intermixed with some picked topics. I think this is not that easy to understand. I would expect the link in the heading itself or below it. The list of usecases and subtopics looks quite cluttered / unstructured to me. But I am not sure why... In general I think the structure into usecases is good. E.g. on the current main page there is "map features" and "mapping projects" flying around - I am not sure why, maybe because they can belong to several usecases? And the "meta" box is quite essential. Clicking through (to which usecase?) is too much effort. If some system is down, it should be visbile! All in all: what about taking the current main page and just replacing the usecases with your more clear ones?
Regarding the usecase page: good idea in principle, but isn't this all covered by categories? What about cleaning up / structuring the category system? Then one could just link the "contributing" category.
I am not sure if it is worth the effort... Yours is not much more than an idea - not something finished which can be put into as replacement. Someone should look into the old discussions and the reasons for the current layout. It should be made clear why the old decisions are not the best ones anymore. And, last but not least, I do not want to invest much time into this. I think the current situation is not that bad. There are many wiki pages which need work more urgently (all that, taking mapping aside...). We are too few people here... in general and, more drastically, in the wiki. --Aseerel4c26 (talk) 21:55, 10 April 2014 (UTC)
- Thanks for your feedback! Sorry, I don't understand what you mean with the main article links and the picked topics. Can you explain more or give an example? Regarding the meta box: I don't think that a lot of people visit the wiki to check if all systems are running, and those who do are more advanced users who are able to find a subpage using the search box. But if you keep at wanting that box, we can add it again. Regarding the portals box I instist more to remove it. It adds too much links to the main page, and it adds a feature-oriented/ randomly-organised navigation, which disturbs in my opinion. The categories can definitely help to structure the content, but you cannot easily explain links, order blocks of links in your favorite order and use images as all done in the example navigation page. Don't worry about the effort, I will finish the two missing navigation pages and also I will write a page explaining the navigation concept, thus in future nobody needs to read tons of discussions to understand it :) --Cantho (talk) 07:24, 13 April 2014 (UTC)
- Example for the third section: main article (you call it navigation pages) links is Contribute_map_data and the picked topics is e.g. Map Features. Main article links could be like in Elements (by the way: it was broken for 14 days now due to unfortunate template edits!) for uniformity in the wiki.
- Status: No, maybe not to "check if all systems are running" but to see what is wrong (or if it is just for them) if something does NOT work. Yes, you could design a template which only shows up if something is NOT working. --Aseerel4c26 (talk) 01:23, 16 April 2014 (UTC)
- Ah, now I understand your point with the main article and picked topics. I will think about it. The meta box is back again, together with a new layout (see below) :) The idea with a information only showing up if something is wrong sounds good! Maybe later... Good night --Cantho (talk) 01:45, 16 April 2014 (UTC)
- Now the links to the navigation pages are bold. --Cantho (talk) 12:26, 16 April 2014 (UTC)
- Still they are list elements like the direct topic links. What about a "… more" as last list entry? That at least is differing by a distinctive text. But I think the best place is below the images, where you now e.g. linked the beginner's guide (not clear why exactly this one is linked there...). Oh, by the way: you know about the nogo to use tables for layout, right? ;-P --Aseerel4c26 (talk) 17:37, 16 April 2014 (UTC)
- I highlighted it a bit more, also with a "...more" beginning. The big link below the image is reserved for an entry point for newbies, that at least is the idea. Thus the first attention is directed to that beginner's entry point. The second attention comes to the list below, containing some most important related links (to be discovered by newbies and to be fast accessible for advanced users) followed by a link to the navigation page giving a more comprehensive overview. Thus the user can discover the content step by step in a reasonable order. To me it was more important to direct newbies to a proper entry point than providing everyone first with the navigation page. I think the navigation is still accessible, but true, people have to read through the three-item-list before they find it. --Cantho (talk) 16:31, 17 April 2014 (UTC)
- To get rid of table layouts in a really good way, I would need access to the css-files. Other solutions have also drawbacks, like documented here. --Cantho (talk) 16:57, 17 April 2014 (UTC)
Hi there. Thanks for your work, it is true the main page needs some work. I am sorry I don't have the time to read through all the previous comments at the moment, but here is my general feeling:
- I believe the Licensing and Help parts are very important - people need to know how to legally use the data, and where to look for help.
- It has been said before, the column length issue. However, I do agree that the Meta info box is not necessary in the front page - I imagine not many people actually use it, and people who need will find the information in another page.
Cheers Chtfn (talk) 14:25, 11 April 2014 (UTC)
- Thanks for your feedback! There will be links to licensing and help stuff in the navigation pages (as already in the example navigation page). The column length issue will be reduced by limiting news and events to three items on the main page (with links to the whole lists). --Cantho (talk) 07:24, 13 April 2014 (UTC)
I now shortened the right column by limiting news and events to three items each, with links for further reading. According to the wish of Aseerel4c26 I re-added the meta box. --Cantho (talk) 16:53, 15 April 2014 (UTC)
- Limiting the calendar to 3 events completely defeats its purpose. It should allow visitors to discover and prepare for events in the future, not just today's and tomorrow's events. --Tordanik 21:08, 15 April 2014 (UTC)
OK, I changed the layout so that we have all events on the main page without overhanging column and still a separation between navigation and other stuff. I also added a help link. The content of section "Other ways to contribute..." moved to a navigation page How to contribute. The meta box is still there, but further down, which hopefully gives a good compromise between "quite essential" and "not necessary" :) --Cantho (talk) 01:36, 16 April 2014 (UTC)
I added a link to osm.org in the Use OpenStreetMap-section. --LordOfMaps (talk) 07:47, 18 April 2014 (UTC)
- I would like to reserve the bigger link below the image for an introductory page, further description you find in my new proposal for a navigation concept. What do you think of how I changed the main page proposal again? I kept the world map link and kicked out the get maps links. --Cantho (talk) 21:26, 18 April 2014 (UTC)
- I like this proposal, and I agree to apply it to the Main page. --Władysław Komorek (talk) 10:38, 23 May 2014 (UTC)
Icons
I want to propose the following icons:
--LordOfMaps (talk) 19:19, 17 April 2014 (UTC)
Omitted links
There is now a list of omitted links, indicating how users should be able to find them (starting on the proposed new main page). --Cantho (talk) 11:23, 18 April 2014 (UTC)
Not yet accessible are Twitter and OSM_Blogs, which I propose to incorporate into the news box. --Cantho (talk) 11:25, 18 April 2014 (UTC)
I wrote down my Proposal for a navigation concept, which I propose to include as a section to Wiki organisation. --Cantho (talk) 21:06, 18 April 2014 (UTC)
I approve this proposal. Since the main page which follows this concept was adopted i would propose to add this text.--Jojo4u (talk) 15:05, 18 July 2015 (UTC)
Proposal to change the main page
I now proposed to change the main page. Please give your feedback/ vote. --Cantho (talk) 10:39, 23 May 2014 (UTC)
Category:Proposed features was moved to top level
It was hidden in Technical for some reason. Proposal process is way better with Template:Proposal Page and Proposal_process#Proposal_list.
Duplicate cat Category:Proposals_admin is now HIDDENCAT and located in Category:Categories>Wiki>Category:Proposals_admin Xxzme (talk) 07:53, 23 February 2015 (UTC)
Replace most of switch2osm.org links with Deploying your own Slippy Map
Reasons:
- inaccessible for translators and as result only English and French guides. You can see it right now and it was so 1-2 years ago.
- inaccessible for readers (not a wiki! no talk page to ask question)
- lack of recent promotional materials or consistent updates
- it doesn't matter where guide is located, it will be obsolete
- Category:Technical guide is not ready yet to fully replace switch2osm.org but it is better starting point now. I guess switch2osm.org was created for that reason. Over-categorization in technical articles was significantly reduced by me, now we have to update important guides with more recent examples and good external links (if we still need any)
Only 2 pages should be left with switch2osm.org link:
Xxzme (talk) 04:16, 17 September 2015 (UTC)
- I sort of agree in that I think there's a good principle of wiki linking which I should probably express on the guidlines somewhere. It's basically an extension of what I wrote about wikipedia Wiki guidelines#Wikipedia linking. Wikis are good at doing internal wiki linking. We should interlink often. External links can be clumsy by comparison. So therefore I imagine we should probably see more interlinking with the Deploying your own Slippy Map page, than we do external linking to switch2osm.org
- But you're being too extreme when you say Only 2 pages should be left with switch2osm.org link. No need for such a sweeping change. switch2osm.org is the site developed and curated by RichardF and communications working group to help people with certain types of technical problems around switching. It's part of "our family of websites", so naturally will see quite a lot of linking throughout the wiki (and elsewhere on OSM related websites) Basically if that's the most useful link to provide, then let's provide it.
- Incidentally Deploying your own Slippy Map is a weird clumsy name for a wiki page. It might be more natural to link to it more if it had a better name. The move would need to be done carefully though
- -- Harry Wood (talk) 11:14, 17 September 2015 (UTC)
Rules for arbitrary titles
Hi, I am imagining that, as in Wikipedia, there are rules for create arbitrary new pages: a page title must in the correct namespace context. Examples:
- All contents about a country or OSM Chapter are subpages. E.g. Brazil (BR) have as root WikiProject_Brazil, and any Brazilian's project must be a WikiProject_Brazil's subpage (e.g. WikiProject_Brazil/Modelos_de_Contrato).
- All key documentation must be into the key: namespace, e.g. key:wikidata.
- Any personal content or personal's initiative without use or utility for the community, must be an user's subpage.
- etc.
- No one here is "the owner" of the page, except in the case of the user's page or an user's subpage.
So, where the rules? --Krauss (talk) 16:31, 30 August 2018 (UTC)
- Well, I would take a look at Wiki_organisation#Pages_naming_convention. From the points mentioned, the first and second one is definitely wrong (not regulated), the third one is questionable (Why writing unrelated things in this wiki anyway? Chances would be high to get this deleted/marked as spam.).
- Please remember that this is a smaller wiki than Wikipedia. There are not rules there for every kind of incident. Some things are not regulated, follow conventions only or are uncontrolled/decided in a case by case analysis.
- If you tell me about your intentions, I may be able to assist when creating a new page... --U30303020 (talk) 22:07, 30 August 2018 (UTC)
- Hi @U30303020, the main "dilema", where we need some help at this moment, is about correct namespace for our "near personal" projects — when we drafting, we still do not know if the project will be widely used by the community or not. See this other example: need to be a language (pt-BR) subpage? a WikiProject_Brazil subpage? a user:author subpage?
- The example was placed correctly.
- The pt-BR-namespace was removed, for some reason I do not know right now (I was not involved in it).
- There are two options:
- -- main namespace page (like your example)
- -- user subpage (if it beocmes more popular, we can move it to the main namespace)
- In case of a translation, this page will be moved anyway. U30303020 (talk) 08:29, 3 September 2018 (UTC) | https://wiki.openstreetmap.org/wiki/Talk:Wiki_organisation | CC-MAIN-2019-39 | refinedweb | 9,092 | 60.95 |
In SAP you can add different files to the materials. Files added
to the materials in SAP are shown on the product details page in
the Sana web store, on the Attachments tab.
For example, you are selling electronics or some complex
equipment used in mechanical engineering, and you need to provide
some manuals to your customers online. These documents can be
attached to the material in SAP and shown on the product details
page in the Sana web store.
Sana also provides a possibility to attach files to products and
sales documents from the file system on the Web server. For more
information, see "Product
and Order Attachments".
The SAP user must have the necessary permissions to be able to
add attachments. and then: Create >
Create Attachment. Find the necessary file and add
it to the material.
By clicking Attachment list, you can see the
list of all attachments added to the material. You can open the
attachment list only when at least one attachment has been added to
the material. In the Service: Attachment list
window, you can also add and remove attachments.
In SAP you can choose which material attachments you want to
show in your Sana webstore.
In the main
menu of the Sana add-on
(/n/sanaecom/webstore), click Attachments
Overview (/n/sanaecom/att_ovrvew).
Step 1: Enter the Webstore Id
and select Material Attachments. You can use
Input Parameters as a filter to narrow search
results and show only those materials that you need. Click
"Execute".
Step 2: In the Attachments and Product
Images window you can see the list of materials and files
attached to them. If you select the Visibility
checkbox, then your material attachments will be shown on the
product details pages in the Sana webstore.
At the top of the window you can see the buttons Select
All, Unselect All and Update
Visibility which can be useful for quick managing of
material attachments visibility.
When attachments are added to the materials in SAP, you need to
rebuild the product index. Open Sana Admin and click:
Tools > Scheduled tasks. Run
Rebuild index for the Product
import task. | https://help.sana-commerce.com/sana-commerce-93/erp-user-guide/sap/material-attachments | CC-MAIN-2020-29 | refinedweb | 355 | 55.34 |
name 'driver' not defined, Selenium Webdriver python3
meaning of names dictionary
name dictionary
names for girls
names for boys
name generator
name meaning search engine
behind the name
I'm trying to open a website using Selenium WebDriver Chrome, but haven't even gotten to that as my code keeps producing errors. I have already fixed one by doubling the '' in the directory for Chrome Driver.
I am using Pycharm. I would like to know why this is happening, and a fix for it.
The error is:
Traceback (most recent call last): File "<stdin>", line 1, in <module> NameError: name 'driver' is not defined
and this is my code:
from selenium import webdriver Browser = driver.Chrome(r'''C:\Users\ballc\Downloads\chromedriver_win32\chromedriver.exe''')
from selenium import webdriver
you can see the module name is
webdriver, so you should use that name. As the error states,
driver is not defined (you never defined it).
so..
Browser = driver.Chrome()
should be:
browser = webdriver.Chrome()
Given name, synonyms: title, denomination, designation, honorific, tag, epithet, label, naam, moniker, handle, appellation, cognomen, allonym, anonym, appellative Name definition is - a word or phrase that constitutes the distinctive designation of a person or thing. How to use name in a sentence. a word or phrase that constitutes the distinctive designation of a person or thing; a word or symbol used in logic to designate an entity…
Try this code :
from selenium import webdriver driver = webdriver.Chrome(executable_path = r'D:/Automation/chromedriver.exe') driver.get("")
Name, "as usual, the big race will lure the top names" Name definition, a word or a combination of words by which a person, place, or thing, a body or class, or any object of thought is designated, called, or known. See more.
Name, synonyms: celebrity, star, superstar, VIP, famous person, important person, leading light, celebutante, big name, luminary, mogul, person of note, dignitary, personage, worthy, expert, authority, lion, celeb, somebody, megastar, big noise, big shot, bigwig, big cheese, big gun, big wheel, big fish Define name. name synonyms, name pronunciation, name translation, English dictionary definition of name. n. 1. a. A word or words by which an entity is designated and distinguished from others.
Baby Names at BabyNames.com, "hundreds of diseases had not yet been isolated or named" Explore popular baby names and selection tips, learn baby name meanings, get ideas for unique boy and girl baby names from the editors of Parents magazine.
Behind the Name: The Meaning and History of First Names, synonyms: call, give a name to, dub, label, style, term, title, entitle, baptize, christen, clepe, denominate, called, by the name of, baptized, christened, known as, under the name of, dubbed, entitled, styled, termed, described as, labeled A name sticks with us from birth until death and although we often have no control over it, our names can define us. We can all thank our parents for our name. You can search for your own name and pull out amazing information from the massive name database.
Tuscaloosa teen wins NASA's "Name the Rover" contest, names Mars helicopter, specify (an amount, time, or place) as something desired, suggested, or decided on..
- you imported
webdriver, what should be
driver?
- @PRMoureu is it not supposed to be webdriver? I just used driver instead, it said "no module named driver"
- you are supposed to import
webdriverbut why not using it instead of
driver?
- @PRMoureu i did import webdriver do you mean driver.Chrome?
- yup! what
driver.Chromeis supposed to be ? why don't you want to use
webdriver.Chromeinstead ? | http://thetopsites.net/article/50887238.shtml | CC-MAIN-2020-50 | refinedweb | 589 | 58.72 |
One of the reason why Java language has been so useful and used widely is the set of APIs that comes with the language (and 3rd party APIs like iText etc). Using these APIs one do a whole lot unimaginable stuff.
Java Reflection API are one of such APIs that extend the horizon of a Java programmer and enables him to code some really great stuffs..
Dynamic Java.
Let us see an example of Dynamic class loading using Java Reflection API. Following is our DemoClass that needs to be loaded dynamically and method demoMethod() needs to be called.
class DemoClass { public String demoMethod(String demoParam) { System.out.println("Parameter passed: " + demoParam); return DemoClass.class.getName(); } }
So to load above class file dynamically following code can be used.
public class DynamicClassLoadingExample { public static void main(String[] args) { try { ClassLoader myClassLoader = ClassLoader.getSystemClassLoader(); // Step 2: Define a class to be loaded. String classNameToBeLoaded = "net.viralpatel.itext.pdf.DemoClass"; // Step 3: Load the class Class myClass = myClassLoader.loadClass(classNameToBeLoaded); // Step 4: create a new instance of that class Object whatInstance = myClass.newInstance(); String methodParameter = "a quick brown fox"; // Step 5: get the method, with proper parameter signature. // The second parameter is the parameter type. // There can be multiple parameters for the method we are trying to call, // hence the use of array. Method myMethod = myClass.getMethod("demoMethod", new Class[] { String.class }); // Step 6: // Calling the real method. Passing methodParameter as // parameter. You can pass multiple parameters based on // the signature of the method you are calling. Hence // there is an array. String returnValue = (String) myMethod.invoke(whatInstance, new Object[] { methodParameter }); System.out.println("The value returned from the method is:" + returnValue); } catch (SecurityException e) { e.printStackTrace(); } catch (IllegalArgumentException e) { e.printStackTrace(); } catch (ClassNotFoundException e) { e.printStackTrace(); } catch (InstantiationException e) { e.printStackTrace(); } catch (IllegalAccessException e) { e.printStackTrace(); } catch (NoSuchMethodException e) { e.printStackTrace(); } catch (InvocationTargetException e) { e.printStackTrace(); } } }
The above code is pretty much self explanatory. We have used
ClassLoader.getSystemClassLoader() method to get instance of class java.lang.ClassLoader. We have loaded our demo class DemoClass using method loadClass() of ClassLoader and invoked the desired method.
Sweet quick refresher of using Java Reflection. I love the ability to read private methods and invoke them using Java reflection.
Java reflection can help us load only certain required classes by taking the decision at run time instead of loading bunch of classes during compile time.
very good explanation of java run time environment this is really gud article for those people who started a carrier in java for initial information……
very good example, and if my method has more than one parameter then what are the changes should be done in the above code
@Madan – If your method has more than one parameter than you can pass it while calling invoke() method like below.
thank you so much.
Plllz can any one help me i want load a class but i have only fille.class ,,,i haven’t .java file how can i make it ???
@Amira: You can load .class file dynamically by just placing .class file in your project’s (JVM’s) classpath and calling
Clasloader.loadClass()method as specified in above example.
So when we have multiple classes, multiple methods to be loaded, we would need the loader to be aware of classname , method1name(params..) etc for each class/method right?
Also you say you can place the .class file in your projects classpath – How is this done exactly?
your .class which have class name, method name with all the required parameters pre-available in order to load the .class file.
Hey!!
I am getting NoClassDefFoundError while executing the above code block.
Here is an stake trace:
Exception in thread “main” java.lang.NoClassDefFoundError: net/viralpatel/itext
pdf/DemoClass (wrong name: DemoClass) DynamicClassLoadingExample.main(DynamicClassLoadingExample.java:16)
—–
can you help me to sort it out….
thanks
Hi I have doubt u told this one as dynamically loading classes.my doubt is when class loader is running as a program , already loaded class is got changed now class loader will pick the latest changed one ????.
No. Newly updated class won’t be reflected. Container will retain old version class in memory. To load newly updated class. you must start your server
Sorry for above comment. I improve that.
No. Newly updated class won’t be reflected. Container will retain old version class in memory. To load newly updated class. you must restart your server
Hi,
This is a very nice example to learn dynamic class loading very quickly.
Here the class being loaded have default constructor. But what if we have to load a class which don’t have default constructor? Is there any way to load that also?
Thanks Viral, You are doing great job, your site is so helpful…
Can someone explain me this line??
String classNameToBeLoaded = “net.viralpatel.itext.pdf.DemoClass”
I need to know hwats the package in my project
Thanks Viral. This site is really helpful to me
hey viralpatel can u tell me how to excute the above code……..urgent
Thanks Viral. This page is also good for understanding Java reflection
Hey JKG,
net.viralpatel.itext.pdf.DemoClass has DemoClass as java class name and rest of the things before is a package name. You can give any name in package like –
test.DemoClass but the good way is which is given above..
hey viral, could you plz let me know how to use this code for an constructor instead of a method??
thanx
Hi
What is the difference when we create an object of the base class and call its method?
why we should do reflection here ?
thank you for this code of example it is great to me ?
So what if the class was not in the classpath when the VM started. What if I wanted to add the class to the classpath, then load the class afterwards. Doesn’t the Browser do something like that when loading Applets?
hi, thanks for your usefull example, but I have a problem, when I edit .java file and save it, in runtime when I reload it, it doesn’t change? Can you help me please?
how can I reload a class ?
What a simple and useful way to explain..
Thanks a lot for this information
Hi ViralPatel,
How about to call all the mandatory methods in a class.
It is very difficult to write right with reflection.
Regards,
Sai
Object whatInstance = myClass.newInstance();
with that instance i can directly call method of that class .I want to understand then what is the significance of invoking method using above code .?
If I want to load a class passing a own class, if I do it:
Method method = tempClass.getMethod(“printData”,person.core.getClass());
If the method to invoke is
public void printData(Person person){
System.out.println(person.getName())
}
I recived a java.lang.NoSuchMethodException
Sorry, there are a mistake in the upper example. I write my real case:
The loader:
ICore core = new ICore();
urlClassLoader = new URLClassLoader(new URL[]{new URL(“”+f.getAbsolutePath())});
Class tempClass = urlClassLoader.loadClass(modulePackage);
Constructor constructor = tempClass.getConstructor();
Object tmpConstructor = constructor.newInstance();
Method method = tempClass.getMethod(methodName,new Class[]{String.class,ICore.class});
return (ArrayList ) method.invoke(tmpConstructor,info,core);
The ICore class:
public class ICore {
private String name;
public ICore(){
this.name = “david”;
}
}
the target method:
public class Controller {
public ArrayList getData(String input,ICore core){
ArrayList list = new ArrayList();
System.out.println(core.getName());
}
}
If I do this I have next error:
java.lang.NoSuchMethodException: Controller.getData(ICore)
What is happening? how can I do it?
Can this work without compiling the .java into a .class file? | https://viralpatel.net/blogs/java-dynamic-class-loading-java-reflection-api/ | CC-MAIN-2019-09 | refinedweb | 1,274 | 61.02 |
A common practice when selecting a naming convention is to use verb first resulting in names that read like an English phrase like GetThisThing. While establishing and sticking with a naming convention is wise, other factors are presented that may prove helpful, especially grouping concerns.
Although English words are used as the structure of virtually all programming languages, not all developers speak English as their native tongue. Non-English developers will frequently use English terms for their public classes and may go as far as commenting their code in English.
However the English sentence structure may not be the best when we are attempting to form a valuable descriptive phrase. We should consider the benefits of different sentence structures, most notably the position of the verb, when we form names.
If we have a class that has a value that we need to get, set and toggle we might have functions GetThisThing, SetThisThing and ToggleThisThing. We write our class with our three functions plus 38 more. A month later when it is no longer fresh in our minds, or when Joe Developer in the other department needs to make use of our class and work with an instance of ThisThing it is natural to use IntelliSense to find the functions we need.
We can assume that if we start typing .g that IntelliSense will quickly get us to the GetThisThing function. The same is true for our set function. However, since we don't remember or are not familiar with the code, we may not know that there is a toggle function that we require and end up doing the get, toggle, set sequence ourselves. IntelliSense will show us the toggle function, but not until we start with .t. When there are a lot of functions in our class, we are unlikely to notice it.
If we put the verb on the end, our function names would be ThisThingGet, ThisThingSet, ThisThingToggle. Now when we are working with an instance of the class and we need to work with ThisThing and we start typing .t we end up with all of our functions that address ThisThing together.
The use of namespaces allows us to group functional areas together which aids in code manageability. Grouping functions within a class by beginning names with its group accomplishes the same for a class.
We need to balance readability in an English sense and usability from a developer’s perspective. After all, the code is intended to be read by developers and not a general audience.
This article, along with any associated source code and files, is licensed under The Code Project Open License (CPOL)
DirectorWare wrote:"Personally rather than using regions, I make heavy use of partial classes and separate logical segments of a class into physically separate files. This allows particular segments to naturally appear in separate tabs and be instantly visible in the file tree. For example I might have Class.cs and Class.Xml.cs to separate XML serialization methods."
Dave Jellison wrote:"I see no need for this, imho. Taking your example I would write a generic class (and have one for XML serialization e.g. (static class) XmlSerailizer.Serialize(string path), which also handles caching the XmlSerialization classes because of the reflection overhead due to loading a new type).
General News Suggestion Question Bug Answer Joke Praise Rant Admin
Use Ctrl+Left/Right to switch messages, Ctrl+Up/Down to switch threads, Ctrl+Shift+Left/Right to switch pages. | http://www.codeproject.com/script/Articles/View.aspx?aid=28113 | CC-MAIN-2016-44 | refinedweb | 580 | 62.07 |
Table of contents
Created
8 August 2011
Requirements
Prerequisite knowledge
This guide assumes you are familiar with the Flash Professional workspace and have a basic knowledge of working with FLA files and ActionScript.
User level:
- White (256, 256, 256)
- Black (0, 0, 0):
- White (0xFFFFFF)
- Black (0x000000):
- Transparent white (0x00FFFFFF)
- Opaque white (0xFFFFFFFF):
- Counting in hexadecimal (Kirupa Chinnathambi)
- Reading RGB/aRGB color values (Kirupa Chinnathambi)
- Tracing ARGB (32-bit) hex values (Grant Skinner):
- Brightness shifts an objects color between black and white
- Tint adds a tint to the object
- Alpha adjusts the transparency of the object
- Advanced adjusts combinations of brightness, tint, and alpha
To change the alpha (transparency) of a graphic:
- Create a new ActionScript 3 FLA file and name it alpha.fla.
- Rename Layer 1 to assets.
- Draw a shape on frame 1 and convert it to a movie clip symbol (Modify > Convert to Symbol) named Shape.
Note: If you use a TLF text field, you can skip Step 2. For classic text or any type of drawing object, however, you need to convert the asset to a movie clip or button symbol before applying the color effect.
- Select the shape instance on the Stage and change the Color Effect menu from None to Alpha.
- Drag the Alpha slider toward 0% and notice that the transparency of the graphic changes.
- Return the Color Effect menu to None to turn off the effect.
- Save the file.:
- Create a new ActionScript 3 FLA file and name it alpha:
shape_mc.alpha = .5;
- Run the Test Movie (Control > Test Movie) command to see the effect.
- Save the file.
To change the color of a graphic:
- Create a new ActionScript 3 FLA file and name it color:
import flash.geom.ColorTransform; // Change the color of the shape var ct:ColorTransform = new ColorTransform(); ct.color = 0x0099FF; shape_mc.transform.colorTransform = ct;
- Run the Test Movie (Control > Test Movie) command to see the effect.
- Save the file.:
- Kuler online
- Kuler help pages
- Kuler Services | https://www.adobe.com/devnet/flash/learning_guide/graphic_effects/part02.html | CC-MAIN-2018-47 | refinedweb | 328 | 62.98 |
JBoss ESB 4.2 Milestone Release 3
Trailblazer Guide
JBESB 3.
This guide is most relevant to engineers who are responsible for using JBoss ESB 4.2 Milestone Release 3 installations and want to know how deploy and tes the Trailblazer found under the Samples.
You will need the JBossESB distribution, source or binary to run the trailblazer. You will also need an instance of JBoss Application Server installed with JBossWS and EJB3 support. You can use the App. Server installer from JBoss and install using the EJB3 Profile.
To test the email notification of quotes, you will require a mail server or the information from your ISP/company email server.
This guide contains the following chapters:
Chapter 1, Overview: an overview of the loanbroker trailblazer scenario used in JBossESB.
Chapter 2, In Depth Look: a more detailed look at the various artifacts that make up the trailblazer.
Chapter 3, Deploying and Testing the TB: how to compile, deploy, and test the trailblazer.
The following conventions are used in this guide:
Table 1 Formatting Conventions
In addition to this guide, the following guides are available in the JBoss ESB 4.2 Milestone Release 3 documentation set:
JBoss ESB 4.2 Milestone Release 3 Administration Guide: How to manage the ESB..
Chapter 1
Trailblazer
The Trailblazer is meant to show a commonly understood use-case where the JBossESB can be used to solve the integration problem at hand. The TB is loosely based on the Enterprise Applications Integration book (). The scenario is very simple - a user is shopping around for a bank loan with the best terms, rate, etc. A loan broker will act as middle-man between the user and the banks. The LoanBroker will gather all the required information from the user, and the pass it on to each bank. As the quotes are received from the various banks, the LoanBroker will pass those back to the requesting user. This is a common practice in the financial services world today – it's a model used for insurance quotes, mortgage quotes, and so on.
A simple scenario as described above, actually puts forth several integration challenges. Each bank has it's own data feed structure (xml, delimited, positional, etc), it's own communication protocol (file, jms, ftp, etc), and finally the responses from each of these is very unique to each. A LoanBroker acting as the agent for these institutions must be able to accommodate each scenario, without expecting the bank to adjust anything. The bank's provide a service, and have a clearly defined contract in which to carry out that service. It's our job as the LoanBroker developer to ensure we can be as flexible and adaptable as possible to handle a variety of possible communication protocols, data formats and so on.
This is where JBossESB comes in. Traditionally, an organization would create custom code and scripts to manage the end to end integration between the LoanBroker and each bank. (aka point-to-point interfaces). This is cumbersome, and messy when it comes to maintenance. Adding new banks, and new protocols is not easy. JBossESB gives us a central framework for developing a solution built around a common set of services which can be applied over and over to each unique bank requirement. Adding a new bank then becomes trivial, and support is a lot simpler when you only need to work on one common codebase.
The diagram below shows the scenario at a high level:
* the diagram above is not using any specific notation or style (some of you might be expecting the EIP symbols).
Chapter 2
In Depth Look
The client is a simple JSP page, which routes the submit to a waiting web service. The Loan Request consists of the typical information you would expect from such a request: A social security number (ssn), some personal information like name, address, and so on, as well as loan specific information – loan amount, etc.
The web service, which is responsible for receiving the loan requests is a JSR-181 based annotated web service. An annotated web service let's you take any pojo and expose the methods as being capable of receiving requests. The class looks as follows:
package
org.JBoss.soa.esb.samples.trailblazer.web;
import javax.jws.WebMethod;
import javax.jws.WebService;
import javax.jws.soap.SOAPBinding;
import org.apache.log4j.Logger;
import org.JBoss.soa.esb.samples.trailblazer.loanbroker.LoanBroker;
/**
* The Loan broker web service, which will handle a loan request.
*/
@WebService(name = "LoanBrokerWS",
targetNamespace = "")
@SOAPBinding(style = SOAPBinding.Style.RPC)
public class LoanBrokerWS
{
private static Logger logger = Logger.getLogger(LoanBrokerWS.class);
@WebMethod
// method name is .NET friendly
public void RequestLoan(WebCustomer customer) { logger.info("WebCustomer received: \n" + customer);
LoanBroker broker = new LoanBroker(); broker.processLoanRequest(customer);
}
}
JSR-181 annotated pojo web services are a very easy and powerful way to expose plain old java classes as web services. The JBossESB does not have built in support for web services yet, but since we are working in Java, there is no reason why you cannot combine your own web services with the JBossESB services, which is what was done in the trailblazer. The class above is the server side web service. You still need to provide the client, the JSP in this case the client stubs to communicate with the web service. JBossIDE which has a web service client side generator to create these classes if you are looking for a tool to use for this.
The most important piece in the web service, is the line which invokes the LoanBroker object, and passes a customer object for processing.
The Loan Broker is a standard java application, which makes use of the services available in the JBossESB to get data to and from the banks, and then finally back to the customer as an email response.
Let's look first at the ESB components required for processing a loan request.
In this release, the Bank bundled include a JMS based bank, and a File based bank. Each has it's own unique data requirements and data formats. These are external entities. In a real production world scenario, these might be internal systems, accessible within your own network, or they may be external providers which you will need to communicate with through some protocol. Needless to say, for this example, we are not focusing on aspects like security, authentication, and other concerns which you would most certainly face. We are focusing solely on the JBossESB components and some sample configurations which you could use to create a similar scenario.
JBossESB has a concept of “Gateway” and “ESB Aware” services. ESB Aware services are able to communicate with the ESB directly using native APIs found in the ESB. These APIs for instance require that you use a Message object. Since the LoanBroker is java based, and has access to the JBossESB APIs, it will be an ESB Aware service. The banks on the other hand, are NON-ESB Aware services. They have no idea, nor should they know anything about the ESB. It is the job of the services in the ESB to facilitate communication to and from the banks, as well as data transformation to/from and so on. These services (the Banks) will interact with the JBossESB through what we call Gateway Services. To read more on the differences between the two, please see the Programmer's Guide.
Let's look at just how you configure the various services in JBossESB. Inside the <TRAILBLAZER_ROOT>/esb/conf/jbossesb.xml you will see the following deployed services:
<?xml
version =
"1.0" encoding
=
"UTF-8"?>
<JBossesb
xmlns=""
parameterReloadSecs="50">
<providers>
<jms-provider
<jms-bus
<jms-message-filter
</jms-bus>
<jms-bus
<jms-message-filter
</jms-bus>
<jms-bus
<jms-message-filter
</jms-bus>
</jms-provider>
</providers>
<services>
<service category="trailblazer" name="creditagency" description="Credit Agency Service">
<listeners>
<jms-listener
</listeners>
<actions>
<action
xlass="org.JBoss.soa.esb.samples.trailblazer.actions.CreditAgencyActions" process="processCreditRequest" name="fido"> </action>
</actions>
</service>
>
</services>
</JBossesb>
The config above uses a configuration structure which is described in much more detail in Chapter 5 of the JBossESB Programmer's Guide. The config for the TB describes several communication providers, listed in the <providers> section, all consisting of JMS in this example, and using JBossMQ as the actual JMS transport. Next, several <services> are listed, starting with the creditagency, and the various JMS bank services for sending and receiving data from the banks. The banks have their own config files, which must be configured to use and reply on the queues described above. Please see <TRAILBLAZER_ROOT>/banks/bank.properties.
The LoanBroker makes use of the services described above, in the following lines of code:
public
void
processLoanRequest(WebCustomer wCustomer){
Customer customer = getCustomer(wCustomer);
//keep the customer in a file someplace for later use, if needed CustomerMasterFile.addCustomer(String.valueOf(customer.ssn), customer); //step 1 - send to credit agency for credit score if available //uses 2way courier for a response
sendToCreditAgency(customer);
//step 2 - send to JMS Bank
sendToJMSBank(customer);
}
The sendToCreditAgency is where an interaction with the ESB takes place. Please see the code for more detailed listing. The sections below illustrate the important parts:
courier.setReplyToEpr(replyEPR);
//wait for 5 seconds then give up
replyMessage = courier.pickup(5000);
We set the courier's ReplyToEpr with an EPR we create, then we tell the courier to pickup the response for us, waiting a maximum of 5 seconds. For more detailed information on how Couriers and 2WayCourier's work, please see the Programmer's Guide.
The interaction with the Banks uses a simpler, asynchronous API – there is no waiting for a reply from the banks. The bank replies come in on their own queue, and the GatewayService defined for that purpose fires it off to an action class to handle the response. See the listing from the jbossesb.xml:
>
The important element above is, that the org.JBoss.soa.esb.samples.trailblazer.actions.BankResponseActions is the class that is defined as being responsible for handling the bank JMS responses. The property process=”processResponseFromJMSBank” tells the service which method in this class will actually do the work. Below is a code snippet from this method:
public
Message processResponseFromJMSBank(Message message) throws
Exception {
_message = message;
_logger.debug("message received: \n" + new String(message.getBody().getContents()));
//get the response from the bank and set it in the customer
ConfigTree tree = ConfigTree.fromXml(new String(message.getBody().getContents()));
String quoteID = tree.getFirstTextChild("quoteId");
String rate = tree.getFirstTextChild("interestRate");
String errorCode = tree.getFirstTextChild("errorCode");
String ssn = tree.getFirstTextChild("customerUID");
String email = tree.getFirstTextChild("customerEmail");
ProcessEmail procEmail = new ProcessEmail(email, quoteID, rate, errorCode, ssn);
procEmail.sendEmail();
return message;
}
The code above retrieves the contents of the payload from the Message.getBody().getContents(). Those contents are then used to populate some strings, which are eventually used to fill in the email which goes back to the customer.
The sequence diagram below illustrates the full set of calls that are made in the trailblaz- | http://docs.jboss.org/jbossesb/docs/4.2MR3/manuals/html/TBGuide.html | CC-MAIN-2015-32 | refinedweb | 1,829 | 55.95 |
// Licensed under the terms of the GNU GPL, version 2
//
#include <stdio.h>
#include <string.h>
#include <stdlib.h>
#include <unistd.h>
#include <sys/stat.h>
#include <sys/types.h>

#include "tools.h"

static u8 *nor = NULL;

static void new_dir(const char *n)
{
	mkdir(n, 0777);
	if (chdir(n) < 0)
		fail("chdir");
}

static void do_toc(u8 *ptr)
{
	u32 n_entries;
	u32 i;
	u8 *p;
	u8 *tmp;
	u64 size;
	char name[0x20];

	n_entries = be32(ptr + 0x04);
	p = ptr + 0x10;

	for(i = 0; i < n_entries; i++) {
		memcpy(name, p + 16, 0x20);

		if (strncmp(name, "asecure_loader", 0x20) == 0) {
			new_dir("asecure_loader");
			do_toc(ptr + be64(p));
			if (chdir("..") < 0)
				fail("chdir(..)");
		} else if (strncmp(name, "ros", 3) == 0) {
			new_dir(name);
			do_toc(ptr + be64(p) + 0x10);
			if (chdir("..") < 0)
				fail("chdir(..)");
		} else {
			tmp = ptr + be64(p);
			size = be64(p + 0x08);
			if (be32(tmp + 0x10) == 0x53434500) {
				tmp += 0x10;
				size -= 0x10;
			}
			memcpy_to_file(name, tmp, size);
		}
		p += 0x30;
	}
}

static void modifyimage(u8 *ptr)
{
	u32 i;
	u8 temp;

	// ProgSkeet reads the NOR sixteen bits at a time, so the dump comes
	// out byte-swapped; swap each pair of bytes back (PS3 NOR is 16MB)
	for(i = 0; i < 0x1000000; i += 2) {
		temp = ptr[i];
		ptr[i] = ptr[i + 1];
		ptr[i + 1] = temp;
	}
}

int main(int argc, char *argv[])
{
	if (argc < 3)
		fail("usage: norunpack dump.b directory [byteswap]");

	nor = mmap_file(argv[1]);

	new_dir(argv[2]);

	// the optional third argument enables the ProgSkeet byte-swap
	if (argc > 3)
		modifyimage(nor);

	do_toc(nor + 0x400);

	return 0;
}
16MB
[] Big block [x] Raw
Pages per Block: 32
Block Count: 1024
256MB
[X] Big block [x] Raw
Pages per Block: 64
Block Count: 2048
512
[X] Big block [x] Raw
Pages per Block: 64
Block Count: 4096
• Please Register at PS3News.com or Login to make comments on Site News articles. Thanks!
107 Comments - Go to Forum Thread »
ProgSkeet Diagram and Flasher Changelog: 08/04/11
ProgSkeet Diagram and Flasher Changelog: 08/05/11
Also from uf6667 (twitter.com/uf6667/status/99504102089830400) comes an updated NorUnpack (pastebin.cc/index.php?show=60) to support ProgSkeet dumps - Usage: "./norunpack 1"
ProgSkeet Flasher Changelog 08/07/11:
ProgSkeet Flasher Changelog 08/08/11:
ProgSkeet Flasher Changelog 08/11/11:
Note: Some NOR flash devices have bad sectors (usually just a few bytes, but enough for entire systems not to work). These will be reported in C:\ProgSkeet.log If you have such a device (for example, I heard this happens with RSOD on PS3), you should relocate the sectors to somewhere else or replace the NOR flash device.
110811-A:
110811-B:
110811-C:
110811-D:
To quote, roughly translated: LS Team is pleased to present a new review of products for your changes to consoles. Today we will test the promising ProgSkeet, as we already reported in our news
We will start with a small reminder: ProgSkeet is a program to dump all the flash and flash memory (NAND / NOR), so for us it will be more designed for Xbox 360, Wii, PS3 already supported. The team said that other devices will be supported later as satellite receivers dreambox. So this will allow you to retrieve an image or flash memory of your console, but also debricker your console or to the Dual-NAND.
This kind of product already exists but ProgSkeet Team put forward a speed unmatched dump before. We will now see if the team has delivered.
s3nint3!s3nint3!
The progskeet comes Ready To Use, and is already programmed but is easily updatable via Injectus or Proflash 3 (support JTAG programming cables should also be created) for future updates to the microcontroller. At first sight, one notices that the progskeet is a quality product that is not at all "cheap" with its almost 2mm PCB and solder treated.
s3nint3!
Like any device connecting to the PC, install the drivers available on the official site, nothing complicated.
s3nint3!s3nint3!s3nint3!s3nint3!
We must now verify that your ProgSkeet works well for this, we use the software and we do UF6667 "test shorts."
s3nint3!
If you have "No Short found! Device Properly working" brand, your controller is functional and the drivers are installed correctly.
s3nint3!
You can now start the welds.
s3nint3!s3nint3!s3nint3!
Now connect your PC to your Progskeet powered console and click a NAND. You should see the ID down with XXXX XXXX changed the ID of your NAND. In my case, a Jasper 512 is a Samsung nand with ID EC DC 10 95 54.
s3nint3!
We must now choose what you want to dump, you can retrieve the information in the manufacturers datasheet, just look at the reference of the chip and do a google search. I put what it takes to set up Xbox.
After just under 10 minutes, my 512MB nand was dumped, can be seen at an average speed of 0.88mo / s. For comparison, to dump a module with 16MB USB SPI whatever, it takes more than 6min ...
s3nint3!
So we can say that at speed, there is no photo ... the team has clearly delivered on its promise.
s3nint3!
Now a little more balance with the lowest for the 360:
Cons:
Pros:
Acknowledgements:
Just updated the DIAGRAMS COLLECTION RAR file in download area.
Programmer now works with NOR Flash too. It can flash partially in less than a minute
ProgSkeet Diagram and Flasher Changelog: 08/03/11
I recently got the opportunity to ask a few questions to hacker uf6667 about a project recently announced as nearing completion: ProgSkeet or "The Last Mile", a device that has the ability to bypass the PS3′s security and downgrade it's firmware.
Hacking and homebrew on the PS3 have been a contentious issue ever since Sony removed the ability to install Linux on the PS3 when hacker Geohot took advantage of it and broke through the PS3′s security. This sparked what has been a tumultuous couple of years for Sony and gamers, seeing the PS3′s data encryption key released to the public and PSN being brought offline.
But it isn't over yet, as new devices, such as ProgSkeet begin to pop-up, but what does the developer of this latest device have to say?
What exactly is ProgSkeet and what does it enable users to do?
It's a device that allows you to read and program non-volatile memory (NVM), in the strictest meaning... What can be done with it? Well, say you have this new car and want to mess with its calibration data - pop the hood, find the ROM (NAND/NOR, supports both), solder it to it, hook it up to your computer via USB, fire up the software, do the work. This is NOT limited to cars - it can also program flash that is contained in PS3, PSP, PSP Go, Wii and Xbox 360. Large block, small block, doesn't matter!
For what purpose was ProgSkeet designed and developed?
Programmers used in the industry cost thousands of euros and there is no cheaper solution to this problem... until now. It can match both performance as well as low price range to complete the purpose... Sure it'll kill some businesses because their engineers are still paying for student loans but hey, what do you care?
How does ProgSkeet compare to say, the PS Jailbreak Dongle or Infectus?
Let me make one thing clear: it was NOT designed for PS3 in mind. It's just lucky that PS3 supports NVM. Comparing it to PS Jailbreak is like comparing apples to oranges.
But back to the subject - Infectus is good for what it does but it's limited in both speed as well as support. See, Infectus can reach speeds like 50kB/s whereas ProgSkeet can do 1MB/s without hassle. Plus, the customer is king - in this case, the user can either choose one of the available configurations for flashing or add new ones (MoviNAND for example) - you no longer have to wait for a team to update their software to accustom your needs - you can just read the datasheet by yourself and add the configuration.
The question is: do you want a product that is never updated, limited in capabilities and slow (Infectus) or do you want something that's fast, has steady support and keeps adding features?
PS3 (NAND+NOR), 360 (NAND, big block + small block), Wii (big block) are already supported. You can add more if they are not yet supported. Just need like 10 pieces of information from the datasheet.
When did you start development on ProgSkeet and why?
I am NOT "the cook" but he tells me he started in like 2007 or something for a school project with a USB core of his own. This year he introduced it to me and I started working on the software, suggesting changes in the hardware, etc. I have sht loads of things I can flash but no device to do the job. I don't fancy reversing a TAP for each and every device. Too much hassle.
When will ProgSkeet release and how much will it cost?
Production will be done July 15th (I was told), so expect it very soon. Software just needs some ironing here and there but will be functional by the time it can be ordered.
Expect the cost to be less than 50EUR (I don't know the exact cost but for sure not over 50EUR).
After seeing Sony's treatment of GeoHot, are you worried that they will come after you as well?
Well, this is a complicated question. GeoHot "extorted" Sony on paper ("If you want it secure bla bla bla hire me") and accepted donations at some point. That's him making money after a failed extortion. In this case, we don't give a sht about Sony. I don't even play PS3. If you want to use it on PS3, go right ahead, then each and every producer of flashers will be prosecuted or in this case: persecuted.
"The cook" was *explicitly* advised to remove the sniffing abilities to get rid of any kind of troubles. Originally, it was intended as a programmer, logic analyzer, and pattern generator - all very legal in their own purpose - however, in a potential civil lawsuit, this could escalate to criminal lawsuit, due to European/International IP laws, specifically concerning intended circumvention of TPMs. It's like distributing hunting rifles without a licence: you're bound to get fcked when the first idiot commits homicide/suicide. Don't make it possible and you're free from worries.
In conclusion, if you're into homebrew: be excited, but if you're just interested in playing games then there's nothing to worry about. | http://www.ps3news.com/ps3-hacks-jailbreak/progskeet-v1-1-crystal-blue-limited-edition-for-ps3-arrives/page-18/ | CC-MAIN-2014-41 | refinedweb | 1,688 | 68.7 |
$ cnpm install react-ilib
React-ilib is a library of React components that wrap ilib classes to make it easy to use iLib within React apps.
There are various types of components in this library:
Formatter components. These use the ilib formatter classes to format various things locale-sensitively.
Input components. Components that implement various locale-sensitive widgets that allow users to input locale-sensitive information. These use the ilib formatter classes to glean information about the fields required to create a set of HTML form input elements.
Localization components. Components that are used to translate strings in a React-idiomatic way and specify locales for all ilib-based components.
This library will eventually cover all of the ilib formatter classes, but currently it is in development, and it does not cover them all yet.
The address formatter component is called
AddressFmt.
import AddressFmt from 'react-ilib/lib/AddressFmt'; <AddressFmt address={Address} separator={<br/>}
Format an iLib Address as a string.
The date/time formatter component is called
DateFmt.
import DateFmt from 'react-ilib/src/DateFmt'; <DateFmt date={date-like}
Format an iLib Address as a string. Props can contain:
The list formatting component is called
ListFmt.
import ListFmt from 'react-ilib/lib/ListFmt'; <ListFmt list={Array.<string>}
Format an array of strings as a list
The unit formatting component is called
UnitFmt. This formats that sizes of various measurements such as length, mass, digital storage size, etc.
import UnitFmt from 'react-ilib/lib/UnitFmt'; <UnitFmt measure={Measurement} locale="string" wrapper={<span/>} className="string" id="string" length="string" style="string" autoScale={boolean} autoConvert={boolean} usage="string" maxFractionDigits={number} minFractionDigits={number} significantDigits={number} roundingMode="string"
Format an ilib Measurement as a string. Props can contain:
An example of using this component to format a measurement:
import MeasurementFactory from 'ilib-es6/lib/MeasurementFactory'; import UnitFmt from 'react-ilib/lib/UnitFmt'; let m = MeasurementFactory({ measure: 24, unit: "mph" }); // this will convert to metric for Germany and format as a vehicle speed let str = (<span>Die Geschwindigkeit des Autos ist <UnitFmt locale="de-DE" measure={m} autoScale={true} autoConvert={true}.</span>); // str is now "<span>Die Geschwindigkeit des Autos ist 39 Kilometer pro Stunde.</span>"
TBD
<LocaleContext locale="string" rb={ResBundle}> <App/> </LocaleContext>
<LocaleDataProvider locale="string" translationsDir="string">
To translate text to another language inside of your React app, you can use the
Translate component.
import Translate from 'react-ilib/lib/Translate'; import Plural from 'react-ilib/lib/Plural'; import Parameter from 'react-ilib/lib/Parameter'; <Translate id="string" description="string" locale="string" wrapper={<span/>} <Parameter name="string" description="string" value={any} wrapper={null}
Translate a string using iLib's ResBundle class. The string to translate appears in the body of the component.
It is highly recommended that entire sentences and phrases are wrapped with a Translate component, rather than individual snippets of text because it is difficult for the translators to know the grammatical context for those little snippets. Whole sentences and phrases are much easier to translate properly and produce much higher quality products.
To this end, the body of the Translate component may contain HTML or other components in the middle of it to allow you to wrap the whole sentence. If the string contains HTML or subcomponents, then those tags will be copied into the appropriate spot in the translated string before the final translated output is generated.
In order to allow translators to move these components around as required by the grammar of their target language, the components are hidden behind XML-like codes to create a coded string. This has a number of advantages:
The translators cannot mess up the syntax of the component props or HTML attributes
The translators are not tempted to translate things that they shouldn't, such as the names of CSS classes.
The translators cannot inject nefarious javascript code into the middle of the translated string and thereby perform an injection attack.
The engineers may change the contents of these tags at will without causing a retranslation.
Here is an example string with subcomponents and the resulting source string to translate:
var str = ( <Translate> This is a <Link to={url}>link to another website</Link> in the middle of the string. </Translate> );
The extracted string would be:
This is a <c0>link to another website</c0> in the middle of the string.
The "c" stands for component (XML tags have to begin with a letter), and the number is the index of the component in the string. Translated to German, this might be:
Dies ist ein <c0>Link zu einer anderen Website</c0> in der Mitte der Zeichenfolge.
It is also highly recommended that engineers fill out the description prop for every string. The value of this prop is sent along with the string to the translators, and should contain a description of how the string is used in the UI, what the intent was, any grammatical hints, and anything else that a translator may need to know about the string without seeing the UI for themselves.
Examples of good descriptions:
Plural components give a string to use for a particular plural category. The Translate component will pick the right plural string to use based on the value of its count prop, which is required if any plural strings appear in the body of the Translate component. Plural components may only appear inside of Translate component or else they will not do anything other than rendering their string, which is probably not what you want. The category prop is required for all plurals. The value of the category prop should be one of "zero", "one", "two", "few", "many", and "other". These are defined in the Unicode CLDR description of plural category rules. Additionally, the category can be any string that the ilib IString.choiceFormat method accepts.
For English source strings, only the "one" and the "other" categories are required. Translators will add strings for other types of categories if necessary for the grammar of their target language. For example, the Russian translator will translate for the "one", "two", "few", and "other" categories, and the Translate component will choose the correct one given the value of its count prop.
Parameter components are placeholders for values that get substituted into the string after the translated string is retrieved. The Parameter component renders the value of its value prop into the given spot in the string.
Strings can be extracted from your application using the ilib localization tool. The localization tool (loctool for short) can search through a project to find js and jsx files to extract strings from, and output the results into XLIFF format files that can be sent to translators. The resulting translated XLIFF files can then be used to generate ilib resource files in js format. These files can then be used along with the ResBundle class to load in translations.
<div class="mainbody fxcs"> <Header> <Translate description="Main body header"> Files to Upload </Translate> </Header> </div>
Notes:
<div class="mainbody fxcs"> <Translate values={{num: fileCache.cntReady}}> Number of files in cache: <Parameter name="num"/> </Translate> </div>
Notes:
<div class="mainbody fxcs"> <Translate count={fileCache.cntReady} values={{cacheName: fileCache.cacheName}}> <Plural category="one">There is <Parameter name="count"/> file in the <Parameter name="cacheName"/> cache.</Plural> <Plural category="other">There are <Parameter name="count"/> files in the <Parameter name="cacheName"/> cache.</Plural> </Translate> </div>
Notes:
If you want to use this library, you must include ilib in your application's package.json with a version higher than 14.0.0. Ilib versions 13.X and earlier will not work.
If you are using React with webpack as its bundler, you will need to use the ilib webpack loader and ilib webpack plugin to ensure that all the locale data you need is available in your webpack bundle. See the documentation in the ilib webpack loader for more details.
This library has not been tested with react-native yet, and there is no guarantee that any of it will work under react-native. If you do get it working for yourself, please let us know. Or better yet, send us a PR on github!. | https://developer.aliyun.com/mirror/npm/package/react-ilib/v/0.1.0 | CC-MAIN-2020-40 | refinedweb | 1,351 | 52.49 |
timeit – Time the execution of small bits of Python code.¶
The timeit module provides a simple interface for determining the execution time of small bits of Python code. It uses a platform-specific time function to provide the most accurate time calculation possible. It reduces the impact of startup or shutdown costs on the time calculation by executing the code repeatedly.
Module Contents¶
timeit defines a single public class, Timer. The constructor for Timer takes a statement to be timed, and a setup statement (to initialize variables, for example). The Python statements should be strings and can include embedded newlines.
The timeit() method runs the setup statement one time, then executes the primary statement repeatedly and returns the amount of time which passes. The argument to timeit() controls how many times to run the statement; the default is 1,000,000.
Basic Example¶
To illustrate how the various arguments to Timer are used, here is a simple example which prints an identifying value when each statement is executed:
import timeit # using setitem t = timeit.Timer("print 'main statement'", "print 'setup'") print 'TIMEIT:' print t.timeit(2) print 'REPEAT:' print t.repeat(3, 2)
When run, the output is:
$ python timeit_example.py TIMEIT: setup main statement main statement 1.90734863281e-06 REPEAT: setup main statement main statement setup main statement main statement setup main statement main statement [9.5367431640625e-07, 9.5367431640625e-07, 1.1920928955078125e-06]
When called, timeit() runs the setup statement one time, then calls the main statement count times. It returns a single floating point value representing the amount of time it took to run the main statement count times.
When repeat() is used, it calls timeit() severeal times (3 in this case) and all of the responses are returned in a list.
Storing Values in a Dictionary¶
For a more complex example, let’s compare the amount of time it takes to populate a dictionary with a large number of values using a variety of methods. First, a few constants are needed to configure the Timer. We’ll be using a list of tuples containing strings and integers. The Timer will be storing the integers in a dictionary using the strings as keys.
# {{{cog include('timeit/timeit_dictionary.py', 'header')}}} import timeit import sys # A few constants range_size=1000 count=1000 setup_statement="l = [ (str(x), x) for x in range(%d) ]; d = {}" % range_size # {{{end}}}
Next, we can define a short utility function to print the results in a useful format. The timeit() method returns the amount of time it takes to execute the statement repeatedly. The output of show_results() converts that into the amount of time it takes per iteration, and then further reduces the value to the amount of time it takes to store one item in the dictionary (as averages, of course).
# {{{cog include('timeit/timeit_dictionary.py', 'show_results')}}} def show_results(result): "Print results in terms of microseconds per pass and per item." global count, range_size per_pass = 1000000 * (result / count) print '%.2f usec/pass' % per_pass, per_item = per_pass / range_size print '%.2f usec/item' % per_item print "%d items" % range_size print "%d iterations" % count print # {{{end}}}
To establish a baseline, the first configuration tested will use __setitem__(). All of the other variations avoid overwriting values already in the dictionary, so this simple version should be the fastest.
Notice that the first argument to Timer is a multi-line string, with indention preserved to ensure that it parses correctly when run. The second argument is a constant established above to initialize the list of values and the dictionary.
# {{{cog include('timeit/timeit_dictionary.py', 'setitem')}}} # Using __setitem__ without checking for existing values first print '__setitem__:\t', sys.stdout.flush() # using setitem t = timeit.Timer(""" for s, i in l: d[s] = i """, setup_statement) show_results(t.timeit(number=count)) # {{{end}}}
The next variation uses setdefault() to ensure that values already in the dictionary are not overwritten.
# {{{cog include('timeit/timeit_dictionary.py', 'setdefault')}}} # Using setdefault print 'setdefault:\t', sys.stdout.flush() t = timeit.Timer(""" for s, i in l: d.setdefault(s, i) """, setup_statement) show_results(t.timeit(number=count)) # {{{end}}}
Another way to avoid overwriting existing values is to use has_key() to check the contents of the dictionary explicitly.
# {{{cog include('timeit/timeit_dictionary.py', 'has_key')}}} # Using has_key print 'has_key:\t', sys.stdout.flush() # using setitem t = timeit.Timer(""" for s, i in l: if not d.has_key(s): d[s] = i """, setup_statement) show_results(t.timeit(number=count)) # {{{end}}}
Or by adding the value only if we receive a KeyError exception when looking for the existing value.
# {{{cog include('timeit/timeit_dictionary.py', 'exception')}}} # Using exceptions print 'KeyError:\t', sys.stdout.flush() # using setitem t = timeit.Timer(""" for s, i in l: try: existing = d[s] except KeyError: d[s] = i """, setup_statement) show_results(t.timeit(number=count)) # {{{end}}}
And the last method we will test is the (relatively) new form using “in” to determine if a dictionary has a particular key.
# {{{cog include('timeit/timeit_dictionary.py', 'in')}}} # Using "in" print '"not in":\t', sys.stdout.flush() # using setitem t = timeit.Timer(""" for s, i in l: if s not in d: d[s] = i """, setup_statement) show_results(t.timeit(number=count)) # {{{end}}}
When run, the script produces output similar to this:
$ python timeit_dictionary.py 1000 items 1000 iterations __setitem__: 107.40 usec/pass 0.11 usec/item setdefault: 228.97 usec/pass 0.23 usec/item has_key: 183.76 usec/pass 0.18 usec/item KeyError: 120.74 usec/pass 0.12 usec/item "not in": 92.42 usec/pass 0.09 usec/item
Those times are for a MacBook Pro running Python 2.6. Your times will be different. Experiment with the range_size and count variables, since different combinations will produce different results.
From the Command Line¶
In addition to the programmatic interface, timeit provides a command line interface for testing modules without instrumentation.
To run the module, use the new -m option to find the module and treat it as the main program:
$ python -m timeit
For example, to get help:
$ python -m timeit -h Tool --: separate options from statement, use when statement starts with -.
The statement argument works a little differently than the argument to Timer. Instead of one long string, you pass each line of the instructions as a separate command line argument. To indent lines (such as inside a loop), embed spaces in the string by enclosing the whole thing in quotes. For example:
$ python -m timeit -s "d={}" "for i in range(1000):" " d[str(i)] = i" 1000 loops, best of 3: 289 usec per loop
It is also possible to define a function with more complex code, then import the module and call the function from the command line:
def test_setitem(range_size=1000): l = [ (str(x), x) for x in range(range_size) ] d = {} for s, i in l: d[s] = i
Then to run the test:
$ python -m timeit "import timeit_setitem; timeit_setitem.test_setitem()\ " 1000 loops, best of 3: 417 usec per loop | https://pymotw.com/2/timeit/index.html | CC-MAIN-2017-22 | refinedweb | 1,152 | 65.93 |
Today we’re going to explore a wonderful feature that the Python library offers to you out of the box: the serialization. To serialize an object means to transform it in a format that can be stored, so as to be able to deserialize it later, recreating the original object from the serialized format. To do all these operations we will use the pickle module.
Pickling
Pickling is the name of the serialization process in Python. By pickling, we can convert an object hierarchy to a binary format (usually not human readable) that can be stored. To pickle an object we just need to import the pickle module and call the dumps() function passing the object to be pickled as a parameter.
For example:) print ("Would you like to see her pickled? Here she is!") print (my_pickled_mary)
So, in the example above, we have created an instance of a sheep class and then we’ve pickled it, transforming our sheep instance into a simple array of bytes.
It’s been easy, hasn’t it?
Now we can easily store the bytes array on a binary file or in a database field and restore it from our storage support in a later time to transform back this bunch of bytes in an object hierarchy.
Note that if you want to create a file with a pickled object, you can use the dump() method (instead of the dumps() one) passing also an opened binary file and the pickling result will be stored in the file automatically.
To do so, the previous example could be changed like this:) binary_file = open('my_pickled_mary.bin',mode='wb') my_pickled_mary = pickle.dump(mary, binary_file) binary_file.close()
Unpickling
The process that takes a binary array and converts it to an object hierarchy is called unpickling.
The unpickling process is done by using the load() function of the pickle module and returns a complete object hierarchy from a simple bytes array. Let’s try to use the load function on the example above:
import pickle class Animal: def __init__(self, number_of_paws, color): self.number_of_paws = number_of_paws self.color = color class Sheep(Animal): def __init__(self, color): Animal.__init__(self, 4, color) # Step 1: Let's create the sheep Mary mary = Sheep("white") # Step 2: Let's pickle Mary my_pickled_mary = pickle.dumps(mary) # Step 3: Now, let's unpickle our sheep Mary creating another instance, another sheep... Dolly! dolly = pickle.loads(my_pickled_mary) # Dolly and Mary are two different objects, in fact if we specify another color for dolly # there are no conseguencies for Mary dolly.color = "black" print (str.format("Dolly is {0} ", dolly.color)) print (str.format("Mary is {0} ", mary.color))
In this example you can see that after having pickled the first sheep object (Mary) we have unpickled it to another variable (Dolly) and so we have — in a sense — cloned Mary to create Dolly (Yes, we’re cloning sheep… lol!).
It goes without saying that changing an attribute value on one of these objects the other one remain untouched because we haven’t just copied the reference to the original object, we have actually cloned the original object and its state to create a perfect copy in a completely different instance.
Note: in this example we have cloned an object using the trick of pickling it and unpickling the resulting binary stream in another variable.
This is ok and there are several languages where this approach could even be advised, but if you need to clone an object in Python it’s probably better to use the copy module of the standard lib. Since it’s designed to clone objects, it works far better.
Some notes about pickling
All I’ve said so far is just to whet your appetite because there’s a lot more we could say about pickling. One important thing to be known is that there are several types (or protocols) of pickling because this technic is evolving as the language evolves.
So, there are currently 5 protocols of pickling:
According to the official documentation: objects.
Another thing that is important to keep in mind is that not every object is picklable. Some objects (like DB connections, handles to opened files etc…) can’t be pickled and trying to pickle an unpicklable object (or to unpickle an object that is not a valid pickle), a pickle.PickleError exception or one of its subclasses (PicklingError and UnpicklingError) is raised.
For example:
import pickle my_custom_pickle = bytes("this is unpicklable", encoding="UTF-8") # this next statement will raise a _pickle.UnpicklingError my_new_object = pickle.loads(my_custom_pickle)
The problem when you have unpicklable object in the hierarchy of the object you want to pickle is that this prevents you to serialize (and store) the entire object. Fortunately, Python offers you two convenient methods to specify what you want to pickle and how to re-initialize (during the unpickling process) the objects that you haven’t pickled before. These methods are __setstate__() and __getstate__()
For example:
import pickle class my_zen_class: number_of_meditations = 0 def __init__(self, name): self.number_of_meditations = 0 self.name = name def meditate(self): self.number_of_meditations = self.number_of_meditations + 1 def __getstate__(self): # this method is called when you are # going to pickle the class, to know what to pickle state = self.__dict__.copy() # You will never get the Buddha state if you count # meditations, so # don't pickle this counter, the next time you will just # start meditating from scratch :) del state['number_of_meditations'] return state def __setstate__(self, state): # this method is called when you are going to # unpickle the class, # if you need some initialization after the # unpickling you can do it here. self.__dict__.update(state) # I start meditating my_zen_object = my_zen_class("Dave") for i in range(100): my_zen_object.meditate() # Now I pickle my meditation experience print(str.format("I'm {0}, and I've meditated {1} times'", my_zen_object.name, my_zen_object.number_of_meditations)) my_pickled_zen_object = pickle.dumps(my_zen_object) my_zen_object = None # Now I get my meditation experience back my_new_zen_object = pickle.loads(my_pickled_zen_object) # As expected, the number_of_meditations property # has not been restored because it hasn't been pickled print(str.format("I'm {0}, and I don't have a beginner mind yet because I've meditated only {1} times'", my_new_zen_object.name, my_new_zen_object.number_of_meditations))
This was just a brief introduction to the pickle module, for more information about the pickle module visit the official documentation and if you want more information about the “beginners mind”, buy Zen Mind, Beginner’s Mind by Shunryu Suzuki using this sponsored Amazon links and you will help to support The Python corner 🙂
Enjoy! | https://www.thepythoncorner.com/2016/12/object-serialization-in-python/ | CC-MAIN-2020-05 | refinedweb | 1,087 | 53.1 |
dbrown2 at yahoo.com schrieb: > I typically use IDLE for editing and debug. On Windows at least, IDLE > and the standard turtle graphics module do not mix. I think both use > Tkinter. > > For now I use IPython and Jedit when using the turtle module but it's > not a great solution for me. Is there any known work-around to use > turtle with IDLE? If not is there a planned fix for the problem. I > find turtle is a convenient way to do simple graphics without having > to install external modules or deal with event handlers or windowing > details. > > -- David With Python 2.2 and IDLE 0.8 it was no problem to use turtle.py (and Tkinter in general) interactively, because tha application and IDLE itself ran in the same process (using the same mainloop()). As this turned out to be a problem when writing programs of some complextiy the decision was made to change the way IDLE runs programs, namely in their own processes with their own namespaces. *Un*fortunately this proved to be a problem for interactively using and exploring Tkinter. ==== >>> in the Shell window. This way you can execute turtle graphics commands immediately and also explore and construct GUIs with Tkinter interactively. But keep in mind, that this makes program development more error prone. Normally this does not matter when developing tiny turtle graphics programs. I use to use two versions of IDLE (and I have two coresponding icons on my desktop). The one with the -n - option for interactive explorations, the other one, configured like it comes out of the box, for developing more complex programs. HTH, Gregor | https://mail.python.org/pipermail/python-list/2004-March/251592.html | CC-MAIN-2017-04 | refinedweb | 273 | 66.44 |
Is This Content Helpful?
We're glad to know this article was helpful.
How can we make this better?
Contact our Support Team
How can I check for null values in the Field Calculator using Python?
Starting at ArcGIS for Desktop 10.1, null values in an attribute table are returned as the string 'None' in the Field Calculator using Python. Knowing this, an if/elif statement can be used to find whether values are null or not. Here is an example script that checks if a field contains null values:
Code:
Expression:
findNulls(!fieldA!)
Expression Type:
PYTHON_9.3
Code Block:
def findNulls(fieldValue):
if fieldValue is None:
return "null values"
elif fieldValue is not None:
return "no nulls here" | http://support.esri.com/en/technical-article/000011740 | CC-MAIN-2017-43 | refinedweb | 120 | 76.82 |
Coding convention
The chapter presents coding convention used in the implementation files of Phoenix-RTOS.
File label
Each operating system source file is marked with label with the following structure.
/* * Phoenix-RTOS * * Operating system kernel * * pmap - machine dependent part of VM subsystem (ARM) * * Copyright 2014-2015 Phoenix Systems * Copyright 2005-2006 Pawel Pisarczyk * Author: Pawel Pisarczyk, Radoslaw F. Wawrzusiak, Jacek Popko * * This file is part of Phoenix-RTOS. * * %LICENSE% */
Main label blocks are separated with empty line. The first label block informs that file is the part of Phoenix-RTOS operating system. In next block the information about the operating system module is provided. In this example, the file belongs to operating system kernel. Third label block describes the file functionality. In presented example label, the file implements
pmap interface - the hardware dependent part of memory management subsystem for managing the MMU or MPU (part of HAL). Fourth label block presents copyright notices and authors of the file. Newest copyrights are located on the top. Copyrights are associated with dates informing about the development periods separated with comas. In the example label the file was developed in years 2014-2015 and in the earlier period of 2005-2006. Presented file has three authors sorted according to the importance of their contribution. All names are presented. Next block contains the information that file belongs to the operating system project. The %LICENSE% macro is used to inject the license conditions.
Labels in each file should be constructed according to presented rules. Modification of these rules is not allowed.
Indentation
Code indentation is based on tabulator. It is not allowed to make indentation with space character. The source code used for development tests (e.g. printf debug) should be entered without indentation. The following code presents correctly formatted code with one line (
lib_printf) entered for debug purposes. The inserted line should be removed in the final code.
int main(void) { _hal_init(); hal_consolePrint(ATTR_BOLD, "Phoenix-RTOS microkernel v. " VERSION "\n"); _vm_init(&main_common.kmap, &main_common.kernel); _proc_init(&main_common.kmap, &main_common.kernel); _syscalls_init(); lib_printf("DEBUG: starting first process...\n"); /* Start init process */ proc_start(main_initthr, NULL, (const char *)"init"); /* Start scheduling, leave current stack */ hal_cpuEnableInterrupts(); hal_cpuReschedule(); return 0; }
Source files
Separate source files should be created for each operating system module. Source files are grouped in directories which names correspond to the names of subsystems.
Functions
Functions should be short and not too complex in terms of logic. The function should do one thing only. Functions should be separated with two newline characters.
Function names
Function names should be created according to the following schema
[_]<subsystem>_<functionality> where
<subsystem> is the name of subsystem or file to which function belongs and
<functionality> is the brief sentence explaining the implemented functionality. The subsystem name should be a one word without the underline characters. The functionality could be expressed using many words but without the underlines. In such case camelCase should be used.
For example function for kernel space memory allocation could be named
vm_kmalloc(). Function for creating a new thread could be named
proc_threadCreate().
The underline character at the start of the function name means that function is not synchronized and its usage by two parallel threads demands the external synchronization. Good example of such function is the internal function for adding the node to the red-black tree or the internal function for adding the item to the list.
Functions used internally in C file should be declared as static. Functions used only inside the selected subsystem could be named with the name of the module instead of the name of subsystem. Functions exported outside the subsystem must be named with subsystem name only.
Function length
Function should be not longer than 200 lines of code and not shorter than 10 lines of code.
Variables
Variable should be named with one short words without the underline characters. If one word is not enough for variable name than use camelCase. When defining a variable assign it a value, do no assume that is't value is zero.
Local variables
Local variables should be defined before the function code according to ANSI C 89 standard. The stack usage and number of local variables should be minimized. Static local variables are not allowed.
void *_kmalloc_alloc(u8 hdridx, u8 idx) { void *b; vm_zone_t *z = kmalloc_common.sizes[idx]; b = _vm_zalloc(z, NULL); kmalloc_common.allocsz += (1 << idx); if (idx == hdridx) kmalloc_common.hdrblocks--; if (z->used == z->blocks) { _vm_zoneRemove(&kmalloc_common.sizes[idx], z); _vm_zoneAdd(&kmalloc_common.used, z); } return b; }
Global variables
Global variables should be used only is their absolutely necessary. You should avoid using globally initialised variables. If they are used, global variables can only be placed in common structures. The structure should be named after the system module that implements it, followed by _common. Example notation is shown below.
struct { spinlock_t spinlock; } pmap_common;
Operators
One space character should be used after and before the following binary and ternary operators:
= + - < > * / % | & ^ <= >= == != ? :
No space should be used after the following unary operators:
& * + - ~ !
The
sizeof and
typeofare treated as functions and are to be used with accordance to the following notation:
sizeof(x) typeof(x)
In case of increment
++ and decrement
-- operators following rules should be applied. If they are postfix, no space should be used before the operator. If they are prefix, no space should be used after the operator.
Conditional expressions
Notation of conditional expression is presented below.
if (expr) line 1 if (expr0) { line 1 ... } else if (expr1) { line 1 ... } else { line 1 ... }
A space should be used after a keyword of the conditional instruction. Opening and closing braces should be used only if the body of the conditional instruction is longer than one line. The opening brace should be put in the same line as the keyword of the conditional instruction. The closing brace should be placed after the last line of the conditional istruction in a new line.
Type definition
New types can only be defined if it is absolutely necessary.
When the C programming language is used only C language comments should be used. It means that only
/* */ are allowed and
// are not to be used at all. A two line comment is presented below.
/* * line 1 * line 2 */
One line comment should look like the following example.
/* comment */
All comments should be brief and placed only in essential parts of the code. Comments are not the place to copy parts of the specifications. Nor are they the place to express programmers novel writing skills.
The use of any kind of documentation generator (e.g. doxygen) is strictly forbidden.
Preprocessor
The header with the `#include" preprocessing directive should be placed after the label. The example header notation is shown below.
#include "pmap.h" #include "spinlock.h" #include "string.h" #include "console.h" #include "stm32.h
It is advised not to use MACROS in the code.
It is not advised to use preprocessor conditionals like
#if or `ifdef'. The use of preprocessing conditionals makes it harder to follow the code logic. If it is absolute necessary to use preprocessing conditionals, they ought to be formatted as the following example.
#ifndef NOMMU process->mapp = &process->map; process->amap = NULL; vm_mapCreate(process->mapp, (void *)VADDR_MIN, process_common.kmap->start); /* Create pmap */ p = vm_pageAlloc(SIZE_PAGE, PAGE_OWNER_KERNEL | PAGE_KERNEL_PTABLE); vaddr = vm_mmap(process_common.kmap, process_common.kmap->start, p, 1 << p->idx, NULL, 0, 0); pmap_create(&process->mapp->pmap, &process_common.kmap->pmap, p, vaddr); #else process->mapp = process_common.kmap; process->amap = NULL; process->lazy = 1; #endif
Operating system messages
Following notation for operating system messages should be applied. Message should start from a subsystem name, which should be followed by colon and a message body. An example is shown below.
lib_printf("main: Starting syspage programs (%d) and init\n", syspage->progssz); | http://phoenix-rtos.com/documentation/coding | CC-MAIN-2020-34 | refinedweb | 1,284 | 51.04 |
The QWSInputMethod class provides international input methods for Qt/Embedded. More...
#include <QWSInputMethod>
Inherits QObject..
Use QWSServer::setCurrentInputMethod() to install an input method.
This class is still subject to change.
Constructs a new input method
Destructs the input method uninstalling it if it is currently installed..
Implemented in subclasses to handle mouse presses/releases within the preedit text. The parameter x is the offset within the string that was sent with the InputMethodCompose event. state is either QWSServer::MousePress or QWSServer::MouseRelease
if state < 0 then the mouse event is inside the widget, but outside the preedit text
QWSServer::MouseOutside is sent when clicking in a different widget.
The default implementation resets the input method on all mouse presses.
This event handler is implemented in subclasses to receive replies to an input method query.
The specified property and result contain the property queried and the result returned in the reply.
See also sendIMQuery().
Implemented in subclasses to reset the state of the input method.
The default implementation calls sendIMEvent() with empty preedit and commit strings, if the input method is in compose mode.
Sends an input method event with commit string commitString. This is a convenience function for sendEvent().
If replaceLength is greater than 0, the commit string will replace replaceLength characters of the receiving widget's previous text, starting at replaceFrom relative to the start of the preedit string.
This will cause the input method to leave compose mode.
See also sendEvent, sendPreeditString, and QInputMethodEvent.
Causes a QIMEvent to be sent to the focus widget.
txt is the preedit string if state is QWSServer::IMCompose, or the commit string if state is QWSServer::IMEnd.
If state is QWSServer::IMCompose, cpos is the cursor position within the preedit string, and selLen is the number of characters (starting at cpos) that should be marked as selected by the input widget receiving the event.
Use sendEvent(), sendPreeditString() or sendCommitString() instead.
Sends an input method event with preedit string preeditString and cursor position cursorPosition. selectionLength is the number of characters to be marked as selected (starting at cursorPosition). If selectionLength is negative, the text before cursorPosition is marked.
This is a convenience function for sendEvent()..
See also sendEvent, sendCommitString, and QInputMethodEvent.
Sends an input method query for the specified property.
Reimplement the virtual function queryResponse() to receive responses to input method queries.
See also queryResponse().
Handles update events, including resets and focus changes.
Reimplementations must call the base implementation for all cases that it does not handle itself.
type is a value defined in QWSIMUpdateCommand::UpdateType. | http://doc.trolltech.com/4.0/qwsinputmethod.html | crawl-001 | refinedweb | 423 | 50.73 |
#include <itkImageFileWriter.h>
Inheritance diagram for itk::ImageFileWriter< TInputImage >:
ImageFileWriter writes its input data to a single output file. ImageFileWriter interfaces with an ImageIO class to write out the data. If you wish to write data into a series of files (e.g., a slice per file) use ImageSeriesWriter.
A pluggable factory pattern is used that allows different kinds of writers to be registered (even at run time) without having to modify the code in this class. You can either manually instantiate the ImageIO object and associate it with the ImageFileWriter, or let the class figure it out from the extension. Normally just setting the filename with a suitable suffix (".png", ".jpg", etc) and setting the input to the writer is enough to get the writer to work properly.
ImageIOBase
Definition at line 77 of file itkImageFileWriter.h. | http://www.itk.org/Doxygen16/html/classitk_1_1ImageFileWriter.html | crawl-003 | refinedweb | 138 | 56.35 |
Ok, so here it is (download); I finally had the chance to finish this sample and put it in a form that others can learn from. Various folks both from the Silverlight team and the CRM team were involved in the process of finding ways to make it work (not listing their names here to keep their privacy) so thank you for that. The sample is extremely simple and is just meant to illustrate how to call the CRM service from within Silverlight 2 using the proxy that Silverlight generates. The two key pieces of the sample are:
· A method to wire up the CRM service: It has special logic that injects the CRM authentication token so that CRM can effectively process the calls (see my previous post for a bit more background).
· Usage of DynamicEntity when retrieving data from CRM.
Remember, the sample is provided “AS IS”; I bet there are many others out there that have come up with other patterns/helper-methods to achieve similar results.
Happy coding
Update:
I realized that I didn’t include explicit instruction on how to run this sample (I always assume that my audience is geek enough to understand everything I ramble he he) so here they are:
Requirements:
· Visual Studio 2008 + Silverlight SDK + Silverlight Tools for Visual Studio 2008
· Remember that in order for Silverlight to be able to call a web service the root of the server hosting the service must contain a clientaccesspolicy.xml (or equivalent) file stating that the server allows clients to issue calls to it. The project includes a sample clientaccesspolicy.xml that you can drop in the <crmwebroot> of your server. For more information about these security measures refer to the Silverlight documentation.
·
Instructions:
1. Make sure to drop the policy file (clientaccesspolicy.xml) under the Web root of your CRM server.
2. On page.xaml.cs change the values of the global variables to point to your server:
a. _serverUrl
b. _organizationName
3. Open the project, compile it and run it J
a. If you have troubles compiling/running the project make sure you don’t have namespace conflicts (check this post )
4. If you want to host the sample outside of VS well, you will need to setup your own IIS website and/or put your files under the \ISV\[yoursubfolder] folder (check documentation on the CRM SDK)
PingBack from
im trying out your new example atm
theres one problem
it aborts with a
runtime error at line 453
Sys.InvalidOperationExeption:InitializeError error #2103 in control 'Xaml1' invalid or not well-formed application. check manifest
(translated by myself, if the errormessage isnt too good ;) )
it that a known problem? could it be a problem with blocked access? (sitting behind a corp. firewall which prevents access here and there)
anything i could try out myself? didnt found a way to debug, cause it aborts even before any silverlight code is reached
greetings
Sivlerstarter
hmm it seems like i ve found the(?) problem
the service reference (CrmSdk) is not reachable
could that be the problem?
Silverstarter: Make sure you are setting the url of your server properly; also look at the comments at the top of Page.cs ... you need to have a crossdomain.xml file in the root of your CRM server (I included such file as part of the sample you can can drop it in the root of your crm server). For more information about cross domanin policies and silverlight please refer to the silverlight documentation.
Dear humlezg
I have same problem when try to open any page have SilverLight component this page not on my pc it in any other site
and i tried to open the same pages form other machin and it successed !!! the error appain as Javascript error as "Error: Sys.InvalidOperationException:InitializeError Error#2103 in control'Xaml1':2103An error has occurred"
Also I already install VS2005, VS2008 on the same Machine
Aswanee:
You probably have an older version of the silverlight RUNTIME. Make sure you uninstall any previous beta versions of the Silverlight runtime and install the RTM version. If that doesn't solve your problem I would suggest you consult the silverlight forums/support.
Hi, i just downloaded and tested the project. I think the problem was that if you go into the properties of the project, you'll find that the namespace is "CRMSilverlightDirect" instead of "CrmSilverlightDirect". when I changed that to make it consistent to the rest of the project, it worked great for me! thanks for posting this sample!
I was doing some research on Silverlight 2 & Dynamics CRM integration and found this post by Humberto
There are multiple ways that you can call the CRM Services from Silverlight 2. I’ve
Do you have this up and running somewhere that I could see before going through the download/install.
Hi,
Great post!
If I understand correctly, since Silverlight webservice calls are made asynchronously, theoretically the following could happen:
t0: DisplayAccounts(…); >>> _lastIssuedRequest = RetrieveAccounts;
t1: DisplayContacts(…); >>> _lastIssuedRequest = RetrieveContacts;
t2: Service_ExecuteCompleted (for t0) >>> _lastIssuedRequest == RetrieveContacts; (oops! completed event was for RetrieveAccounts)
t3: Service_ExecuteCompleted (for t1) >>> _lastIssuedRequest == RetrieveContacts
Is there another way to determine which method initiated the request that resulted in the Service_ExecuteCompleted event?
Thanks in advance!
rudgr
Yes you are correct, that can happen when working with asynch calls. I noticed the potential problem when developing the sample but didn't spend much time dealing with it. The only alternative that comes to mind would be to recursively "chain" your calls... that would make them sequential;... the code would be somewhat ugly tough.
Any idea how to get the GUID of the CRM user doing the request? (i.e. WhoAmIRequest?)
Thanks!
The sample is not working on my end with SL2 RTM and VS 2008. After resolving the above mentioned errors, I am getting this error
"The name already exists in the tree"
Yes, the WhoAmIRequest will give the what you are looking for. Just cradt the request (using the proxy that SL generates) and use the .execute method
Gjukawa,
I haven't seen that error before. Sorry :/
Hi
So I fixed the namespace problem as per Jocelyns post and now I get the page loaded and I can enter the seach critera.
Next we get below. Any idea. Is it that crossdomain stuff?
System.ServiceModel.CommunicationException was unhandled by user code
Message=."
StackTrace: CrmSilverlightDirect.CrmSdk.CrmServiceSoapClient.CrmServiceSoapClientChannel.EndExecute(IAsyncResult result)
at CrmSilverlightDirect.CrmSdk.CrmServiceSoapClient.CrmSilverlightDirect.CrmSdk.CrmServiceSoap.EndExecute(IAsyncResult result)
at CrmSilverlightDirect.CrmSdk.CrmServiceSoapClient.OnEndExecute:
After installing the code and ruuing the project i m getting below error:
I have change the URL and Orgarnization name also.
I have updated the service refernce also.
Please let me where i m wrong..
Thanks & Regards,
Mahesh
Mahesh not sure what is it that you are facing but the error seems to indicate that you modified a Xaml piece (in page.xaml perhaps??). Make sure your xaml is valid and that all the VS tools that you have are for SL 2 (I haven't tried it with the SL beta 3 tools).
Regards
VJ APU,
Yes, the exception seems to be pretty self explanatory. Look at for more info
Most of the time there CRM service reference which i have added is throwing a TIMEOUT Error.
If you would like to receive an email when updates are made to this post, please register here
RSS
Trademarks |
Privacy Statement | http://blogs.msdn.com/lezamax/archive/2008/11/05/sample-silverlight-2-and-crm.aspx | crawl-002 | refinedweb | 1,234 | 61.97 |
Inline type hints for polymorphic results of copy_from_string and friends
When dealing with functions that can return anything, such as
copy_from_string, it is sometimes difficult to get the types straight when the returned thing is polymorphic.
For instance:
import StdEnv import dynamic_string Start = (reverse` [1..5], reverse` ['a'..'e']) where reverse_s = copy_to_string reverse (reverse`,_) = copy_from_string reverse_s
This fails with a type error, since
reverse` cannot take a
[Int] in the first element of the tuple and a
[Char] in the second element.
I then tried to make the polymorphism explicit:
cast_rev :: (A.a: [a] -> [a]) -> [a] -> [a] cast_rev f = f Start = (cast_rev reverse` [1..5], cast_rev reverse` ['a'..'e'])
But that fails as well:
Type error [type.icl,6,Start]:"argument 1 of cast_rev : reverse`" cannot unify demanded type with offered type: [E.12 ] -> [E.12 ] [E.16 ] -> [E.16 ]
Would allowing inline type hints be possible, like so:
(reverse` :: A.a: [a] -> [a],_) = copy_from_string reverse_s
(I'm not sure if it is possible/a good idea to overload this syntax for dynamics.)
Or is there a reasonable workaround?
To upload designs, you'll need to enable LFS and have admin enable hashed storage. More information | https://gitlab.science.ru.nl/clean-compiler-and-rts/compiler/-/issues/18 | CC-MAIN-2021-04 | refinedweb | 198 | 65.93 |
Java 1A Practical Class
Table of Contents
The implementation of PatternLife you wrote last week is brittle in the sense that the program does not cope well when input data is malformed or missing. This week you will improve PatternLife using Java exceptions to handle erroneous or missing input data. In addition, you will learn how to read files from disk and from a website and use the retrieved data to initialise a Game of Life.
Try invoking your copy of PatternLife from last week as follows:
java -jar crsid-tick3.jar
java -jar crsid-tick3.jar "Glider:Richard Guy:20:20:1:"
java -jar crsid-tick3.jar "Glider:Richard Guy:twenty:20:1:1:010 001 111"
What does your program print out in each of the above cases? It's likely that in each case your implementation will print out a stack trace which describes an error in the program. Here is a typical stack trace from a student submission:
crsid@machine~> java -jar crsid-tick3.jar "Glider:Richard Guy:20:20:1:"
Exception in thread "main" java.lang.ArrayIndexOutOfBoundsException: 5
        at uk.ac.cam.your-crsid.tick3.Pattern.<init>(Pattern.java:48)
        at uk.ac.cam.your-crsid.tick3.PatternLife.main(PatternLife.java:96)
In this case the input string
"Glider:Richard Guy:20:20:1:" provided on the command line to the program did not conform correctly to the specification described in Workbook 3. The stack trace explains where the error in the program occurred. The first line of the stack trace explains that an exception of the type
java.lang.ArrayIndexOutOfBoundsException occurred when the sixth element of the array was accessed. The remaining lines provide a little history of program execution which led the computer to make the array access which generated the exception. In this case, program execution was taking place at line 48 of
Pattern.java when the error occurred; this location was reached because the constructor on line 48 of Pattern.java was invoked from line 96 of the main method in PatternLife.java. The detail in the stack trace helps the programmer determine why the error occurred and provides clues on how to fix it.
The exception
java.lang.ArrayIndexOutOfBoundsException is actually a class inside the package
java.lang. The
java.lang package is special in Java because, unlike classes in all other packages, the contents of this package are always available in a Java program. Consequently, you can write
ArrayIndexOutOfBoundsException instead of
java.lang.ArrayIndexOutOfBoundsException.
Take a second look at each of the errors generated by your code with the three test cases mentioned at the start of this section. Can you determine which assumptions were made by your program which led to the error occurring? In some cases you can avoid generating errors by checking inputs carefully before using them; in other cases you will need to write additional code to catch the error and handle it. For example, you can probably avoid an exception of the type
ArrayIndexOutOfBoundsException by checking the
length field of the array before accessing particular elements of the array. In contrast, exceptions of the type
NumberFormatException need to be caught and handled appropriately.
If you need to handle an error, then you can do this by using a try-catch block. Consider the following example:
int width; try { width = Integer.parseInt("twenty"); //error: not an integer value } catch (NumberFormatException error) { //handle the error, perhaps by using a default: width = 10; }
The above code attempts to convert the Java string
"twenty" into a number, which fails since the contents of the string doesn't contain digits describing an integer literal. The static method
parseInt then throws an exception of type
NumberFormatException which is caught by the try-catch block. In the case above, the programmer has decided to hard-code the value of
width to
10. In some cases, using a default value like this is satisfactory. In the case of PatternLife, providing a default value for
width is not ideal because the programmer cannot know the size of the world the user wishes to simulate—this is why the format string provides the information in the first place!
In cases where no default value is sensible, the only option is to
throw an exception, as opposed to a normal return value, back to the calling method in the hope that this method might know what to do to handle the error. Ultimately, the programmer might not know what to do at any point in the program, in which case all the programmer can do is display an error message to the user. You will explore how to throw exceptions between methods after the next exercise.
In more complex cases, you may need to handle multiple types of exception separately. You can attach multiple
catch blocks to a single
try block as shown in the following example:
try {
    //code which may generate multiple types of exception
} catch (TypeAException a) {
    //handle TypeAException here
} catch (TypeBException b) {
    //handle TypeBException here
}
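As a concrete sketch (the field layout is an assumption based on the format strings used earlier), multiple catch blocks can distinguish a field that is missing from one that is malformed:

```java
public class MultiCatchDemo {
    public static void main(String[] args) {
        String[] tokens = "Glider:Richard Guy:twenty".split(":");
        int width;
        try {
            width = Integer.parseInt(tokens[2]); // may throw either exception
        } catch (NumberFormatException e) {
            width = 10; // the field was present but was not a number
        } catch (ArrayIndexOutOfBoundsException e) {
            width = 10; // the field was missing entirely
        }
        System.out.println(width); // prints 10
    }
}
```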
The error handling you provided in the
Repeat class above works well for the small example at hand, but passing around strings containing messages for the user is cumbersome and messy. As you have already seen for
Integer.parseInt, Java provides a mechanism for passing exceptions (as opposed to return values) between methods. In Java terminology, we say that a method throws an exception. For example, the
Integer.parseInt method throws an exception of type
NumberFormatException.
To throw an exception you use the keyword
throw. If the exception is thrown inside the body of a try-catch block, execution passes to the first line of the
catch body which catches an exception of the appropriate type. If the call to
throw is not contained within the body of a try-catch block, then the exception is propagated back to the method which invoked the current method, and so on recursively, until an enclosing try-catch block is found. If no try-catch block exists, then the java runtime halts the program and prints a stack trace, just as we saw earlier. Here is an example:
package uk.ac.cam.your-crsid.tick4;

class ExceptionTest {

    public static void main(String[] args) {
        System.out.print("C");
        try {
            a();
        } catch (Exception e) {
            System.out.print(e.getMessage());
        }
        System.out.println("A");
    }

    public static void a() throws Exception {
        System.out.print("S");
        b();
        System.out.print("J");
    }

    public static void b() throws Exception {
        System.out.print("T");
        if (1+2+3==6) throw new Exception("1");
        System.out.print("V");
    }
}
In the above example you should have noticed that methods
a and
b have an extra phrase
throws Exception appended on the end of the method prototype. This phrase is required, and informs the programmer and the Java compiler that this method may throw an exception of type
Exception. If you forget to type
throws Exception, then you will get a compile error; you may like to temporarily delete the phrase from your copy of
ExceptionTest to see the compile error.
A new exception can be defined by creating a new class and declaring that it is of type
Exception. For example, the following code snippet creates a new exception called
PatternFormatException:
package uk.ac.cam.your-crsid.tick4;

public class PatternFormatException extends Exception {
}
This code should be placed in a file called
PatternFormatException.java inside a suitable directory structure to match the package declaration, just as you would do for any other class in Java. You can place methods and fields inside
PatternFormatException, just as you would in other Java classes. The syntax "
extends Exception" indicates that
PatternFormatException is of type
Exception. This is an example of inheritance in Java; you will learn more about inheritance in Workbook 5. In this workbook we will limit use of inheritance to the creation of new types of exception as shown above.
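For example, a hypothetical helper method (the name parseWidth is an invention for illustration, and the exception class is repeated locally so the sketch compiles on its own) might translate a NumberFormatException into the new exception type:

```java
class PatternFormatException extends Exception {
}

public class TranslateDemo {
    // Hypothetical helper: turns a low-level parsing failure
    // into our own checked exception
    static int parseWidth(String token) throws PatternFormatException {
        try {
            return Integer.parseInt(token);
        } catch (NumberFormatException e) {
            throw new PatternFormatException();
        }
    }

    public static void main(String[] args) {
        try {
            System.out.println(parseWidth("20"));     // prints 20
            System.out.println(parseWidth("twenty")); // throws
        } catch (PatternFormatException e) {
            System.out.println("Bad pattern field");  // prints Bad pattern field
        }
    }
}
```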
As you saw in the example above, if you throw a
PatternFormatException inside a method body and do not enclose the use of
throw inside a try-catch block, you should append "
throws PatternFormatException" on to the end of the method prototype. A method can throw more than one type of exception, in which case the method prototype should include a comma separated list of exceptions, such as "
throws PatternFormatException, NumberFormatException".
Java actually supports two types of exception: checked exceptions and unchecked exceptions, and some of the common exceptions in Java, such as
NumberFormatException, are unchecked exceptions. A piece of code which may potentially throw a checked exception must either catch it in a try-catch block or declare that the method body may throw the exception; an unchecked exception does not need to be caught or declared thrown. When defining your own exceptions it is generally good programming practice to use checked exceptions (by inheriting from
Exception as shown earlier), and you should do so in all cases in this course.
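The difference can be seen in a short sketch: the unchecked NumberFormatException needs no declaration, while a checked exception must be declared with a throws clause or caught:

```java
public class CheckedDemo {
    // Unchecked: Integer.parseInt may throw NumberFormatException,
    // yet no throws clause or try-catch block is required here.
    static int parse(String s) {
        return Integer.parseInt(s);
    }

    // Checked: a method that might throw Exception must say so,
    // otherwise the compiler rejects the program.
    static void risky() throws Exception {
        throw new Exception("declared and thrown");
    }

    public static void main(String[] args) {
        System.out.println(parse("42")); // prints 42
        try {
            risky();
        } catch (Exception e) {
            System.out.println(e.getMessage()); // prints declared and thrown
        }
    }
}
```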
In the rest of this Workbook you will improve the facilities used to load patterns in your implementation of Conway's Game of Life so that, by the end of this workbook, your program will be able to load patterns from files in the filesystem, or download them from websites. To do this we are going to investigate the Input-Output (IO) facilities available in the Java standard library. Handling input and output is a common source of errors in most programming languages because lots of things can go wrong: files might not exist, the contents of the file may be corrupt, or the network connection may disappear whilst data is being retrieved. Good IO programming requires careful checking of error conditions.
The Java IO standard library has two main methods of accessing data: Streams and Readers. Both of these mechanisms use exceptions to communicate erroneous states to the programmer using the library. A Stream is used for reading and writing sequences of binary data—examples might be images or Java class files. A Reader is used for reading and writing sequences of characters—such as text files, or in case the case of this workbook, strings which specify the state of the world in the Game of Life. In principle, sequences of characters can be read using a Stream, however character data can be saved in a variety of different formats which the programmer would then have to interpret and decode. In contrast, a Reader presents the same interface to character data regardless of the underlying format.
Start a web browser and take a look at Sun's documentation for the
Reader class, paying particular attention to the methods defined for reading characters. For example, the method prototype
int read(char[] cbuf) describes a method which reads data into a
char array and may throw an
IOException if an error occurs during the reading process; the return value indicates the number of characters read or -1 if no more data is available. You may have noticed that the
Reader class is an abstract class; the details of what an abstract class is and how to use it will be described in Workbook 5. This week it is sufficient to appreciate that an abstract class provides a specification which describes how a specific implementation of a "
Reader" must behave. For example,
FileReader provides a concrete implementation of
Reader, and is able to read data from files in the filesystem.
Now is an appropriate point to explore how
System.out.println works. The
System class is part of the package
java.lang and is therefore available by default. If you look for the class
System in Sun's documentation, you see that it has a public static field called
out of type
PrintStream.[1] If you view the documentation for
PrintStream you will see that the field
out supports a variety of method calls including the now familiar
println method. For completeness, the interested reader might like to explore what
System.err and
System.in do too.
Your final task this week is to write a new class called
PatternLoader, which is capable of loading patterns from the disk or downloading them from a website. Create a new class with the following contents, making sure you give the class the correct filename and you place it in an appropriate directory structure:
package uk.ac.cam.your-crsid.tick4; import java.io.Reader; import java.io.IOException; import java.util.List; public class PatternLoader { public static List<Pattern> load(Reader r) throws IOException { //TODO: Complete the implementation of this method. } }
This class introduces a number of new concepts which require further explanation. You should read the rest of this section of the workbook before completing your implementation of
PatternLoader.
In your implementation of
PatternLoader you will need to make use of some classes in the standard library such as
Reader which you looked up in the documentation earlier. To save you from typing
java.io.Reader at every point in the program when you want to refer to the
Reader class, the code above makes use of the
import statement. The statement "
import java.io.Reader;" tells the compiler that all occurrences of
Reader in the source file actually refer to
java.io.Reader. Using the
import statement will save you some typing, make your code more readable, and provide you with an explicit list of dependencies for the program at the top of the source file.
There is nothing special about classes defined in the standard library. For example, including
import uk.ac.cam.your-crsid.tick1.TestBit;
at the top of a Java source file would allow you to write
TestBit to refer to your implementation of
uk.ac.cam.your-crsid.tick1.TestBit you wrote for Tick 1.
You may recall from last week that a
static method is associated with a class rather than an instance of a class. Therefore you can make use of
PatternLoader just as you used
PackedLong in previous weeks—as a library of useful methods which you can call without first creating an instance of class
PatternLoader. For example, to call the
load method from another class, you simply write
PatternLoader.load followed by a reference to a
Reader object inside round brackets.
The
load method takes a single argument of type
Reader. When the load method is invoked, a specific kind of
Reader will be provided (for example, a
FileReader). By specifying the type of the argument to
load as
Reader the method is agnostic to the actual type of
Reader provided: the implementation of
load does not need to consider where the data is coming from—it can simply read characters using the support provided by the particular instance of
Reader provided by the calling method.
The return type of the
load method is
List<Pattern>. A
List is another class from the Java standard library. A
List records an ordered sequence of items and the main difference between a
List and a Java array is that a list can change its size dynamically: the programmer can add or delete items to it without stating how large it should be in advance. The phrase "
<Pattern>" is an example of something called Java generics, the details of which are beyond the scope of this course.
This year, all you need to know is how to use classes which use Java generics. As you've seen already, all you need to do is provide the class you want to use inside the angle brackets (
< and
>). For example,
List<Pattern> is a
List which stores elements of type
Pattern; you will learn more about Java generics next year.
The phrase "
throws IOException" states that the
load method may throw an exception of type
IOException. The
IOException class is defined as part of the Java standard library and is used to communicate that something unexpected happened whilst data was read or written. For example, if the network connection to the computer breaks whilst a Java program is downloading content from a website, then the
Reader object may throw an
IOException.
To complete
PatternLoader you will need to implement the method
load, which should read all the data available from the
Reader object reference
r, and create a
List<Pattern> object. The type of the return value provides a strong hint that your implementation of the
load method may well find several pattern strings available in the input. Therefore some method of separating patterns in the input stream is required.
A common technique for separating text data in Unix-like systems such as Linux is to look for "new line" characters, which in Java are written using the character literal
'\n' and appear as new lines when printed. In contrast, Windows usually uses separate characters for "new line" (
'\n') and "carriage return" (
'\r') and therefore it's also common to see the two character string
"\r\n" as a line separator. You might like to try writing a simple test program which executes:
System.out.println("A sentence on one line.\nThis is on a second line.");
and examine the output. This course will use a Unix-style line separator to place multiple patterns into a single file.
The methods provided by
Reader do not provide a mechanism for dividing the input based on the presence of new line characters. This is because the
Reader class provides low-level access to character data. The functionality to split on new lines is provided by
BufferedReader; this functionality is possible with
BufferedReader because the class caches data read internally, allowing the class to search for new line characters in its cache. If you check the documentation for
BufferedReader you will see it provides a
readLine method which will read a line from the underlying reader and return a reference to a
String object containing the data, or alternatively return
null if there are no more lines to be read. The method
readLine will function correctly regardless of whether Unix- or Windows-style line separators are used. You can create a reference to a
BufferedReader object by passing an instance of the
Reader object in as an argument to the constructor:
BufferedReader buff = new BufferedReader(r);
To complete your implementation of
load you will also need to create an instance of
List to save Patterns as you load them:
List<Pattern> resultList = new LinkedList<Pattern>();
Just as we saw earlier with the
Reader class, the
List class may have multiple implementations; in the case above, we use the
LinkedList implementation. Given an instance of type
List you can then add objects of the correct type as follows:
Pattern p = .... resultList.add(p);
You can determine the current number of elements stored in a
List object by using the
size method, and retrieve elements using the
get method; Sun's documentation contains further detail which you will need to review. There is also a special for-loop syntax for Java Collection objects such as
List which allows you to iterate though all the elements in the list:
for(Pattern p: resultList) { //p references each element of "resultList" in order so that first time //round the loop, p references the first element, second time round the //second element, and so on. The loop terminates when "resultList" has //no more elements. }
Now add the following two methods to your implementation of
PatternLoader:
public static List<Pattern> loadFromURL(String url) throws IOException { URL destination = new URL(url); URLConnection conn = destination.openConnection(); return load(new InputStreamReader(conn.getInputStream())); } public static List<Pattern> loadFromDisk(String filename) throws IOException { return load(new FileReader(filename)); }
These two methods use your
load method to load patterns from either a file on disk or a website. They do this by constructing a suitable
Reader object and passing a reference to it to your method. Since your method is agnostic to the type of
Reader provided, your implementation of
load will function with data from either disk or from the web. You will need to add
import statements to describe the location of the extra classes used inside the method bodies of
loadFromURL and
loadFromDisk; you can find the full names for the classes by looking them up in the Java documentation.
Copy your implementation of
PatternLife which you wrote earlier in this workbook, rename it
LoaderLife and put it inside the package
uk.ac.cam.your-crsid.tick4. You should modify
LoaderLife so that an invocation of
LoaderLife with a single argument will print out the details of all the valid patterns found in a file or on a website. Each valid pattern should be prefixed by the index number of the pattern in the source file or webpage, starting at zero. For example, if the filename
MyPatterns.txt is provided as the single argument, and the file
MyPatterns.txt contains a single pattern describing a Glider, then your program should output:
crsid@machine:~> java uk.ac.cam.your-crsid.tick4.LoaderLife MyPatterns.txt 0) Glider:Richard Guy (1970):20:20:1:1:010 001 111 crsid@machine:~>
Similarly, if your program receives a valid URL as a single argument on the command line then your program should load data from the URL and display any valid patterns found. For example,
crsid@machine:~> java uk.ac.cam.your-crsid.tick4.LoaderLife \ 0) Glider:Richard Guy (1970):8:8:1:1:010 001 111 1) [additional patterns should be listed here] 2) ... crsid@machine:~>
The URL used in the example above contains many interesting worlds which you might like to view. You may also like to load entries provided by students who have completed Tick 3*, which are available from:
To complete this part of the Tick successfully, you will need some method of determining whether the string provided on the command line is a filename or a URL. You might like to use the
startsWith method of the
String class to determine whether the string starts with "
http://" or not.
If your implementation of
LoaderLife is invoked with two arguments on the command line, your program should should treat the first argument as a pattern source as above, and the second as the pattern index to initialize a world with, and display successive generations of the world to the user as you have done in previous weeks. For example, the following invocation of
LoaderLife
crsid@machine:~> java uk.ac.cam.your-crsid.tick4.LoaderLife \ 0 - ________ ___#____ _#_#____ __##____ ________ ________ ________ ________
displays successive generations of a Glider just as
PatternLife did last week.
Important: your program should handle all exceptions gracefully by printing an error message to the user describing what has gone wrong and exiting cleanly. You will find it useful to pipe (
|) the output of your Java program into the command line program
less to view long lists of patterns such as those downloadable from the course website; if you do so q can be used to quit the program
less once you have located the index of a pattern you would like to view. In other words, you can type:
crsid@machine:~> java uk.ac.cam.your-crsid.tick4.LoaderLife \ | less
Once you believe you have completed all the exercises in this workbook successfully, you should produce a jar file called
crsid-tick4.jar with the following contents:
META-INF META-INF/MANIFEST.MF uk/ac/cam/your-crsid/tick4/Repeat.java uk/ac/cam/your-crsid/tick4/Repeat.class uk/ac/cam/your-crsid/tick4/ExceptionTest.java uk/ac/cam/your-crsid/tick4/ExceptionTest.class uk/ac/cam/your-crsid/tick4/Pattern.java uk/ac/cam/your-crsid/tick4/Pattern.class uk/ac/cam/your-crsid/tick4/PatternLife.java uk/ac/cam/your-crsid/tick4/PatternLife.class uk/ac/cam/your-crsid/tick4/PatternLoader.java uk/ac/cam/your-crsid/tick4/PatternLoader.class uk/ac/cam/your-crsid/tick4/PatternFormatException.java uk/ac/cam/your-crsid/tick4/PatternFormatException.class uk/ac/cam/your-crsid/tick4/LoaderLife.java uk/ac/cam/your-crsid/tick4/LoaderLife.class
You should set the entry point of the jar file to
uk.ac.cam.your-crsid.tick4.LoaderLife so you can execute your implementation of
LoaderLife without explicitly specifying a class to execute. To submit your work, email your jar file to
ticks1a-java@cl.cam.ac.uk. | http://www.cl.cam.ac.uk/teaching/1011/ProgJava/workbook4.html | CC-MAIN-2016-30 | refinedweb | 4,034 | 50.87 |
Want to see the full-length video right now for free?Sign In with GitHub for Free Access.
Here we'll be looking at Django
View classes, which are similar to Rails
controller actions.
from django.views import generic from .models import Post class PostListView(generic.ListView): model = Post class PostDetailView(generic.DetailView): model = Post class PostCreateView(generic.CreateView): model = Post fields = ('title', 'body', 'published', 'date')
In the case of these classes, it may look like very little is going on, but in
fact we are opining into implicit behavior implemented on the parent classes
(
generic.ListView,
generic.DetailView, etc) which are combined via the
Template Method Pattern.
At a high level, the Template Method Pattern makes use of an outline or skeleton of method calls, where we can choose to override the behavior of any of these child method calls that makes up the sequence by providing a concrete implementation. By using the template method pattern, we're able to strike a nice balance between avoiding boilerplate, while still providing explicit points to configure and override the behavior of our system.
In our example, we began with the default behavior for the
PostListView
which will display all
Posts. If instead we wanted to hide draft
Posts, we
can simple implement the
get_queryset method to return only the desired
def get_queryset(self): return Post.objects.filter(published=True)
What's nice here is that we were able to change just that one small piece,
specifically which
Posts to query, without needing to alter anything about
how things are sorted or rendered or otherwise displayed.
For each generic view there is a specific set of methods that will be called, giving well-defined points to override the behavior. This, for instance, is the Method Flowchart for the generic.ListView:
dispatch()
get_template_names()
get_queryset()
get_context_object_name()
get_context_data()
get()
render_to_response()
One of the key features of the Template Method Pattern is that it causes you to write code that highlights the things that are differed. The common pieces are hidden away and what's left are the bits that make your code unique.
One potential downfall is that using the Template Method Pattern increases the amount a developer needs to know and be comfortable remembering. Thankfully, the Django generic views are nicely focused classes with largely a single responsibility. In addition, the Python and Django communities do a great job with documentation which goes a long way with a pattern like this. That said, it's always good to be aware of the trade offs, and with the template method pattern's simplicity and ease of extension does come a potentially higher learning curve to fully master it.
On the topic of documentation, one particularly good spot to visit is the Classy Class-Based Views page.
Another area where Django shines is in the Single Responsibility
Principle. To dive into this, we'll take a look at the
PostCreateView
class. In our default implementation, the form being rendered on the page was
created from a form class dynamically generated by Django for us. That said,
we can easily take this over by implementing our own.
from .forms import PostForm # ... others omitted class PostCreateView(generic.CreateView): model = Post def get_form_class(self): return PostForm
Here rather than using the generated form class, we're importing and using our own. That form class is implemented as:
from django import forms from .models import Post class PostForm(forms.ModelForm): class Meta: model = Post fields = ('title', 'body', 'published', 'date')
The above is now an explicit version of the exact form Django had previously been generating for us. Now we can go ahead and override some of the default behavior, specifically changing out the way the date fields are rendered:
class PostForm(forms.ModelForm): class Meta: model = Post fields = ('title', 'body', 'published', 'date') + widgets = { + 'date': foms.SelectDateWidget, + }
With the above as context, we can take a look into how Django provides us with really nice dividing lines of responsibility. Thus far we've looked at:
Modeland save it.
These objects all know how to work with each other to build a full page, but individually they are very focused simple objects.
While we might not be writing Python or Django code in the near future, we can often benefit from taking a look at how others are approaching the same sorts of problems we solve each day and perhaps borrow a few things. In this case, Django's use of the Template Method Pattern, as well as its excellent use of Single Responsibility and a clear execution model are an excellent model to allow for code that hit's the sweet spot between concise and focused, while remaining flexible and easy to change. | https://thoughtbot.com/upcase/videos/design-patterns-in-django-and-python | CC-MAIN-2022-21 | refinedweb | 782 | 52.6 |
Creating the Featured Products component
To display our featured products block we can create a custom component called HomePageFeatured that just loops over the featured products. But how to we fill the component with data? We will create a new Higher Order Component that will drop the featured products data into our component. Let's start that now.
First let's create a
containers directory in
custom and in that directory let's create a
homepage directory. In that directory let's create a
featured.gql file. This is where our GraphQL query will placed.
To that file let's add this code:
query featuredQuery($shopId: ID!) { featuredProductsByShop(shopId: $shopId) { nodes { _id ...on CatalogItemProduct { product { title, description, slug media { URLs { thumbnail small } } } } } } }
So let's break this down a little bit. First we declared that we are creating a query and we gave it a name
featuredQuery, and said that we take a shopId of type
ID as our type. We will then call the server query called
featuredProductsByShop with the
shopId we were passed in.
Next we are declaring the 'shape' of our return type. Since we are getting a cursor we says we want nodes, we want the Id of the node and we want those nodes to be of type
CatalogItemProduct and then we declare which fields of the
CatalogItemProduct type we want. And that's it. You can see that query looks just like our test code above with just an additional wrapper around.
Now let's create our actual HOC which we will call
withFeatured.js. That file will look something like this:
import React from "react"; import PropTypes from "prop-types"; import { Query } from "react-apollo"; import featuredQuery from "./featured.gql"; export default (Component) => ( class WithFeatured extends React.Component { static propTypes = { shopId: PropTypes.string } render() { const { shop: { _id: shopId } } = this.props; // Or primaryShopId if (!shopId) { return <Component {...this.props} shouldSkipGraphQL /> } return ( <Query query={featuredQuery} variables={{ shopId }}> {({ error, data, loading }) => { if (error) { console.log(error) } if (!loading && data) { const { featuredProductsByShop } = data; return ( <Component {...this.props} featuredProducts={featuredProductsByShop.nodes} /> ); } return <Component {...this.props} isLoadingFeatured/> }} </Query> ); } } );
This is a Component that will wrap our display components which is why we take in the argument
Component. We are also going to wrap our component in an HOC called
withShop so we will be getting the
shopID of the current shop. And then we use the
Query component from the
react-apollo library to wrap our passed-in component and then we can pass in
featuredProducts. So the snipped where we do the actual wrapping looks something like this
export default compose( withShop, withFeatured, withStyles(styles) )(HomePageFeatured);
We have
HomePageFeatured which is our presentation component wrapped by
withStyles then wrapped by
withFeatured and then wrapped by
withShop. We can then write our presentation component accepting an array of product objects passed in as the
featuredProducts prop.
So a simple version of a featured product component might look something like this:
@withStyles(styles, { name: "HomePageFeatured" }) class HomePageFeatured extends Component { render() { const { classes, featuredProducts } = this.props; return ( <div className={classes.root}> <div>HomePage Featured</div> {!!featuredProducts && featuredProducts.map((product) => { return ( <div className="featured-product-box" key={product.product.slug} style={{ border: "1px solid black", width: "350px" }}> <div className="featured-product-title"><a href={`/product/${product.product.slug}`}>{product.product.title}</a></div> </div> </div> )} )} </div> ); } }
While this is a deliberately simple example, you can see how you can easily take data from any source on the server side and easily pass it down to the client side without having to have any understanding of where it came from or its original data structure.
With that you should have a working
Featured Products section of the home page that you can edit by just editing the tags on individual products.
Congratulations. You can completed the Swag Shop Tutorial 🎉 | https://docs.reactioncommerce.com/docs/swag-shop-8 | CC-MAIN-2019-43 | refinedweb | 636 | 56.35 |
Load environment variables at runtime from a
.env file..
See documentation and examples.
Get the latest:
$ pub global activate dotenv
Run:
$ pub global run dotenv:new # create a .env file and add it to .gitignore $ pub global run dotenv # load the file and print the environment to stdout
Use the issue tracker for bug reports and feature requests.
Pull requests gleefully considered.
This project follows pub-flavored semantic versioning. (more)
Release notes are available on github.
${var}substitution #10
Platform.environment#6
#inside quotes #5
Parserinternals will become private. #3
#unquote,
#strip,
#swallow,
#parseOne,
#surroundingQuote,
#interpolate
Initial release.
example/example.dart
import 'dart:io'; import 'package:dotenv/dotenv.dart' show load, clean, isEveryDefined, env; void main() { load(); p('read all vars? ${isEveryDefined(['foo', 'baz'])}'); p('value of foo is ${env['foo']}'); p('value of baz is ${env['baz']}'); p('your home directory is: ${env['HOME']}'); clean(); p('cleaned!'); p('env has key foo? ${env.containsKey('foo')}'); p('env has key baz? ${env.containsKey('baz')}'); p('your home directory is still: ${env['HOME']}'); } p(String msg) => stdout.writeln(msg);
Add this to your package's pubspec.yaml file:
dependencies: dotenv: ":dotenv/dotenv.dart';
We analyzed this package on Jul 13, 2018, and provided a score, details, and suggestions below. Analysis was completed with status completed using:
Detected platforms: Flutter, other
Primary library:
package:dotenv/dotenv.dartwith components:
io.
The description is too short.
Add more detail about the package, what it does and what is its target use case. Try to write at least 60 characters. | https://pub.dartlang.org/packages/dotenv | CC-MAIN-2018-30 | refinedweb | 255 | 54.18 |
Important: Please read the Qt Code of Conduct -
Using requests with Qt presents problems
i have this function
def ping(): for key, value in self.board.items(): word = "hello world!" try: response = requests.get("http://" + value["ip"], timeout=1) except requests.exceptions.Timeout as ex: self.board[value["Name"]]['Status'] = "Offline" except requests.exceptions.ConnectionError as ex: self.board[value["Name"]]['Status'] = "Offline" if response.status_code == HTTPStatus.OK: if word in response.text: self.board[value["Name"]]['Status'] = "Online" else: print("got {} instead of {}".format(response.text, word)) else: print("got status {} {}".format(response.status_code, response.reason)) return False
What's going on here is that i ping 4 Arduino boards and check if the word "Hello World!" is present in the webpage (im gonna change that later btw) so that i know that it is "online".
When i run my python script pyqt5 waits for this to finish (i counted 4 seconds befor the Gui came up) i've gotten 2 options from people on discord saying
either
- Qthread
- use QNetwork / QNetworkRequest / QNetworkAccessManager
im fairly new to PyQt5 and you could also say Python in general! i've been reading on Qthread AND on QNetworking and well... my brain just shuts down and gives me the "finger" -.-
This is exactly what you need :)
You just need to translate it into Python code.
To ping all of your devices, you could add their addesses to an array or
QStringListand check all IPs
if(answer.contains("Hello World")) { qDebug() << "Yeay"; } else { qDebug() << "Nay"; }
keep in mind im new t PyQt5 and compare to "requests" QNetwork and the rest are way more complex :/
The solution from SO works. You just need to add your part, where you check whether the request reply contains your keyword or not. This is why I posted the code sample.
It's not more complex than the solution, provided on SO. | https://forum.qt.io/topic/112809/using-requests-with-qt-presents-problems | CC-MAIN-2020-40 | refinedweb | 314 | 58.79 |
setup.
We're starting an open source project on behalf of this website! In this series of video's well set up the project locally, turn our python code into a python project and also make the first version of it pip-installable.
The goal of this project (named
Clumper) will be to offer tools that make it easier to
deal with long lists of dictionaries/json-like objects. The API will
look something like this;
from clumper import Clumper list_of_dicts = [ {'date': '2014-01-01', 'values': [1, 2, 3]}, {'date': '2014-01-02', 'values': [1, 2, 3, 4]}, ... ] (Clumper(list_of_dicts) .keep(lambda d: d['date'] == '2014-01-01') .mutate(sum_values=lambda d: sum(d['values'])))
This code will take our list of dictionaries, grab all the instances
where the date is equal to
2014-01-01 and for those instances add a
key called
sum_values to is equal to the sum of the
values list in
each of those dictionaries. The code that might do this looks like))
Note that this is the code that we start with and it is the exact same code as what we ended with in our videos on method chains. We'll add more and more features to this class but in this series of videos we'll focus on getting a project set up where we might work in.
Feedback? See an issue? Something unclear? Feel free to mention it here.
If you want to be kept up to date, consider getting the newsletter. | https://calmcode.io/setup/introduction.html | CC-MAIN-2020-34 | refinedweb | 251 | 75.74 |
Type: Posts; User: ProElite
i have solved the problem.
anyways thanks.
ProElite
Hi All,
I have a small problem..
i have a control class which has to be mapped to a Custom control in DoDataExchange() using DDX_Control call..i have manualy added this statement..
the control...
Hi,
Try to search for topics on codeguru for parallel port programming.
search for a NTPort Library on the net..its free library which u can use to talk to parallel port..
ProElite
hi,
there is no customize button..instead its a drop down item available in the same combo box..
see the pic attached here with
ProElite
otherwise what u can do is..
when u detect the enter key press append "\r\n" to the text available in the text box..
and set the neew text using SetWindowText function
why do u want to do this?
r u doing some NASA's Mission CRITICAL programming :) :)
doing it in that manner will make ur program run even slower.
All the best :)
ProElite
set ur properties for RichEditContol as shown in the fig
ProElite
Hi Sunny whats the big deal in it..?
u can use OnTimer and Set Timer and two variables..thats it..
u will set ur SetTimer say for 2 secs when first time the button is cliked. at this time u...
hi Meka..:)
which class are u studying?? :)
my dear programmer friends..
:)
many good applications use Windows registry for storing positions of their toolbars and dialog boxes.
if u want to use registry use AfxGetApp->WriteProfileInt()...
Hi
Can u explain it in a little better way..
i didnt understand the problem
what i want to ask is..
1. r u having problem using visual studio
or 2. Ur application is not working properly
...
why are u doing it this way..??
u should create ur document template in CWinApp derived class..
then u should store CDocTemplate pointers in CWinApp derived class..
initialise them in...
hi..
what do u mean by folder name of some folders..
u can check whether the current oobject is directory or not using this function IsDirectory()
and to check whether its a dot ot not using...
OOPs !!!
:)
ProElite
hi Naumaan,
Can u explaing ur problem properly..
i didnt understadn it
i didnt understand the dll part of it
do u want to see the settings of printer also??
i mean page size and default printer attached and its port too??
ProElite
is it OK now??
ProElite
i do it this way..
use LPBROWSEINFO structure to pass the info to BroseForFolder
LPBROWSEINFO lpbi = new BROWSEINFO;
lpbi->lpszTitle = "Select Database "; //ur string to change the title...
Hi..
i was looking at ur code..
where is this variable 'i' decalred..and whats its value when calling SetItemText??
ProElite
use LPITEMIDLIST SHBrowseForFolder(
LPBROWSEINFO lpbi
);
you are welcome
ProElite
hi..
do one thing..
add this defination to ur stdafx.h file
#include <conio.h>
and then give a call to getch() before calling return 0 in ur main() function;
ProElite
it happense withall the applications present in the task manager window..
try minimizing any application running..
u will see a memory drop..
i dont think there is any problem..
ProElite
can u explain a little more on minimize and restore problem?
is it a feature provided by ur application :))
ProElite
if u have to change the color of some part of ur text then better use CRichEditCtrl class to derieve ur control from..
for setting focus to view window..u will have to capture use pressed byuttons... | http://forums.codeguru.com/search.php?s=c7ef019449c9b8c793b375f9746807b1&searchid=7940933 | CC-MAIN-2015-40 | refinedweb | 583 | 77.23 |
C library function - fclose()
Advertisements
Description
The C library function int fclose(FILE *stream) closes the stream. All buffers are flushed.
Declaration
Following is the declaration for fclose() function.
int fclose(FILE *stream)
Parameters
stream -- This is the pointer to a FILE object that specifies the stream to be closed.
Return Value
This method returns zero if the stream is successfully closed.On failure, EOF is returned.
Example
The following example shows the usage of fclose() function.
#include <stdio.h> int main() { FILE *fp; fp = fopen("file.txt", "w"); fprintf(fp, "%s", "This is tutorialspoint.com"); fclose(fp); return(0); }
Let us compile and run the above program, this will create a file file.txt, second it will write following text line and finally it will close the file using fclose() function.
This is tutorialspoint.com | http://www.tutorialspoint.com/c_standard_library/c_function_fclose.htm | CC-MAIN-2014-10 | refinedweb | 136 | 61.02 |
import "cloud.google.com/go/compute/metadata"
Package metadata provides access to Google Compute Engine (GCE) metadata and API service accounts.
This package is a wrapper around the GCE metadata service, as documented at.
ExternalIP returns the instance's primary external (public) IP address.
Get returns a value from the metadata service. The suffix is appended to "{GCE_METADATA_HOST}/computeMetadata/v1/".
If the GCE_METADATA_HOST environment variable is not defined, a default of 169.254.169.254 will be used instead.
If the requested metadata is not defined, the returned error will be of type NotDefinedError.
Hostname returns the instance's hostname. This will be of the form "<instanceID>.c.<projID>.internal".
InstanceAttributeValue returns the value of the provided VM instance attribute.
If the requested attribute is not defined, the returned error will be of type NotDefinedError.
InstanceAttributeValue may return ("", nil) if the attribute was defined to be the empty string.
InstanceAttributes returns the list of user-defined attributes, assigned when initially creating a GCE VM instance. The value of an attribute can be obtained with InstanceAttributeValue.
InstanceID returns the current VM's numeric instance ID.
InstanceName returns the current VM's instance ID string.
InstanceTags returns the list of user-defined instance tags, assigned when initially creating a GCE instance.
InternalIP returns the instance's primary internal IP address.
NumericProjectID returns the current instance's numeric project ID.
OnGCE reports whether this process is running on Google Compute Engine.
ProjectAttributeValue returns the value of the provided project attribute.
If the requested attribute is not defined, the returned error will be of type NotDefinedError.
ProjectAttributeValue may return ("", nil) if the attribute was defined to be the empty string.
ProjectAttributes returns the list of user-defined attributes applying to the project as a whole, not just this VM. The value of an attribute can be obtained with ProjectAttributeValue.
ProjectID returns the current instance's project ID string.
Scopes returns the service account scopes for the given account. The account may be empty or the string "default" to use the instance's main account.
Subscribe subscribes to a value from the metadata service. The suffix is appended to "{GCE_METADATA_HOST}/computeMetadata/v1/". The suffix may contain query parameters.
Subscribe calls fn with the latest metadata value indicated by the provided suffix. If the metadata value is deleted, fn is called with the empty string and ok false. Subscribe blocks until fn returns a non-nil error or the value is deleted. Subscribe returns the error value returned from the last call to fn, which may be nil when ok == false.
Zone returns the current VM's zone, such as "us-central1-b".
NotDefinedError is returned when requested metadata is not defined.
The underlying string is the suffix after "/computeMetadata/v1/".
This error is not returned if the value is defined to be the empty string.
func (suffix NotDefinedError) Error() string
Package metadata imports 13 packages (graph) and is imported by 81 packages. Updated 2017-05-20. Refresh now. Tools for package owners. | https://godoc.org/cloud.google.com/go/compute/metadata | CC-MAIN-2017-22 | refinedweb | 497 | 51.85 |
As when we write the application in Python kivy, to write all the things on the same code make a mess in the code and it is hard to understand that by someone another. Also writing a large code makes hard to maintain the construction of the widget tree and explicit the declaration of bindings.
The KV language, allows us to create own widget tree in a declarative way and to bind the widget properties to each other or to callbacks in a natural way.
👉🏽 Kivy Tutorial – Learn Kivy with Examples.
How to load kv file:
There are 2-ways to load the
.kv file into code or Application
- By name convention method-
While writing code we will make the App class. For this method, the name of the file and the app class is same and save the kv file with
appclassname.kv.
Kivy looks for a Kv file with the same name as your App class in lowercase, minus “App” if it ends with ‘App’ e.g:
classnameApp ---> classname.kv
If this file defines a Root Widget it will be attached to the App’s root attribute and used as the base of the application widget tree.
The sample code on how to use .kv file in kivy is given below:chevron_rightfilter_none
.kv file code save with the same name as the app class –chevron_rightfilter_none
Output:
- Builder method-
For this method to use you first have to import Builder by writing
from kivy.lang import builder
Now by the builder you can directly load the whole file as a string or as a file. By doing this for loading .kv file as a file:
Builder.load_file('.kv/file/path')
or, for loading, kv file as a string:
Builder.load_string(kv_string)chevron_rightfilter_none
Output:
Attention geek! Strengthen your foundations with the Python Programming Foundation Course and learn the basics.
To begin with, your interview preparations Enhance your Data Structures concepts with the Python DS Course. | https://www.geeksforgeeks.org/python-kivy-kv-file/ | CC-MAIN-2021-10 | refinedweb | 325 | 81.02 |
A PDF version of this migration guide can be downloaded
here.
This guide is designed to support you in your Bing Maps migration from version 7 to version 8. This resource provides detailed comparisons between the JavaScript API of these two versions of Bing Maps as well as comparative code samples, migration suggestions
and best practices for migrating your code to the newest version of Bing Maps. As you read this document you should gain an understanding of the benefits of Bing Maps v8 and how to leverage it in your existing mapping applications.
The version 7 of Bing Maps has been around for over 6 years. Version 8 of Bing Maps was just recently released and offers numerous advantages over v7, many of which are highlighted in section 2.0 What’s New in Bing Maps V8 of this document. The Bing Maps
v8 web control is the recommended migration path from v7. Version 8 is over 80% backwards compatible with the v7 control which should help minimize the effort involved in migrating apps.
The Bing Maps V8 control contains several new innovative features and functionalities and there is a well laid out roadmap for bring new features to V8 on a regular basis.
Faster and more fluid map control
When it comes to performance, the v8 is miles ahead of v7. Version 8 is capable of rendering much more data in less time than version 7. When dealing with small data sets on a standard browser this might not be as noticeable, but if you need to display a
large data set or are using a mobile browser this performance increase makes a larger impact.
Increased culture support
Version 7 supported 22 different culture codes (languages). Version 8 supports significantly more culture codes and uses the Bing Maps REST services to perform geocode and route requests which has support for 117 languages.
Many new features
Over the years, Bing Maps customers and developers have requested a number of new features and functionalities. Many of these are now available in v8; some of the most notable are covered in the What's New in Bing Maps V8 section of this guide.
To assist you with planning, we have compiled this list of high‐level steps to use as a baseline plan to move your codebase and development practices to version 8 equivalents. While your ultimate plan will depend on your specific situation, the following
steps outline suggested components of any effort:
Review existing application and identify where Bing Maps v7 code is being used.
As an option, before touching any code test your existing app against version 8 as it is by using the steps outlined in Appendix A: Fiddler Redirect test
Identify which features of Bing Maps v7 are being used and review the migration information in this document.
Migrate code to version v8 of Bing Maps and update script reference to point to the new v8 map control URL.
Test your migrated application.
Deploy your application to your production environment.
Here is a list of useful technical resources for the Bing Maps v8 web control.
Bing Maps v8 Interactive SDK
Bing Maps v8 documentation
Bing Maps MSDN documentation (All Bing Maps APIs)
Bing Maps MSDN Forums
Bing Maps Dev Center
Bing Maps REST Services (MSDN)
Bing Spatial Data Services (MSDN)
Bing Maps Terms of Use
The Bing Maps v8 control reduces development time by requiring less code to implement more features into your app. It also brings a significant performance improvement by using the HTML5 canvas, which provides the ability to render vector data over 10 times faster than previous versions of Bing Maps. It also supports rendering thousands more shapes, allowing users to view more data and gain deeper insights into their data. In addition, a number of exciting new features such as Autosuggest, Streetside imagery, and many business intelligence tools have been added.
Over 5 years of customer and developer feedback was used to understand the types of apps developers were creating with Bing Maps. From this knowledge, several improvements were made in the V8 SDK which better align with these types of apps.
One of the first noticeable changes in the Bing Maps V8 API is the map script URL used to load the code needed for Bing Maps.
If you wanted to use the V7 map control on a secure HTTPS site not only did you have to use an HTTPS domain for the URL, but you also had to add a query URL parameter.
This URL has been a source of confusion for developers new to Bing Maps for the following reasons:
The name Virtual Earth is the legacy name that was used for the Bing Maps platform before it was rebranded as Bing Maps in 2009. Developers who are new to Bing Maps are often unaware of the name change.
The dev subdomain was meant to indicate that this URL was for the Bing Maps developer API. However, to some this was interpreted as a development environment, which led to developers asking where the production environment is. Bing Maps lets developers build their apps against the production service using a basic key during their development cycle.
Having to add the extra query parameter when using HTTPS has been a common step many developers missed.
With the release of Bing Maps V8, this was the perfect time to update the map script URL to address these issues.
If you want to use HTTPS, simply add an "s" to "http".
Alternatively, if your app is hosted on a server, you can drop the http:/https: prefix from the URL and the browser will automatically retrieve the correct version of the map control.
With V8 you can asynchronously load the map control by making use of the "async defer" keywords in the script tag and by adding a callback function to the map script URL. This provides a slight performance improvement when the page loads. Here is an example of how to do this:
<script type='text/javascript' src='https://www.bing.com/api/maps/mapcontrol?callback=GetMap' async defer></script>

<script type='text/javascript'>
function GetMap() {
    //Load the map.
}
</script>
To point your application to a specific branch simply add “&branch=[branch_version]” to the map script URL. If a branch is not specified, the release branch is automatically loaded. For example, the following URL can be used to load the experimental branch.
'https://www.bing.com/api/maps/mapcontrol?branch=experimental'
Tip: Before releasing your application into production, ensure that the branch isn’t set to experimental and test against the branch you plan to use.
Localization of maps is the process of rendering maps in a specific language other than the default. Both v7 and v8 support localization. Version 7 required that you pass in a culture code to set the language of the map; however, version 8 automatically tries to detect the appropriate culture code for a user based on their browser settings and location. This saves a lot of work in trying to manage the culture of the map yourself. However, like v7, it is possible to specify a culture code, which will override the default behavior. This is useful when testing, or if you only want the map to support a single language. If you do wish to limit the map to a single language, the following is an example of how this was done in v7 and how it can be done in v8.
Before: v7
To get a localized map using v7, add an mkt parameter to the API script reference.
<script type="text/javascript" src="https://ecn.dev.virtualearth.net/mapcontrol/mapcontrol.ashx?v=7.0&mkt=[culture_code]"></script>
Here is an example of Paris in v7 with the culture set to “fr-FR”.
After: v8
The Bing Maps v8 web control no longer requires the use of a mkt parameter to localize the map. If one is not provided the map will automatically attempt to set the language of the map based on the user’s browser settings and/or location.
The Bing Maps V8 map control provides a number of improvements over V7. The following is a comparison of key features between Bing Maps V7 and V8.
Of the key features listed between V7 and V8, only one feature was deprecated, Venue Maps. See the 3.1 Deprecated Modules section of this document for more details.
Autosuggest [docs,
samples]
Provides suggestions as you type a location in a search box.
Drawing tools [docs,
samples]
Want to draw a pushpin, polygon or some other shape on the map? The drawing tools module lets you do this easily. This is can be used in many different types of scenarios such as providing the user the ability to draw a search area, or providing tools for
managing sales territories.
Clustering [docs,
samples]
Visualize large sets of pushpins, by having overlapping pushpins group together as clusters and break apart as you zoom in automatically.
GeoJSON Support [docs,
samples]
GeoJSON is one of the most common file format used for sharing and storing spatial data. With Bing Maps v8 you can easily import and export GeoJSON data.
Heat Maps [docs,
samples]
Visualize the density of data points as a heatmap.
Point Compression [docs,
samples]
Some of the Bing Maps services compress arrays of locations using a compression algorithm. These tools make it easy to encode and decode this data when using these services. Additionally, this algorithm can also be used with your own data if sending it between a client app and a server.
Spatial Data Services module [docs,
samples]
The Bing Spatial Data Services provides two key features; the ability to upload, host and expose location data as a spatial REST service, and the ability to retrieve administrative boundaries data such as zip codes, cities, states and more from Bing Maps.
The Bing Maps v8 SDK exposes a set of useful tools for accessing this data and integrating it with the map seamlessly.
Spatial Math module [docs,
samples]
When analyzing business data on a map it is often useful to be able to perform a spatial formula or two. One of the most common being the ability to calculate the distance (as the crow flies) between two locations.
Streetside imagery [samples]
Explore 360-degrees of street level imagery.
Test Data Generator [docs,
samples]
When developing your app, you may find that you need some data to test parts of your application. Version 8 includes a test data generator that can create random locations, pushpin, polylines, polygons and colors.
Vector Map Labels
The map labels in the Bing Maps v8 SDK are separate from the base map and sit above the data on the map. This helps ensure that the labels are clearly visible no matter what data is added to the map. When pushpins overlap labels, the labels can detect this and move out of the way. If it is a road label, it will move along the road. If it is a city name, it may move up a bit. If there are a large number of pushpins in an area, the label may be hidden entirely.
Well Known Text support [docs,
samples]
This is a standard way of representing spatial objects as a string and is supported by all OGC systems and databases. Easily import and export spatial data between a spatial database and Bing Maps.
A list of features comparing Bing Maps V8 with other controls and services in the Bing Maps platform can be found
here.
The core functionality of Bing Maps V8 has been designed to be partially backwards compatible so as to minimize the effort required to migrate from Bing Maps V7. The following outlines the breaking changes between the APIs.
Important Note: There may be some features that were in V7 which are not yet in V8. If a feature is missing in V8 and does not appear in the lists, it is highly likely that the feature in question is planned but has been a lower priority and thus not yet added. Check the experimental branch of V8 to see if the feature may be in testing. If it is not there, check the forums or contact the support team if you would like a status update on a feature.
The following Bing Maps V7 SDK modules and namespaces have been deprecated in the Bing Maps V8 SDK.
The following Bing Maps V7 classes have been deprecated from the Microsoft.Maps namespace in V8.
BusinessDetails
BusinessDisambiguationSuggestion
DirectionsStep
DirectionsStepEventArgs
DirectionsStepRenderEventArgs
DirectionsStepWarning
Disambiguation
DisambiguationRenderEventArgs
LocationDisambiguationSuggestion
Maneuver
Position
PositionCircleOptions
PositionError
PositionOptions
ResetDirectionsOptions
Route
RouteLeg
RoutePath
RouteSelectorEventArgs
RouteSelectorRenderEventArgs
RouteSubLeg
RouteSummary
RouteSummaryRenderEventArgs
SearchRegion SearchRequestOptions
SearchResponse
SearchResponseSummary
SearchResult
TransitLine
TransitOptions
WaypointEventArgs
WaypointRenderEventArgs
Many of the classes that are in the Bing Maps V7 control are available in the V8; however, in some cases, certain methods, properties or events have been deprecated.
getMap
reverseGeocode
setMapView
afterRouteSelectorRender
afterStepRender
afterSummaryRender
afterWaypointRender
beforeDisambiguationRender
beforeRouteSelectorRender
beforeStepRender
beforeSummaryRender
beforeWaypointRender
dragDropCompleted
itineraryStepClicked
mouseEnterRouteSelector
mouseEnterStep
mouseLeaveRouteSelector
mouseLeaveStep
routeSelectorClicked
waypointAdded
waypointRemoved
autoDisplayDisambiguation disambiguationPushpinOptions
displayTrafficAvoidanceOption
getId
getTitleAction
getTitleClickHandler
click
entitychanged
mouseenter
mouseleave
height
width
id
pushpin
titleAction
titleClickHandler
typeName
blur
focus
getImageryId
getModeLayer
getTargetBounds
getTargetCenter
getTargetHeading
getTargetMetersPerPixel
getTargetZoom
getUserLayer
getViewportX
getViewportY
isDownloadingTiles
imagerychanged
keydown
keypress
keyup
optionschanged
targetviewchanged
tiledownloadcomplete
customizeOverlays
disableKeyboardInput
disableMouseInput
disableTouchInput
disableUserInput
enableSearchLogo
fixedMapPosition
inertiaIntensity
theme
tileBuffer
useInertia
viewChangeEndDelay
handled
isPrimary
isSecondary
isTouchEvent
originalEvent
wheelDelta
dblclick
getHeight
getTypeName
getWidth
getZIndex
toString
infobox
state
htmlContent
clear
getBusinessDetails
getDisambiguationContainer
getDisambiguationResult
getShortAddress
isExactLocation
changed
geocoded
reverseGeocoded
businessDetails
disambiguationContainer
exactLocation
shortAddress
While some functions or properties may have been deprecated, a few functions or properties have breaking changes as outlined below.
When using a callback function with a TileSource object, the tile object passed to the callback used to include a
levelOfDetail property. This property has been renamed to
zoom.
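As a sketch of this rename, a V8-style uriConstructor callback reads tile.zoom where V7 code read tile.levelOfDetail. The tile server URL below is a hypothetical placeholder, not a real service:

```javascript
// V8-style uriConstructor callback. The tile object now exposes `zoom`
// (renamed from V7's `levelOfDetail`); x and y are unchanged.
// The tile server URL is a hypothetical placeholder.
function buildTileUrl(tile) {
    return 'https://example.com/tiles/' + tile.zoom + '/' + tile.x + '/' + tile.y + '.png';
}

// This callback would then be passed to the map control as:
// var tileSource = new Microsoft.Maps.TileSource({ uriConstructor: buildTileUrl });
```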
Infoboxes are no longer added to the map using map.entities.push, instead they have a
setMap function where you pass in the map instance you want to attach the infobox to.
The mkt URL parameter that is used in V7 to set the culture/language of the map is no longer supported in V8. Instead, V8 automatically detects the user's culture/language settings from their browser.
In V7 the Pushpin class allowed raw HTML to be used to create custom pushpins. You could also specify a CSS class name using the
typeName pushpin option. In V8, shapes are rendered on an HTML5 canvas, which doesn't support rendering HTML DOM elements. Bing Maps V8 uses the HTML5 canvas as it provides improved performance and the ability to create richer and more powerful
features. V8 does however provide several ways to create custom pushpins:
URL to custom image
URL to an SVG
Data URI of an image
Inline SVG string
Canvas Data URIs
Many examples are provided in the documentation and in the interactive SDK for V8.
That said it is possible to render some HTML content on a Canvas by embedding the HTML inside of an SVG using the
foreignObject tag. However, there are a number of limitations to this approach;
Interactive content such as links, inputs, and iframes are not supported.
Some interactive content may render, however it’s interactive features will no longer work as the HTML is now simply an image of the element and not the interactive element itself.
The XHTML namespace is also required in the HTML.
In terms of browser support the SVG foreignObject tag is fully supported in Edge and Firefox and partially supported in Chrome. It is meant to be partially supported in IE11 however testing shows otherwise.
Not all browsers that support the SVG foreignObject tag may render the HTML content the same way.
Due to these limitations, in most applications it is likely best to use one of the many other ways of creating custom pushpins in V8. However, if your application has a limited audience who use a specific browser that does support the
foreignObject tag, then this maybe an option.
The following code sample creates an SVG template string that contains the
foreignObject tag and a DIV with the XHTML namespace on it. It then inserts custom HTML into the template and passes it in as the
icon property of the pushpin.
<!DOCTYPE html>
<html>
<head>
    <title></title>
    <meta charset="utf-8" />
    <script type='text/javascript' src='https://www.bing.com/api/maps/mapcontrol?callback=GetMap' async defer></script>
    <script type='text/javascript'>
    var map;

    var svgTemplate = '<svg xmlns="http://www.w3.org/2000/svg" width="100" height="25"><foreignObject width="100%" height="100%"><div xmlns="http://www.w3.org/1999/xhtml">{htmlContent}</div></foreignObject></svg>';

    function GetMap() {
        map = new Microsoft.Maps.Map('#myMap', {
            credentials: 'Your Bing Maps Key'
        });

        var customHtml = '<div style="font-size:12px;border:solid 2px;background-color:LightBlue;padding:2px;">Custom Pushpin</div>';

        //Create custom Pushpin using an SVG string.
        var pin = new Microsoft.Maps.Pushpin(map.getCenter(), {
            icon: svgTemplate.replace('{htmlContent}', customHtml),
            anchor: new Microsoft.Maps.Point(25, 5)
        });

        //Add the pushpin to the map.
        map.entities.push(pin);
    }
    </script>
</head>
<body>
    <div id="myMap" style="position:relative;width:600px;height:400px;"></div>
</body>
</html>
Running this sample in a browser that supports the SVG foreignObject tag will display a blue box with the text “Custom Pushpin” inside it.
Additional Resources:
Interactive Code Samples
Additional Code Samples
Pushpin Class
Drawing DOM objects into a canvas
The EntityCollection class was introduced in Bing Maps V7 and was used to separate groups of shapes into layers. In addition to being able to add shapes to an entity collection, you could also add other entity collections. This is very different from how all of the other Bing Maps SDKs have worked, adds unneeded complexity, and is something that isn't a common scenario. This class has been deprecated in V8 and is replaced by a new
Layer class.
The Layer class is very different from the EntityCollection class. It only accepts
IPrimitive shapes (pushpins, polylines, polygons). One key feature of the Layer class in V8 is that it provides the ability to attach mouse events to the layer. These events will fire if any of the mouse events occur on any of the data in the layer. Since only one event handler is needed, development is simplified and there is also a performance benefit.
One key difference between EntityCollection and Layer is how you add them to the map. You add an
EntityCollection by using the map.entities.push function. In V8,
Layers are added using the map.layers.insert function.
The following table links the main V7 EntityCollection functions to the V8 Layer equivalents.
To minimize effort when migrating basic applications, an EntityCollection class is available in the V8 control which wraps the Layer class. This wrapper flattens all child entity collections of an
EntityCollection into a single layer. This may result in some rendering differences when compared to V7. There may also be some other unexpected behaviors when compared to V7. This class is still deprecated and migrating to the
Layer class is recommended.
Layer Class
LayerCollection Class
The infobox is one of the most common ways of displaying information when a user interacts with a pushpin. In V7, infoboxes were treated like any other shape on the map. This caused a lot of issues, the most common being that if the infobox wasn't added to the map last, other shapes would overlap it. In most, if not all, scenarios this is not desired. A common way to work around this was to create two entity collections: one for your shape data and one for the infobox. The data layer would be below the infobox layer, which would mean that no matter what order your data was added to the map, the infobox would always appear above it. In v8 this approach won't work as entity collections have been deprecated. However, knowing that in most, if not all, cases users want the infobox to appear above all other data, the way infoboxes are added to the map has changed. Instead of adding the infobox to the
map.entities or map.layers properties, or to an
EntityCollection or Layer instance, the infobox class instead has a
setMap function in which you pass a map instance to bind the infobox to. This is a breaking change, but it solves one of the most common issues developers come across when using infoboxes. Here is how infoboxes can be added to the map in V8:
var infobox = new Microsoft.Maps.Infobox(map.getCenter(), {
    title: 'Map Center',
    description: 'This is the center of the map.'
});

infobox.setMap(map);
Infobox Class
In V7, when setting the view of the map you could specify a center offset value, a Point object which specified pixel distances to offset the center of the map. This is useful if you want to programmatically pan the map by a pixel offset. For example, if you have a pushpin that opens an infobox when clicked and the pushpin is near the top of the map, offsetting the center of the map by the height of the infobox would bring the infobox into view. This feature hasn't been added to
V8 as it wasn't used very much and is fairly easy to work around. However, if there is demand, this may be added to V8 in the future.
Before: V7
map.setView({ centerOffset: new Microsoft.Maps.Point(dx, dy) });
After: V8
var cp = map.tryLocationToPixel(map.getCenter());
cp.x += dx;
cp.y -= dy;
map.setView({ center: map.tryPixelToLocation(cp) });

In V7, a reference to the DOM element that contains the map had to be passed into the Map constructor. In V8 you can instead pass a CSS selector string, such as the ID of the map container.

Before: V7

var map = new Microsoft.Maps.Map(document.getElementById('myMap'), {
    credentials: "YOUR BING MAPS KEY"
});

After: V8

var map = new Microsoft.Maps.Map('#myMap', {
    credentials: "YOUR BING MAPS KEY"
});
var shapes = [];

for (var i = 0; i < shapes.length; i++) {
    map.entities.push(shapes[i]);
}
After: v8 – Change color of Default Pushpin

var pin = new Microsoft.Maps.Pushpin(center, {
    color: 'red'
});
After: v8 – Change color of Custom Inline SVG Pushpin
var svgPushpinTemplate = '<svg xmlns="http://www.w3.org/2000/svg" width="50" height="50"><circle cx="22" cy="22" r="20" stroke="black" stroke-width="2" fill="{color}"/></svg>';

var pin = new Microsoft.Maps.Pushpin(map.getCenter(), {
    icon: svgPushpinTemplate,
    color: 'rgba(0,120,255,0.5)',
    anchor: new Microsoft.Maps.Point(22, 22)
});
Pushpin Interactive Code Samples
Additional Pushpin Code Samples.
Before: V7

polygon.setOptions({ fillColor: new Microsoft.Maps.Color(128, 255, 0, 0) });

After: V8

polygon.setOptions({ fillColor: 'rgba(255,0,0,0.5)' });
In this scenario, V8 requires 26% less code than V7, however the performance improvement of V8 is the real benefit.
Colors in V8
Color Class.
Before: V7

Microsoft.Maps.Events.addHandler(map, 'click', function (e) {
    var point = new Microsoft.Maps.Point(e.getX(), e.getY());
    var loc = e.target.tryPixelToLocation(point);
    //Do something with the location.
});

After: V8

Microsoft.Maps.Events.addHandler(map, 'click', function (e) {
    var loc = e.location;
    //Do something with the location.
});
In this scenario, V8 requires 40% less code than V7.
Events documentation
Events interactive code samples.
Before: V7

for (var i = 0; i < shapes.length; i++) {
    Microsoft.Maps.Events.addHandler(shapes[i], 'click', eventHandlerCallback);
    map.entities.push(shapes[i]);
}

After: V8

var layer = new Microsoft.Maps.Layer();
layer.add(shapes);
Microsoft.Maps.Events.addHandler(layer, 'click', eventHandlerCallback);
map.layers.insert(layer);
In this scenario, V8 requires 15% less code than V7. However, the shapes are stored in a layer which will make handling the data easier, especially if there are multiple layers of data. Additionally, the V8 solution provides a good performance benefits.
Layer class
Layer interactive code samples.
Before: V7

var polygon = new Microsoft.Maps.Polygon(/*ring data*/);

Microsoft.Maps.loadModule('Microsoft.Maps.AdvancedShapes', function () {
    map.entities.push(polygon);
});

After: V8

var polygon = new Microsoft.Maps.Polygon(/*ring data*/);
map.entities.push(polygon);
Polygon Class
Polygon interactive code.
Before: V7

Microsoft.Maps.loadModule('Microsoft.Maps.GeoJson', function () {
    Microsoft.Maps.loadModule('Microsoft.Maps.HeatMap', function () {
        //Do something with the modules.
    });
});

After: V8

Microsoft.Maps.loadModule(['Microsoft.Maps.GeoJson', 'Microsoft.Maps.HeatMap'], function () {
    //Do something with the modules.
});
V8 requires 32% less code than V7 when loading two modules at the same time. V8 would provide even more savings when loading more than two modules at a time. However, the performance improvement of V8 is the real benefit.
Modular Framework
Modular Framework interactive code samples. This is noted in section
3.4 Breaking Class Function or Property Changes of this doc.
After: V8 – X, Y, Zoom Tile Layer

var tileSource = new Microsoft.Maps.TileSource({
    uriConstructor: xyzTileUrl
});
In this scenario, V8 requires 36% less code than V7.
Before: V7 – WMS Tile Layer

var wmsTileService = '';

//A custom uriConstructor callback was needed to calculate and append each tile's bounding box.

After: V8 – WMS Tile Layer

var wmsTileService = '';

var tileSource = new Microsoft.Maps.TileSource({
    uriConstructor: wmsTileService + '{bbox}'
});
Tile Layer interactive code samples
TileSource options
When the Bing Maps V7 control was initially released there was no built-in way to geocode locations or calculate routes. As such, applications had to connect directly to the Bing Maps REST services as described in these articles in the MSDN documentation:
Geocoding a Location
Getting Route Directions.
In V8, the Autosuggest, Directions, and Search modules wrap this functionality and integrate it directly with the map control:
Autosuggest module
Autosuggest module interactive code samples
Directions module
Directions module interactive code samples
Search module
Search module interactive code samples
Bing Maps REST Services
Both the Bing Maps v7 and v8 map controls support modules, and version 8 is largely backwards compatible with v7. Many modules have been created by the developer community for v7 over the years. Many of these also work with v8, and many of their functionalities already exist in v8. However, there are a few modules in the
Bing Maps v7 Modules CodePlex project that may not be in v8 which provide additional functionality that you may require. The following table outlines the support or migration plan for these modules.
If you are using the Bing Maps REST or Spatial Data services with a Bing Maps control, you can optimize your application to reduce the number of transactions it uses. This can be done by making use of sessions. A session occurs when a map is loaded and lasts until it is unloaded or the page it is on is refreshed. To take advantage of sessions you need to generate a session key from the map. A session key is a special Bing Maps key that, when used with the Bing Maps REST services, marks all the requests as non-billable transactions. If your application provides users the ability to geocode and/or perform routing requests, sessions can drastically reduce the number of billable transactions that occur. To make things easy it is best to generate a session key right after the map loads and store it like this:
var sessionKey;

var map = new Microsoft.Maps.Map('#myMap', {
    credentials: "Your Bing Maps Key"
});

map.getCredentials(function (c) {
    sessionKey = c;
});
Once you have a session key simply use this instead of your Bing Maps keys in your REST service requests. See the
Understanding Bing Maps Transactions documentation for more details on billable and non-billable transactions.
Here are a few tips to maximize your use of sessions:
Avoid post backs that cause the page to fully reload on pages that have maps. Every time the page reloads a new session is created. Instead look at using AJAX to pull in data and make requests without refreshing the page.
Keep the user on a single page. Spreading the mapping functionality across multiple pages causes multiple page loads/refreshes which generate many sessions. In many cases, not only does keeping the user on a single page and pulling in data accordingly reduce
the number of sessions created, but it also makes for a much better user experience.
Generate a session key right after the map loads and store it in a global variable inside your app. This will save you time later in your application when you need to use it.
If you have a long running application where you expect the user to be using the app for more than 30 minutes, then generate the session key for each request to ensure the key doesn’t time out.
The following is are a few useful tips and tricks when using spatial data with the Bing Maps v8 map control.
Spatial Data in SQL
If your application is storing data in a spatial database and have created a custom web service to return the data to the webpage, send the spatial data back as Well Known Text. In SQL if you use the STAsText or the ToString methods on a SqlGeometry or SqlGeography
object, it will return a Well Known Text string. Version 8 has a Well Known Text module that can easily parse this for you. This would be a much better approach than creating custom data models for handling the spatial data. You can find documentation on this
module
here. Interactive code samples can be found
here.
Host your data in the Bing Spatial Data Services
Rather than storing your data in a database and creating a custom web service, or even worse, storing your data in flat files, upload the data to the Bing Spatial Data Services. This service will expose it for you as a spatial REST service that you can easily
connect to using the Spatial Data Services module in v8. There are many benefits to doing this;
Your hosting requirements are less.
If you use a session key from the map, the first 9 million requests to this service are non-billable which would help reduce your overall costs.
The amount of code that would need to be maintained by your development team is very small compared to a custom database/web service solution.
Here are some useful resources:
Spatial Data Services module documentation
Spatial Data Services module interactive code samples
Managing data sources through the Bing Maps portal
Bing Spatial Data Services REST API’s
Version 8 of Bing Maps provides a button that will center the map over the user’s location, however, depending on your application, you may want to do more than just this with the user’s location.
Obtaining a user’s location can easily be done using the
W3C Geolocation API. This API is exposed through the navigator.geolocation property in the browser. The browser will display a notification to the user the first time this API tries to get the users location, and ask permission to share
this data.
Example - Display user’s location
This example shows how to request the user’s location and then display it on the map using a pushpin.
credentials: ‘Your Bing Maps Key’
//Request the user's location
navigator.geolocation.getCurrentPosition(function (position) {
var loc = new Microsoft.Maps.Location(
position.coords.latitude,
position.coords.longitude);
//Add a pushpin at the user's location.
var pin = new Microsoft.Maps.Pushpin(loc);
//Center the map on the user's location.
map.setView({ center: loc, zoom: 15 });
If you run this code a notification will be displayed asking if you want to share your location. If you allow it to use your location the map will center on your location and a pushpin will be displayed.
Example – Continuously track user’s location
This example shows how to monitor the user’s location and update the position of a pushpin as the user moves.
var map, watchId, userPin;
function GetMap()
{
function StartTracking() {
//Add a pushpin to show the user's location.
userPin = new Microsoft.Maps.Pushpin(map.getCenter(), { visible: false });
map.entities.push(gpsPin);
//Watch the users location.
watchId = navigator.geolocation.watchPosition(UsersLocationUpdated);
function UsersLocationUpdated(position) {
//Update the user pushpin.
userPin.setLocation(loc);
userPin.setOptions({ visible: true });
map.setView({ center: loc });
function StopTracking() {
// Cancel the geolocation updates.
navigator.geolocation.clearWatch(watchId);
//Remove the user pushpin.
map.entities.clear();
br
input
"button"
value
"Start Continuous Tracking"
onclick
"StartTracking()"
"Stop Continuous Tracking"
"StopTracking()"
Here are some great tools that can be used to make your migration to Bing Maps easy.
Technical support
If you are a licensed Bing Maps Enterprise Customer you can contact the Bing Maps Enterprise support team for assistance with any technical issue you have. They are available by email during EU and NA business hours. You can find contact details for the
support team
here.
Developer Forums
The Bing Maps developer forums is also a good place to find migration assistance, especially if you are not an Enterprise customer with access to the Enterprise Support team. The Bing Maps developer forums are regularly monitored by community developers
and by members of the Bing Maps team. You can find the Bing Maps developer forums
here.
Licensing Queries
If you have licensing related questions you should take them to your Bing Maps account manager if you know who they are. If not, you can send queries to the Bing Maps licensing team and they will assist. For queries inside of North or South America you can
contact them at maplic at microsoft dot com. For queries in the rest of the world you can contact them at mapemea at microsoft dot com.
The Bing Maps blog is where the Bing Maps team announces any new features about Bing Maps. In addition to this regular technical posts showing how to do new and interesting things with the Bing Maps controls are made available. You can find the blog
here.
The following describes how to test an existing website that uses Bing Maps v7 against v8 by redirecting the request for the map control to v8. This is a quick and easy way to check what features will require migration beyond updating the map script URL.
Note that is solution only works for websites that use HTTP and not HTTPS.
If you don’t already have Fiddler installed, download and install it from here:
When you first open Fiddler you may see an information dialog about the isolation technology in Windows which may interfere with the app. You can ignore this.
From the right side panel, select the AutoResponder tab.
Check both the Enable Rules and the Unmatched requests passthrough checkboxes
Press Add Rule. In the Rule Editor, copy and paste the following two lines into the two textboxes and press
Save.
With the rule checked, go to the webpage that has your v7 map and test your application. If you are already on that page, refresh the page. There are a few signs you can use to verify if your webpage is using V8;
The Bing logo will have a capital “B”.
The navigation bar, if displayed will look much different.
If you inspect the DOM of the map div you will find that it contains HTML5 canvas objects.
If you notice any issues, open the browsers developer tools by pressing F12. Errors will generally appear in the console and can often provide an indication of where the source of an issue is. Using the developer tools, you can also add breakpoints in your
application code and step through the code, line-by-line, by pressing F10. This is a good way to find which line of code is causing an issue. Once the line or block of code that is having issues is identified, check this migration guide to verify that the
feature in question is not deprecated, and if it is, if there is a new alternative solution available. | https://social.technet.microsoft.com/wiki/contents/articles/34563.bing-maps-v7-to-v8-migration-guide.aspx | CC-MAIN-2018-39 | refinedweb | 5,634 | 53.31 |
Working with targets
Regardless of the output (esnext or es5) module resolution must be set to
commonjs. It's not because FuseBox cannot handle
imports it's because FuseBox
mimics
require statements in development (that's why FuseBox is so fast - it's
not altering the code).
As FuseBox is tightly coupled with TypeScript, there are a few tricks you can apply to make your life easier. If you are using Babel - you shoulds still install TypeScript as it will be used to transpile npm modules. More about it here
tsconfig.json is being scanned from the current
homeDir and higher up, if
it's not found, FuseBox generates in-memory configuration, which looks like
this:
{ "compilerOptions": { "module": "commonjs", "target": "es6" } }
Once the config is created, you cannot override
"module": "commonjs" it will
always be set to use
commonjs.
Take a look a target option.
FuseBox.init({ target: "browser@es5", });
If the above configuration is set, FuseBox will override
compilerOptions.target in your
tsconfig.json making it easier to control the
output. It will also take care of all npm modules. For example if you are
importing a library and the script target is not
es5 FuseBox will transpile it
down.
An alternative way of overriding
tsconfig.json is using
tsConfig option
FuseBox.init({ tsConfig: [{ target: `es5` }], });
UglifyingUglifying
Make sure you have
uglify-js and/or
uglify-es installed. If your target is
higher than
es5, Quantum will use
uglify-es. More about it
here You can control that manually too.
Working with BabelWorking with Babel
So you use
Babel instead of
TypeScript but you still should install it.
Don't worry you don't need to do anything, FuseBox will use TypeScript under the
hood to help matching your script target.
For example if you are importing a library and the script target is not
es5
FuseBox will transpile it down. If you target is
es6, FuseBox won't touch it
(unless
import statements are used - it will transpile it using
es6 target -
respecting your script level choice)
Server bundlesServer bundles
When working with server bundle, it's highly recommended to use the latest
NodeJS and
server@esnext option. If you want to take advantage of the new
syntax, and it make it blazing fast it's imperative to use the latest tech.
FuseBox.init({ target: `server@esnext`, });
Why can't we use Babel to transpile npm modules?Why can't we use Babel to transpile npm modules?
Because
TypeScript is much much faster and it's easy to change the script
target. By the way, if you are interested, FuseBox can transpile javascript
using TypeScript. Read up on
useTypescriptCompiler
option | https://fuse-box.org/docs/3.6.0/guides/working-with-targets | CC-MAIN-2021-25 | refinedweb | 442 | 64.41 |
in reply to
Re^2: Is there any XML reader like this?
in thread Is there any XML reader like this?
I.)
I'm have no idea why you call XML::LibXML a monster compared to XML::Simple.
Here's one reason:
XML::LibXML->load: specify location, string, or IO at C:\test\xml1.pl
+line 7
[download]
This is line 7:
my $root = XML::LibXML->load_xml( fh => \*DATA )->documentElement;
[download]
So now you've got to wade through the 32 separate pages of XML::LibXML POD to work out why!
I never have that problem with XML::Simple.
The start of some sanity?.
And that's not even mentioning the fact that XML::LibXML is 20x faster
BTW. Even that factually correct claim only tells half the story. Generate a simple and fairly modest XML file using this:
#! perl -slw
use strict;
$|++;
our $S //= '999';
our $I //= 10;
open O, '>', 'junk.xml';
print O '<servers>';
for my $s ( '0001' .. $S ) {
printf "\r%s", $s;
print O "<station$s>";
print O '<ip>', join('.', unpack 'C4', pack 'N', int( rand 2**32 )
+ ), '</ip>' for 1 .. $I;
print O "</station$s>";
};
print O '</servers>';
close O;
[download]
Like this:
C:\test>xmlgen -S=9999
9999
C:\test>dir junk.xml
15/01/2012 12:40 2,424,205 junk.xml
[download]
Now run XML::Simple & XML::LibXML scripts that parse that file and iterate the contents and time them:
C:\test>xmllib junk.xml
Parsing took 0.290895 seconds
Iteration took 171.657306 seconds
Total took 171.959000 seconds
Check mem:63.6MB
C:\test>xmlsimple junk.xml
Parsing took 38.202000 seconds
Iteration took 0.059186 seconds
Total took 38.262577 seconds
Check mem:142MB
[download]
All the time you gained during parsing, you throw away four-fold when accessing the data through the nightmare interface of OO baloney.
And if you double the file size:
C:\test>xmlgen -S=19999
19999
C:\test>dir junk.xml
15/01/2012 12:58 4,868,440 junk.xml
[download]
And now LibXML takes 8 times as long:
C:\test>xmllib junk.xml
Parsing took 0.560000 seconds
Iteration took 676.238758 seconds
Total took 676.802000 seconds
Check mem:107MB
C:\test>xmlsimple junk.xml
Parsing took 75.078000 seconds
Iteration took 0.124583 seconds
Total took 75.209615 seconds
Check mem:254MB
[download]
Increase the file size 10-fold and LIbXML will take 100 time longer.
Now look carefully at the split times. XML::Simple's parsing time is slow, but linear with the file size. It's traversal time is extremely fast and also linear.
Conversely, LibXML's parsing time is very fast and linear; but it's traversal time is horribly slow and quadratic with the file size.
It is easy to see which one wins in the speed stakes.
Not an especially compelling case without posting the source code for the "XML::Simple & XML::LibXML scripts that parse that file and iterate the contents".
Sorry, they are the same scripts as published earlier in the thread with the addition of a couple of timing points.
But here ya go. Using LibXML:
#! perl -slw
use strict;
use Data::Dump qw[ pp ];
use Time::HiRes qw[ time ];
use XML::LibXML;
open XML, '<', $ARGV[0] or die $!;
my $start = time;
my $root = XML::LibXML->load_xml( IO => \*XML )->documentElement;
printf "Parsing took %.6f seconds\n", time - $start;
my $start2 = time;
for my $station ($root->findnodes('*')) {
my $x = $station->nodeName;
for my $ip ( $station->findnodes('ip') ) {
$x = $ip->textContent;
}
}
printf "Iteration took %.6f seconds\n", time - $start2;
printf "Total took %.6f seconds\n", time - $start;
printf 'Check mem:'; <STDIN>;
[download]
And XML::Simple:
#! perl -slw
use strict;
use Data::Dump qw[ pp ];
use Time::HiRes qw[ time ];
use XML::Simple;
open XML, '<', $ARGV[0] or die $!;
my $start = time;
my $stations = XMLin( \*XML, ForceArray => [ 'ip'], NoAttr => 1 );
printf "Parsing took %.6f seconds\n", time - $start;
my $start2 = time;
for my $station ( keys %$stations ) {
my $x = $station;
for my $ip ( @{ $stations->{ $station }{ip} } ) {
$x = $ip;
}
}
printf "Iteration took %.6f seconds\n", time - $start2;
printf "Total took %.6f seconds\n", time - $start;
printf 'Check mem:'; <STDIN>;
[download]
It is easy to see which one wins in the speed stakes.
Yeah, LibXML. My tests *included* the time it took to extract the data from the tree. The test was done with real world data of various size from three different providers.
We use XML::Bare with a thin layer to compensate for it's awful interface (XML::Simple without ForceArray or any other option), its expectation of getting decoded text, and it's lack of namespace support. It's slightly faster when you factor in the time it takes to extract data. Not nearly as capable as libxml, and we had to create an interface just to be able to use it.
Yeah, LibXML. My tests *included* the time it took to extract the data from the tree.
Hm. So did mine. But I believe mine.
We use XML::Bare with a thin layer to compensate for it's awful interface (XML::Simple without ForceArray or any other option)
Hm. XML::Bare::forcearray( [noderef] )
S'funny init. It took less than a minute to disprove that. And after 5 minutes, I'm pretty sure I could use XML::Bare to read a file and get access to its content.
Conversely, when I tried to look up getDocumentElement, I completely crapped out after about an hour. You applied it to the return from load_xml() which is labelled $dom. So look in DOM. Nada. Maybe a Node. Nada. How about a parser, or a nodelist or a namespace? Nada, nada, nada!
For me:
That means small.
That means the first page shows me enough to get something working.
Details, refinements and esoterica can be deferred to secondary pages if that cannot be avoided.
That means, it starts by splitting the documentation along vertical lines. Ie. The way people need to use the interface. Eg, Read an XML; or write an XML; or edit an XML. etc. Not horizontally according to some arbitrary way the author decided to structure his code.
And it means starting with the basics in the root document, in the form of simple -- but complete -- worked examples of the main modes of use. And leaving the esoteric details for (preferably linked (and links that actually work)) secondary pages.
Not hitting the user in the face with a top level synopsis that contain every possible variation of the constructor and no indication of where to go from there.
XML::LibXML fails on every count.
Can we stop now, because we are once again doing nothing to help the OP; nor each | http://www.perlmonks.org/?node_id=947849 | CC-MAIN-2015-18 | refinedweb | 1,115 | 76.52 |
# Yo, Ho, Ho, And a Bottle of Rum — Or How We Analyzed Storm Engine's Bugs
PVS-Studio is a static analysis tool that helps find errors in software source code. This time PVS-Studio looked for bugs in Storm Engine's source code.
### Storm Engine
Storm Engine is a gaming engine that Akella has been developing since January 2000 for the Sea Dogs game series. The engine became open-source on March 26th, 2021. The source code is available on [GitHub](https://github.com/storm-devs/storm-engine) under the GPLv3 license. Storm Engine is written in C++.
In total, PVS-Studio issued 235 high-level warnings and 794 medium-level warnings. Many of these warnings point to bugs that may cause undefined behavior. Others reveal logical errors - the program runs, but the result may not be what was expected.
Examining each of the 1029 errors PVS-Studio discovered - especially those that involve the project's architecture - would take up an entire book that is difficult to write and read. In this article, I'll review more obvious and on-the-surface-type errors that do not require delving deep into the project's source code.
### Detected Errors
#### Redundant Checks
PVS-Studio warns: [V547](https://pvs-studio.com/en/w/v547/) Expression 'nStringCode >= 0xffffff' is always false. dstring\_codec. h 84
```
#define DHASH_SINGLESYM 255
....
uint32_t Convert(const char *pString, ....)
{
uint32_t nStringCode;
....
nStringCode = ((((unsigned char)pString[0]) << 8) & 0xffffff00) |
                (DHASH_SINGLESYM);
....
if (nStringCode >= 0xffffff)
{
__debugbreak();
}
return nStringCode;
}
```
Let's evaluate the expression assigned to the *nStringCode* variable. The *unsigned char* type takes values in the range *[0, 255]*, so *(unsigned char)pString[0]* is always less than *2^8*. After the left shift by *8*, the result stays below *2^16*, and the '&' mask does not increase it. The bitwise OR with *DHASH\_SINGLESYM* only sets the low byte, which the shift left zeroed, so the value remains below *2^16*. Thus, *nStringCode* never exceeds *0xFFFF* and is always less than *0xffffff = 2^24 - 1*. The check is always false and is of no use. At first glance, it would seem that we can safely remove it:
```
#define DHASH_SINGLESYM 255
....
uint32_t Convert(const char *pString, ....)
{
uint32_t nStringCode;
....
nStringCode = ((((unsigned char)pString[0]) << 8) & 0xffffff00) |
                (DHASH_SINGLESYM);
....
return nStringCode;
}
```
But let's not rush into anything. Obviously, the check is here for a reason. The developers may have expected the expression or the *DHASH\_SINGLESYM* constant to change in the future. This example demonstrates a case when the analyzer is technically correct, but the code fragment that triggered the warning might not require fixing.
PVS-Studio warns: [V560](https://pvs-studio.com/en/w/v560/) A part of conditional expression is always true: 0x00 <= c. utf8.h 187
```
inline bool IsValidUtf8(....)
{
int c, i, ix, n, j;
for (i = 0, ix = str.length(); i < ix; i++)
{
c = (unsigned char)str[i];
if (0x00 <= c && c <= 0x7f)
n = 0;
....
}
....
}
```
The *c* variable receives a value cast to *unsigned char*, so it is never negative, and the *0x00 <= c* check can be removed as unnecessary. The fixed code:
```
inline bool IsValidUtf8(....)
{
int c, i, ix, n, j;
for (i = 0, ix = str.length(); i < ix; i++)
{
c = (unsigned char)str[i];
if (c <= 0x7f)
n = 0;
....
}
....
}
```
#### Reaching Outside Array Bounds
PVS-Studio warns: [V557](https://pvs-studio.com/en/w/v557/) Array overrun is possible. The value of 'TempLong2 - TempLong1 + 1' index could reach 520. internal\_functions.cpp 1131
```
DATA *COMPILER::BC_CallIntFunction(....)
{
if (TempLong2 - TempLong1 >= sizeof(Message_string))
{
SetError("internal: buffer too small");
pV = SStack.Push();
pV->Set("");
pVResult = pV;
return pV;
}
memcpy(Message_string, pChar + TempLong1,
TempLong2 - TempLong1 + 1);
Message_string[TempLong2 - TempLong1 + 1] = 0;
pV = SStack.Push();
}
```
Here the analyzer helped find the off-by-one error.
The function above first makes sure that the *TempLong2 - TempLong1* value is less than the *Message\_string* length. Then the *Message\_string[TempLong2 - TempLong1 + 1]* element takes the 0 value. Note that if *TempLong2 - TempLong1 + 1 == sizeof(Message\_string)*, the check is successful and the internal error is not generated. However, the *Message\_string[TempLong2 - TempLong1 + 1]* element is out of bounds. When this element is assigned a value, the function accesses unreserved memory. This causes undefined behavior. You can fix the check as follows:
```
DATA *COMPILER::BC_CallIntFunction(....)
{
if (TempLong2 - TempLong1 + 1 >= sizeof(Message_string))
{
SetError("internal: buffer too small");
pV = SStack.Push();
pV->Set("");
pVResult = pV;
return pV;
}
memcpy(Message_string, pChar + TempLong1,
TempLong2 - TempLong1 + 1);
Message_string[TempLong2 - TempLong1 + 1] = 0;
pV = SStack.Push();
}
```
#### Assigning a Variable to Itself
PVS-Studio warns: [V570](https://pvs-studio.com/en/w/v570/) The 'Data\_num' variable is assigned to itself. s\_stack.cpp 36
```
uint32_t Data_num;
....
DATA *S_STACK::Push(....)
{
if (Data_num > 1000)
{
Data_num = Data_num;
}
....
}
```
Someone may have written this code for debugging purposes and then forgot to remove it. Instead of a new value, the *Data\_num* variable receives its own value. It is difficult to say what the developer wanted to assign here. I suppose *Data\_num* should have received a value from a different variable with a similar name, but the names got mixed up. Alternatively, the developer may have intended to limit the *Data\_num* value to the 1000 constant but made a typo. In any case, there's a mistake here that needs to be fixed.
#### Dereferencing a Null Pointer
PVS-Studio warns: [V595](https://pvs-studio.com/en/w/v595/) The 'rs' pointer was utilized before it was verified against nullptr. Check lines: 163, 164. Fader.cpp 163
```
uint64_t Fader::ProcessMessage(....)
{
....
textureID = rs->TextureCreate(_name);
if (rs)
{
rs->SetProgressImage(_name);
....
}
```
In the code above, the *rs* pointer is first dereferenced, and then evaluated against *nullptr*. If the pointer equals *nullptr*, the null pointer's dereference causes undefined behavior. If this scenario is possible, it is necessary to place the check before the first dereference:
```
uint64_t Fader::ProcessMessage(....)
{
....
if (rs)
{
textureID = rs->TextureCreate(_name);
rs->SetProgressImage(_name);
....
}
```
If the scenario guarantees that *rs != nullptr* is always true, then you can remove the unnecessary *if (rs)* check:
```
uint64_t Fader::ProcessMessage(....)
{
....
textureID = rs->TextureCreate(_name);
rs->SetProgressImage(_name);
....
}
```
There's also a third possible scenario. Someone could have intended to check the *textureID* variable.
Overall, I encountered 14 of the V595 warnings in the project.
If you are curious, [download and start PVS-Studio](https://pvs-studio.com/en/pvs-studio/download/), analyze the project and review these warnings. Here I'll limit myself to one more example:
PVS-Studio warns: [V595](https://pvs-studio.com/en/w/v595/) The 'pACh' pointer was utilized before it was verified against nullptr. Check lines: 1214, 1215. sail.cpp 1214
```
void SAIL::SetAllSails(int groupNum)
{
....
SetSailTextures(groupNum, core.Event("GetSailTextureData",
"l", pACh->GetAttributeAsDword("index", -1)));
if (pACh != nullptr){
....
}
```
When calculating the *Event* method's arguments, the author dereferences the *pACh* pointer. Then, in the next line, the *pACh* pointer is checked against *nullptr*. If the pointer can take the null value, the if-statement that checks *pACh* for *nullptr* must come before the *SetSailTextures* function call that prompts pointer dereferencing.
```
void SAIL::SetAllSails(int groupNum)
{
....
if (pACh != nullptr){
SetSailTextures(groupNum, core.Event("GetSailTextureData",
"l", pACh->GetAttributeAsDword("index", -1)));
....
}
```
If *pACh* can never be null, you can remove the check:
```
void SAIL::SetAllSails(int groupNum)
{
....
SetSailTextures(groupNum, core.Event("GetSailTextureData",
"l", pACh->GetAttributeAsDword("index", -1)));
....
}
```
#### new[] – delete Error
PVS-Studio warns: [V611](https://pvs-studio.com/en/w/v611/) The memory was allocated using 'new T[]' operator but was released using the 'delete' operator. Consider inspecting this code. It's probably better to use 'delete [] pVSea;'. Check lines: 169, 191. SEA.cpp 169
```
struct CVECTOR
{
public:
union {
struct
{
float x, y, z;
};
float v[3];
};
};
....
struct SeaVertex
{
CVECTOR vPos;
CVECTOR vNormal;
float tu, tv;
};
....
#define STORM_DELETE(x) \
    { delete x; x = 0; }
void SEA::SFLB_CreateBuffers()
{
....
pVSea = new SeaVertex[NUM_VERTEXS];
}
SEA::~SEA() {
....
STORM_DELETE(pVSea);
....
}
```
Using macros requires special care and experience. In this case a macro causes an error: the incorrect *delete* operator - instead of the correct *delete[]* operator - releases the memory that the *new[]* operator allocated. As a result, the code won't call destructors for the *pVSea* array elements. In some cases, this won't matter - for example, when all destructors of both array elements and their fields are trivial.
However, if the error does not show up at runtime, it does not mean there isn't one. The key here is how the *new[]* operator is defined. In some cases, calling the *new[]* operator allocates memory for the array and also writes the block's size and the number of elements at its beginning. If the developer then uses the *delete* operator, which is incompatible with *new[]*, it is likely to misinterpret that bookkeeping information, and the result of such an operation is undefined. There is another possible scenario: memory for arrays and single elements is allocated from different memory pools. In that case, attempting to return memory allocated for arrays back to the pool intended for scalars will result in a crash.
This error is dangerous, because it may not manifest itself for a long time, and then shoot you in the foot when you least expect it. The analyzer found a total of 15 errors of this type. Here are some of them:
* V611 The memory was allocated using 'new T[]' operator but was released using the 'delete' operator. Consider inspecting this code. It's probably better to use 'delete [] m\_pShowPlaces;'. Check lines: 421, 196. ActivePerkShower.cpp 421
* V611 The memory was allocated using 'new T[]' operator but was released using the 'delete' operator. Consider inspecting this code. It's probably better to use 'delete [] pTable;'. Check lines: 371, 372. AIFlowGraph.h 371
* V611 The memory was allocated using 'new T[]' operator but was released using the 'delete' operator. Consider inspecting this code. It's probably better to use 'delete [] vrt;'. Check lines: 33, 27. OctTree.cpp 33
* V611 The memory was allocated using 'new T[]' operator but was released using the 'delete' operator. Consider inspecting this code. It's probably better to use 'delete [] flist;'. Flag.cpp 738
* V611 The memory was allocated using 'new T[]' operator but was released using the 'delete' operator. Consider inspecting this code. It's probably better to use 'delete [] rlist;'. Rope.cpp 660
Analysis showed that many of the cases above involve the *STORM\_DELETE* macro. However, a simple change from *delete* to *delete[]* will lead to new errors, because the macro is also intended to free the memory that the *new* operator allocated. To fix this code, add a new macro - *STORM\_DELETE\_ARRAY* - that uses the correct operator, *delete[]*.
```
struct CVECTOR
....
struct SeaVertex
{
CVECTOR vPos;
CVECTOR vNormal;
float tu, tv;
};
....
#define STORM_DELETE(x) \
    { delete x; x = 0; }
#define STORM_DELETE_ARRAY(x) \
    { delete[] x; x = 0; }
void SEA::SFLB_CreateBuffers()
{
....
pVSea = new SeaVertex[NUM_VERTEXS];
}
SEA::~SEA() {
....
STORM_DELETE_ARRAY(pVSea);
....
}
```
#### A Double Assignment
PVS-Studio warns: [V519](https://pvs-studio.com/en/w/v519/) The 'h' variable is assigned values twice successively. Perhaps this is a mistake. Check lines: 385, 389. Sharks.cpp 389
```
inline void Sharks::Shark::IslandCollision(....)
{
if (h < 1.0f)
{
h -= 100.0f / 150.0f;
if (h > 0.0f)
{
h *= 150.0f / 50.0f;
}
else
h = 0.0f;
h = 0.0f;
vx -= x * (1.0f - h);
vz -= z * (1.0f - h);
}
```
Take a look at the body of the *if (h < 1.0f)* statement in the code above. First, the developer calculates the *h* variable's value, and then sets it to *0*. As a result, the *h* variable is always *0*, which is an error. To fix the code, remove the *h* variable's second assignment:
```
inline void Sharks::Shark::IslandCollision(....)
{
if (h < 1.0f)
{
h -= 100.0f / 150.0f;
if (h > 0.0f)
{
h *= 150.0f / 50.0f;
}
else
h = 0.0f;
vx -= x * (1.0f - h);
vz -= z * (1.0f - h);
}
```
#### Dereferencing a Pointer from realloc or malloc Function
PVS-Studio warns: [V522](https://pvs-studio.com/en/w/v522/) There might be dereferencing of a potential null pointer 'pTable'. Check lines: 36, 35. s\_postevents.h 36
```
void Add(....)
{
....
pTable = (S_EVENTMSG **)realloc(
pTable, nClassesNum * sizeof(S_EVENTMSG *));
pTable[n] = pClass;
....
};
```
When there's a lack of memory, the *realloc* function fails to extend a memory block to the required size and returns *NULL*. Then the *pTable[n]* expression attempts to dereference this null pointer and causes undefined behavior. Moreover, the *pTable* pointer is rewritten, which is why the address of the original memory block may be lost. To fix this error, add a check and use an additional pointer:
```
void Add(....)
{
....
S_EVENTMSG ** newpTable
= (S_EVENTMSG **)realloc(pTable,
nClassesNum * sizeof(S_EVENTMSG *));
if(newpTable)
{
pTable = newpTable;
pTable[n] = pClass;
....
}
else
{
// Handle the scenario of realloc failing to reallocate memory
}
};
```
PVS-Studio found similar errors in scenarios that involve the *malloc* function:
PVS-Studio warns: [V522](https://pvs-studio.com/en/w/v522/) There might be dereferencing of a potential null pointer 'label'. Check lines: 116, 113. geom\_static.cpp 116
```
GEOM::GEOM(....) : srv(_srv)
{
....
label = static_cast<LABEL *>(srv.malloc(sizeof(LABEL) *
rhead.nlabels));
for (long lb = 0; lb < rhead.nlabels; lb++)
{
label[lb].flags = lab[lb].flags;
label[lb].name = &globname[lab[lb].name];
label[lb].group_name = &globname[lab[lb].group_name];
memcpy(&label[lb].m[0][0], &lab[lb].m[0][0],
sizeof(lab[lb].m));
memcpy(&label[lb].bones[0], &lab[lb].bones[0],
sizeof(lab[lb].bones));
memcpy(&label[lb].weight[0], &lab[lb].weight[0],
sizeof(lab[lb].weight));
}
}
```
This code needs an additional check (hoisting the check out of the loop would be cleaner still):
```
GEOM::GEOM(....) : srv(_srv)
{
....
  label = static_cast<LABEL*>(srv.malloc(sizeof(LABEL) *
                                         rhead.nlabels));
  for (long lb = 0; lb < rhead.nlabels; lb++)
  {
    if (label)
    {
      label[lb].flags = lab[lb].flags;
      label[lb].name = &globname[lab[lb].name];
      label[lb].group_name = &globname[lab[lb].group_name];
      memcpy(&label[lb].m[0][0], &lab[lb].m[0][0],
             sizeof(lab[lb].m));
      memcpy(&label[lb].bones[0], &lab[lb].bones[0],
             sizeof(lab[lb].bones));
      memcpy(&label[lb].weight[0], &lab[lb].weight[0],
             sizeof(lab[lb].weight));
    }
....
}
}
```
Overall, the analyzer found 18 errors of this type.
Wondering what these errors can lead to and why you should avoid them? See [this article](https://pvs-studio.com/en/blog/posts/cpp/0558/) for answers.
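The safe pattern above can be distilled into a small reusable helper. The sketch below is ours, not Storm Engine code (the `grow` name is hypothetical); it only illustrates the reallocate-through-a-temporary idiom:

```cpp
#include <cassert>
#include <cstdlib>

// Hypothetical helper: reallocate through a temporary so that, on failure,
// the original block is neither dereferenced nor leaked.
bool grow(void** block, std::size_t newSize)
{
    void* tmp = std::realloc(*block, newSize);
    if (tmp == nullptr)
        return false;   // *block is still valid and still owned by the caller
    *block = tmp;       // only overwrite the pointer on success
    return true;
}
```

On success the caller's pointer is updated in place; on failure the caller still owns the old, valid block, so error handling stays local and nothing leaks.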
#### Modulo 1 Remainder
PVS-Studio warns: [V1063](https://pvs-studio.com/en/w/v1063/) The modulo by 1 operation is meaningless. The result will always be zero. WdmSea.cpp 205
```
void WdmSea::Update(float dltTime)
{
long whiteHorses[1];
....
wh[i].textureIndex = rand() % (sizeof(whiteHorses) / sizeof(long));
}
```
In the code above, the developer computes the *whiteHorses* array's size and applies the modulo operation to it. Since the array size equals 1, the result of this modulo operation is always *0*, so the operation is meaningless. The author may have made a mistake when declaring *whiteHorses*: perhaps the array's size needed to be different. It is also possible that there is no mistake and the *rand() % (sizeof(whiteHorses) / sizeof(long))* expression is deliberate, written so that the code keeps generating a valid random index if the *whiteHorses* array size changes in the future. Whether the developer wrote this code on purpose or by accident, it's worth rechecking, and that's exactly what the analyzer calls for.
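To illustrate the "future scenario" reading, here is a compilable sketch (the array size of 4 is invented for the example, not taken from the engine): the same sizeof-based modulo keeps producing valid indices no matter what size the array is declared with.

```cpp
#include <cassert>
#include <cstddef>
#include <cstdlib>

// Hypothetical stand-in for the engine's array; if its declared size is
// ever changed, the index computation below stays correct automatically.
static long whiteHorses[4];

std::size_t randomHorseIndex()
{
    const std::size_t count = sizeof(whiteHorses) / sizeof(whiteHorses[0]);
    return static_cast<std::size_t>(std::rand()) % count;  // always in [0, count)
}
```

With a size of 1, as in the Storm Engine code, this function would always return 0, which is exactly what V1063 flags.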
#### std::vector vs std::deque
Aside from detecting obvious errors and inaccuracies in code, the PVS-Studio analyzer helps optimize code.
PVS-Studio warns: [V826](https://pvs-studio.com/en/w/v826/) Consider replacing the 'aLightsSort' std::vector with std::deque. Overall efficiency of operations will increase. Lights.cpp 471
```
void Lights::SetCharacterLights(....)
{
  std::vector<long> aLightsSort;
  for (i = 0; i < numLights; i++)
    aLightsSort.push_back(i);
  for (i = 0; i < aMovingLight.size(); i++)
  {
    const auto it = std::find(aLightsSort.begin(), aLightsSort.end(),
                              aMovingLight[i].light);
    aLightsSort.insert(aLightsSort.begin(), aMovingLight[i].light);
  }
}
```
The code above initializes *std::vector* *aLightsSort*, and then inserts elements at its beginning.
Why is inserting many elements at the beginning of *std::vector* a bad idea? Because every such insertion shifts all the existing elements one position to the right, and may additionally trigger a buffer reallocation and a full copy. Writing a new value in front of the buffer's zeroth element is simply not an operation *std::vector* supports.
*std::deque*, however, supports it. This container is typically implemented as a sequence of fixed-size blocks indexed by a small map, which lets it add and remove elements at either end in amortized constant time, without moving the existing elements.
This is why this code benefits from replacing *std::vector* with *std::deque*:
```
void Lights::SetCharacterLights(....)
{
  std::deque<long> aLightsSort;
  for (i = 0; i < numLights; i++)
    aLightsSort.push_back(i);
  for (i = 0; i < aMovingLight.size(); i++)
  {
    const auto it = std::find(aLightsSort.begin(), aLightsSort.end(),
                              aMovingLight[i].light);
    aLightsSort.push_front(aMovingLight[i].light);
  }
}
```
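As a quick sanity check that the replacement is behavior-preserving, the sketch below (ours, not from the engine) verifies that `push_front` on a deque yields the same element order as the O(n) insert-at-begin on a vector:

```cpp
#include <algorithm>
#include <cassert>
#include <deque>
#include <vector>

// Front-insertion into a deque produces the same ordering as insert-at-begin
// on a vector, but without shifting the existing elements each time.
bool sameOrdering()
{
    std::vector<long> v;
    std::deque<long> d;
    for (long x : {1L, 2L, 3L})
    {
        v.insert(v.begin(), x);  // shifts all existing elements each time
        d.push_front(x);         // amortized O(1), no element moves
    }
    return v.size() == d.size() &&
           std::equal(v.begin(), v.end(), d.begin());
}
```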
### Conclusion
PVS-Studio found that the Storm Engine source code contains many errors and code fragments that need revision. Many warnings pointed to code the developers had already tagged as needing revision; those spots could have been caught earlier by static analysis or during code review. Other warnings pointed to errors not marked with any comment, which means the developers didn't suspect anything was wrong there. All the errors examined in this article come from that second list. If Storm Engine and its errors intrigued you, you can take this journey yourself. I also invite you to take a look at [these select articles about projects whose source code we checked](http://www.pvs-studio.com/en/inspections/), where my colleagues discuss the analysis results and errors.
Question:
I am doing some research into common errors and poor assumptions made by junior (and perhaps senior) software engineers.
What was your longest-held assumption that was eventually corrected?
For example, I long assumed that the size of an integer is standard, when in fact it depends on the language and target. A bit embarrassing to state, but there it is.
Be frank; what firm belief did you have, and roughly how long did you maintain the assumption? It can be about an algorithm, a language, a programming concept, testing, or anything else about programming, programming languages, or computer science.
Solution:1
That XML namespaces (or worse, well formedness) are in some way more difficult than trying to do without them.
A very common blunder, even at the W3C!
Solution:2
My incorrect assumption: That while there's always some room for improvement, in my case, I am pretty much as good a programmer as I can be.
When I first got out of college, I'd already been programming C for 6 years, knew all about "structured programming", thought "OO" was just a fad, and thought "man, I am good!!"
10 years later, I was thinking "OK, back then I was nowhere near as good as I thought I was... now I get the ideas of polymorphism and how to write clean OO programs... now I'm really good".
So somehow, I was always really good, yet also always getting way better than I was earlier.
The penny dropped not long after that and I finally have "some" humility. There's always more to learn (have yet to write a proper program in a purely functional language like Haskell).
Solution:3
I think I was 10 years old when someone convinced me that there will be a computer capable of running an infinite loop in under 3 seconds.
Solution:4
In C++, for a long time I thought the compiler rejects your code when you give a definition for a pure virtual method.
I was astonished when I realized I was mistaken.
Many times when I tell someone else to give a default implementation of the pure virtual destructor of their abstract class, they look back at me with BIG eyes. And I know from there that a long discussion will follow... It seems to be a common belief among C++ beginners (as I consider myself too; I am still learning!)
wikipedia link to c++'s pure virtual methods
Solution:5
I was convinced, for at least 6 years, that every problem had exactly 1 solution.
Utterly unaware of multiple algorithms with differing complexities, space/time tradeoffs, OOP vs. Functional vs. Imperative, levels of abstraction and undecidable problems. When that blissful naivety broke, it opened up a world of possibilities and slammed the door on simply sitting down and building things. Took me a long time to figure out how to just pick one and run with it.
Solution:6
As an old procedural programmer, I didn't really understand OO when I first started programming in Java for a hobby project. I wrote lots of code without really understanding the point of interfaces, tried to maximize code re-use by forcing everything into an inheritance hierarchy, and wished Java had multiple inheritance when things wouldn't fit cleanly into one hierarchy. My code worked, but I wince at that early stuff now.
When I started reading about dynamic languages and trying to figure out a good one to learn, reading about Python's significant whitespace turned me off - I was convinced that I would hate that. But when I eventually learned Python, it became something I really like. We generally make the effort in whatever language to have consistent indent levels, but get nothing for it in return (other than the visual readability). In Python, I found that I wasn't doing any more effort than I had before with regard to indent levels, and Python handled what I'd been having to use braces or whatever for in other languages. It makes Python feel cleaner to me now.
Solution:7
G'day,
That I'd be just designing and writing code.
No requirements gathering, documentation or supporting.
cheers,
Solution:8
- My co-workers were/are producing supposedly bad code because they sucked/suck. It took me a while to learn that I should first check what really happened. Most of the time, bad code was caused by lack of management, customers who didn't want to check what they really wanted and started changing their minds like there's no tomorrow, or other circumstances out of anyone's control, like an economic crisis.
- Customers demand "for yesterday" features because they are stupid: Not really. It's about communication. If someone tells them everything can really be done in 1 week, guess what? They'll want it in 1 week.
- "Never change code that works". This is not a good thing IMO. You obviously don't have to change what's really working. However, if you never change a piece of code because it's supposedly working and it's too complex to change, you may end up finding out that code isn't really doing what it's supposed to do. Eg: I've seen a sales commission calculation software doing wrong calculations for two years because nobody wanted to maintain the software. Nobody at sales knew about it. The formula was so complex they didn't really know how to check the numbers.
Solution:9
never met with integer promotion before... and thought that 'z' would hold 255 in this code:
unsigned char x = 1;
unsigned char y = 2;
unsigned char z = abs(x - y);
correct value of z is 1
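A compilable sketch of the promotion this answer describes (C++ shown; C behaves the same way for these operands):

```cpp
#include <cassert>
#include <cstdlib>

// Both operands of x - y are promoted to int before the subtraction,
// so the intermediate result is the int -1, not the unsigned char 255.
int promotedDiff()
{
    unsigned char x = 1, y = 2;
    return x - y;                                        // int arithmetic: -1
}

unsigned char absDiff()
{
    unsigned char x = 1, y = 2;
    return static_cast<unsigned char>(std::abs(x - y));  // abs(-1) == 1
}
```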
Solution:10
I just recently found out that over a million instructions are executed in a Hello World! c++ program I wrote. I never would have expected so much for anything as simple as a single cout statement
Solution:11
That goto's are harmful.
Now we us continue or break.
Solution:12
That OO is not necessarily better than non-OO.
I assumed that OO was always better. Then I discovered other techniques, such as functional programming, and had the realization that OO is not always better.
Solution:13
That C++ was the coolest language out there!
Solution:14
don't use advanced implementation-specific features because you might want to switch implementations "sometime". i've done this time and again, and almost invariably the switch never happened.
Solution:15
I am a young fledgling developer hoping to do it professionally because it's what I love and this is a list of opinions i once held that I have learned through my brief experience are wrong
The horrible mess you end up with when you don't separate user interface from logic at all is acceptable and is how everyone writes software
There's no such thing as too much complexity, or abstraction
One Class One Responsibility - I never really had this concept; it's been very formative for me
Testing is something I don't need to do when I'm coding in my bedroom
I don't need source control because it's overkill for the projects I do
Developers do everything, we're supposed to know how to design icons and make awesome looking layouts
Dispose doesn't always need a finaliser
An exception should be thrown whenever any type of error occurs
Exceptions are for error cases, and a lot of the time it's OK to just return a value indicating failure. I've come to understand this recently, I've been saying it and still throwing exceptions for much longer
I can write an application that has no bugs at all
Solution:16
That we as software engineers can understand what the user really wants.
Solution:17
That more comments are better. I've always tried to make my code as readable as possible--mainly because I'm almost certainly the guy that's going to fix the bug that I let slip by. So in years past, I used to have paragraphs after paragraphs of comments.
Eventually it dawned on me that there's a point where more comments--no matter how neatly structured--add no value and actually becomes a hassle to maintain. These days, I take the table-of-contents + footnotes approach and everyone's happier for it.
Solution:18
That the only localization/internationalization issue is translating messages.
I used to think that all other languages (and I had no concept of locales) were like English in all ways except for words and grammar. To localize/internationalize a piece of software, therefore, you only needed to have a translator translate the strings that are shown to the user. Then I began realizing:
- Some languages are written right-to-left.
- Some scripts use contextual shaping.
- There is large variation in the way that dates, times, numbers, etc. are formatted.
- Program icons and graphics can be meaningless or offensive to some groups of people.
- Some languages have more than one "plural form".
- ...
Even today I sometimes read about internationalization issues that surprise me.
Solution:19
I used to think that Internet Explorer 6 box model is an evil dumb idea MS came up with only to break compatibility with other browsers.
Lots of CSSing convinced me that it's much more logical, and can make the page design maintenance (changing blocks paddings/borders/margins) much easier.
Think about the physical world: changing the paddings or borders width of an A4 page doesn't change the page width, only reduce the space for the content.
Solution:20
- Programming Language == Compiler/Interpreter
- Programming Language == IDE
- Programming Language == Standard Library
Solution:21
I used to think I was a pretty good programmer. Held that position for 2 years.
When you work in a vacuum, it's easy to fill the room :-D
Solution:22
That the now popular $ sign was illegal as part of a java/javascript identifier.
Solution:23
Thinking that I know everything about a certain language / topic in programming. Just not possible.
Solution:24
That virtual-machine architectures like Java and .NET were essentially worthless for anything except toy projects because of performance issues.
(Well, to be fair, maybe that WAS true at some point.)
Solution:25
It's important to subscribe to many RSS feeds, read many blogs and participate in open source projects.
I realized that, what is really important is that I spend more time doing coding. I have had the habit of reading and following many blogs, and while they are a rich source of information its really impossible to assimilate everything. It's very important to have balanced reading, and put more emphasis on practice.
Regarding open source, I'm afraid I won't be popular. I have tried participating in open source, mostly in .NET. I'm appalled to see that many open source projects don't even follow a proper architecture. I saw one system in .NET not using a layered architecture; database connection code was all over the place, including code-behind, and I gave up.
Solution:26
That managers know what they talk about.
Solution:27
That my schooling would prepare me for a job in the field.
Solution:28
That learning the language is just learning the syntax, and the most common parts of the standard library.
Solution:29
That bytecode-interpreted languages (like C# or F#) are slower than those reset-button hogs that compile directly to machine code.
Well, when I started holding that belief (in the 80s), it was true. However, even in C# times I sometimes wondered if putting that inner loop into a .cpp file would make my app go faster.
Luckily, no.
Sadly, I just realized that a few years ago.
Solution:30
"It's going to work this time"
Flight Simulator wind speed. This set of problems relates to a computer simulation of wind speed for a flight simulator. Assume that the wind speed for a particular region can be modeled using an average value plus a range of gust values added to the average. For example, the wind speed might be 10 miles per hour, with added noise (which represents gusts) that ranges from -2 miles per hour to 22 miles per hour. Use the function rand_float developed in this chapter, which is:
double rand_float (double a, double b)
{
return ((double) rand() / RAND_MAX ) * (b - a) + a;
}
Write a program to generate a data file name wind.dat that contains one hour of simulated wind speed. Each line of the data file should contain the time in seconds and the corresponding wind speed. The time should start with 0 seconds. The increment in time should be 10 seconds and the final line of the data file should correspond to 3600 seconds. The user should be prompted to enter the average wind speed and the range of values of the gusts.
I'm just working in a menu selection where the user will be able to
1 = Create a wind simulation
2 = other
3 = other
4 = other
5 = Quit
I have the file created called wind.dat.
The time from 0 seconds to 3600 seconds with an increment time of 10 seconds is done.
Could you guys help me out by letting me know what I should do next?
I'm not asking for code, just a hand example.
Thanks
This is what i have so far
Code:
#include <iostream>
#include <cstdlib>
#include <fstream>
#include <iomanip>
#include <cmath>

using namespace std;

void Add();
void Delete();
void Edit();
void Find();
double rand_float(double a, double b);

int main()
{
    char selection;
    do
    {
        system("cls");
        cout << "MENU\n\n"
             << "1. Create wind speed simulation \n"
             << "2. Create wind speed with storms \n"
             << "3. Create wind speed with microbursts \n"
             << "4. View wind simulation \n"
             << "5. Quit\n\n"
             << "SELECTION: ";
        //get menu selection
        selection = cin.get();
        //process selection
        switch( selection )
        {
            case '\n': break;
            case '1': Add(); break;
            case '2': Delete(); break;
            case '3': Edit(); break;
            case '4': Find(); break;
            case '5': break;
            default: cout << '\a';
        }
    } while( selection != '5' );
    return 0;
} //end main()

//function definitions
void Add()
{
    system("cls");
    const double HIGH_GUSTS(22);
    const double LOW_GUSTS(-2);
    int i = 0;
    int Max_Seconds = 3600;
    double Avg_Wind, num1;
    ofstream wind;
    wind.open("wind.dat");
    cout << " |---- Dilmer Valecillos [CS1600-001] ----|" << endl;
    cout << "\n\n Enter the average of wind speed (mhp) : ";
    cin >> Avg_Wind;
    wind.setf(ios::fixed | ios::showpoint); // Set formats in the file
    wind.precision(2);
    cout.setf(ios::fixed | ios::showpoint); // Set formats to display on the screen
    cout.precision(2);
    for (i = 0; i <= Max_Seconds; i++)
    {
        //Avg_Wind = rand_float (22 ,-2);
        if ( ( i % 10 ) == 0 )
        {
            wind << setw(4) << i << " " << Avg_Wind << endl; // Write data to wind.dat
            cout << setw(4) << i << " " << Avg_Wind << endl; // Output to the screen
        }
    }
    wind.close();
    system("pause");
}

void Delete()
{
    system("cls");
    system("pause");
}

void Edit()
{
    system("cls");
    system("pause");
}

void Find()
{
    system("cls");
    system("pause");
}

double rand_float (double a, double b)
{
    return ((double) rand() / RAND_MAX ) * (b - a) + a;
}
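Since the poster asked for a hand example rather than finished code, here is only a minimal sketch of the missing simulation step, kept separate from the menu logic. The `write_wind` helper name and passing the average as a parameter are our choices; the gust range of -2 to 22 mph and the 0..3600 s range with 10 s steps come from the assignment:

```cpp
#include <cassert>
#include <cstdlib>
#include <fstream>
#include <iomanip>

// The chapter's helper: uniform random double in [a, b].
double rand_float(double a, double b)
{
    return ((double)std::rand() / RAND_MAX) * (b - a) + a;
}

// Hypothetical helper for Add(): writes one hour of simulated wind speed in
// 10-second steps and returns the number of lines written (361 for 0..3600).
int write_wind(const char* path, double avgWind)
{
    std::ofstream wind(path);
    wind << std::fixed << std::setprecision(2);
    int lines = 0;
    for (int t = 0; t <= 3600; t += 10, ++lines)
        wind << std::setw(4) << t << ' '
             << avgWind + rand_float(-2.0, 22.0) << '\n';
    return lines;
}
```

Inside Add(), the per-step value would then be the user's average plus a fresh rand_float(LOW_GUSTS, HIGH_GUSTS) sample, instead of writing the constant average every line.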
A Yo Generator which sets up a new, on the Slim Framework based, project.
Founder: Katja Lutz. Maintainer: wanted. Developers: wanted.
This project needs several overhauls and has a couple of bugs with the current version of Node.js. If anyone has an interest in developing this project further, please contact me! I'm out of time.
This yeoman generator sets up a PHP project including the Slim Framework, all configs needed to begin developing, and several grunt tasks. Yes, it's a PHP project generator, and it automatically downloads Composer and several PHP libs.
(based on the yeoman generator-generator)
For the client part, I've included Backbone and Marionette. CoffeeScript and Less sources live in app/src and are automatically built and minified into the public folder.
npm install -g yo
npm install -g generator-slim
yo slim
You get the following directory structure:
```
.
├── .bowerrc
├── .editorconfig
├── .gitignore
├── app
│   ├── app.php
│   ├── autoload.php
│   ├── bootstrap.php
│   ├── config
│   │   ├── config.development.php
│   │   ├── config.env.php
│   │   └── config.production.php
│   ├── helpers
│   │   ├── Authhash.php
│   │   └── Search.php
│   ├── models
│   ├── routes
│   │   └── index.php
│   ├── src
│   │   ├── coffee
│   │   │   ├── app.coffee
│   │   │   └── views
│   │   │       └── viewTest.coffee
│   │   ├── hbs
│   │   │   ├── config.env.hbs
│   │   │   └── head.hbs
│   │   └── less
│   │       └── styles.less
│   └── views
│       ├── errors
│       │   └── 404.twig
│       ├── index.twig
│       └── layouts
│           ├── LICENSE.md
│           ├── breadcrumb.twig
│           ├── head.html
│           ├── master.twig
│           └── one_column.twig
├── bower.json
├── bower_modules
├── cache
├── composer.json
├── composer.lock
├── composer.phar
├── composer_modules
│   ├── autoload.php
│   └── composer
│       ├── ClassLoader.php
│       ├── autoload_classmap.php
│       ├── autoload_namespaces.php
│       ├── autoload_real.php
│       └── installed.json
├── gruntfile.js
├── logs
├── node_modules
└── public
    ├── .htaccess
    ├── css
    ├── dev
    ├── img
    ├── index.php
    └── js
```
`grunt server` and you're ready for development
yo slim:route
The generator comes with a included server for php. For faster development, I added a watcher with build at change and livereload.
`grunt server` (the server runs in development mode)
`grunt server:production` to launch the production server
If you don't wanna use the included grunt server, you can use any Apache/PHP webserver. To get your webserver to work with the project, you need to change the following things:
`grunt`
`grunt watch`
Thanks to the included server you often really doesnt need the following commands. But if you use an own Apache, PHP Server you need these commands to switch between the environments!
All script and CSS files are served at full length, no uglifying. PHP view files don't get cached. Livereload is active. SQLite is the active database.
`grunt` or
`grunt development` to switch to development
Script and CSS files are served minified. No livereload! MySQL is the active database.
`grunt production` to switch to production
`grunt test` to start the Jasmine tests
The generated distribution includes just the needed files and is as small as possible.
`grunt dist` to generate the dist
`grunt dist` changes automatically to production; run
`grunt` if you wanna change back to development!
grunt fetch
0.11.1
0.11.0
0.10.6
0.10.5
`grunt fetch` creating a lib directory is fixed
0.10.4
0.10.1
0.10.0
0.9.9
0.9.8
Bug where, after `grunt server`, the browser started before the server was ready, is fixed
Bug where `grunt server` created a dist folder is fixed
0.9.x
0.9.0
No it isn't. Now let's get back to work. (Score:1, Interesting)
Codeplex was created to undermine the open source and more particularly the free software movement. Well, they launched their Tet offensive and it was massively funded, but it failed.
They'll have to try something else.
Re: (Score:3, Insightful)
Yep, "in a string of attempts to play nicely with open source" sounds like "in a string of attempts to nicely play open source" but it's not really the same thing.
Re: (Score:2)
what part of embrace extend extinguish does "attempts to play nicely with open source" fit in again?
oh yes, clearly, we must be ignorant and have forgotten? Surely the leopard has changed their spots, huh?
Has anyone seen MS ever do something pro open source/pro free software? The answer is no, and it never will happen either. All they do is try to cover their tail when they screw up, as is common.
Re: (Score:2)
Has anyone seen MS ever do something pro open source/pro free software?
Off the top of my head: [asp.net]
Re: (Score:2)
Umm, isn't this to benefit .net, specifically ASP, and involves creep via Mono?
how is that a gain for open source?
Re: (Score:2)
Re: (Score:2)
How would this benefit .net? .net is (mostly) a serverside technology, and it already knows all about cultures.
Re: (Score:2)
Pro free tools: [microsoft.com]
Re: (Score:2, Insightful)
Re: (Score:1, Informative)
I had actually forgotten that codeplex even existed until seeing it mentioned here on Slashdot today. Basically, codeplex is a home for Windows zealots who kind of like the idea of open source and want to dabble in it but refuse to leave the comforting confines of their OS of choice. So now, they have somewhere to hang out. It serves MS's purposes as it gives them something to hopefully take a little of the wind out of the sails of cross-platform real open source development. Personally, I think it a bit
Re: (Score:2)
A lot of Microsoft's open source projects, including projects like MEF, build on Mono and were subtly patched but not announced to be fixed as such. So they aren't "announcing to the world" that it works on Mono, but their developers are making sure it's compatible.
Besides, what does it matter which platform your software layer resides on? If you think it's absurd to build OSS on proprietary software, then I suppose you only write software and packages for the most free distro, depending on your definition
Re: (Score:2)
Of which, your only valid example is VB6, which had a syntax that they broke to allow it to interface with .NET.
Did you ever write anything in Cobol? Any other "dead" language? That's natural. The problem companies have is that they think that once their software is written, their responsibility to do anything with it is over. But owning software is sort of like owning a car, eventually compared to all the other cars, it's going to look rusty and antiquated, eventually the shops will run out of parts for it
Re: (Score:1)
I had actually forgotten that SourceForge even existed until seeing it mentioned here on Slashdot today. Basically, SourceForge is a home for Open Source/*nix/FS zealots who kind of like the idea of open source and want to dabble in it but refuse to leave the comforting confines of their OS of choice. So now, they have somewhere to hang out. It serves the zealot's purposes as it gives them something to hopefully take a little of the wind out of the sails of the Windows stack of software. Personally, I think
Re: (Score:2)
Err? I didn't recall seeing anything even close to what you describe.
As far as I can tell, they're just trying to foster open source development on Windows because it's a developer issue. Some developers prefer and only engage in open source development, causing them to gravitate to Linux, BSD, etc. Microsoft hates losing developers, because users, slowly but surely, follow them and where the good applications are.
It's not a grand "Tet offensive". And it was anything but massively funded.
If MS was really serious... (Score:4, Insightful)
They could endow a trust fund for SourceForget.net. And if they had ideas for a better forge, they could make code submissions to SourceForge.net.
Re: (Score:2)
Why? Why can there only be one open source code repository?
Further, ultimately, as a developer, do you even care what repository the code comes from? I just google what I need, and wherever I land, I land.
Re:If MS was really serious... (Score:5, Insightful)
I'm not saying there should only be one public forge. I'm just saying that would be one way for MS to get away from people's distrust in anything they back. Because I think most people would trust SF.net to not be corrupted the kind of thing I proposed.
No. But as a project contributor, maybe. If this was the MS of the 1990's, I wouldn't trust a forge they owned one tiny bit - there would almost certainly be a trap hidden in the legalese. Nowadays, I'm not sure.
But here's another way to look at it: aside from branding, what might MS's motives be for setting this thing up? Based on their past actions, it's pretty clear that they're not angels.
Re: (Score:1)
Re: (Score:2, Informative)
Sourceforge's engine is closed source.
I asked.
You can't make "code submissions" to it.
Re: (Score:3, Informative)
and closed it again after some time.
I think an open source community driven competitor started using that code and then got killed or something, can't remember for sure.
Re: (Score:3, Insightful)
The thing with Microsoft is that nothing you create based on their 'technologies' can truly be open. The Shared Source license is likewise not a very 'open' or 'free' (both in speech and in beer) license. The problem with Microsoft is that they have used their financial and patent weight against open source in the past and will probably continue doing so. If Microsoft really want, they can revoke all their permissions and promises at any point in time and all projects based on the Shared Source License woul
Re: (Score:2)
The specs are published here:
SMB: [microsoft.com]
SMB2: [microsoft.com]
You say "We have reverse engineered it for a while"... Who's "we"? Do you speak for the Samba team? The Samba team not only has access to the above specs, but t
Re: (Score:2)
I wish I could simply forget SourceForge.net
Re: (Score:2)
I wish I could simply forget SourceForge.net
Why would that be?
Re: (Score:2)
SourceForget.net
What a splendid idea. A source revision control system hooked up straight to /dev/null, with a webinterface. FUND IT!
Re: (Score:2)
They did this ~ 10 years ago. The result was windows ME.
Re: (Score:3, Insightful)
Re: (Score:3, Informative)
So could Google - but no one seems to be bitching about Google Code.
Google [google.com] has been a [android.com] great [chromium.org] friend [google.com] of open source. They have earned and continue to earn a great deal of trust and respect from the open source and free software community.
Compare [theregister.co.uk] to the current CEO of Microsoft and I think it will be clearer why Microsoft needs to do more.
Re: (Score:2)
I dunno, just about every non-Google project I've seen initially on Google Code has moved off of it to GitHub or someplace else in a fairly short time, usually after some complaints about it.
Though the complaints have been about Google reinventing the wheel and not doing it particularly well from the perspective of the projects involved, rather than about any presumed nefarious motives, most likely because Google, unlike MS, doesn't have a
Re: (Score:2)
SourceForget?
Is that a typo or a commentary on the quality of SourceForge?
Re: (Score:2)
And if they had ideas for a better forge, they could make code submissions to SourceForge.net.
CodePlex uses TFS for source control. It makes sense for projects that are already centered around MS tech in other ways, and especially if developers use VS, but I somehow doubt that SourceForge would appreciate that.
By the way, it's interesting how the article is about CodePlex Foundation, while most comments are about CodePlex - which is a different thing (yeah, I know, the naming is confusing as hell).
Let me get this straight (Score:4, Insightful)
An organization that wants to make open source products based off Microsoft will only get more Open Source Cred if they separate from Microsoft?
It seems like Microsoft is stuck in a position to make no concession. You don't like Microsoft. You'd like it a bit more if it were friendlier to Open Source. Microsoft starts an Open Source Initiative. It doesn't quite live up to Expectations. Now, the only way this new initiative can redeem itself is to become independent of Microsoft.
Wouldn't then Microsoft NOT have an open source initiative, and put them back at square one? Does becoming independent of Microsoft allow them to better work on Microsoft code?
Re:Let me get this straight (Score:5, Insightful)
Microsoft eventually wants .NET to be competitive with the Java platform.
They know that Java has a massive, massive advantage in terms of OSS 3rd party library availability. As mentioned in the article, this comes from high profile Java OSS projects like Apache's Jakarta, Eclipse and others.
So Codeplex is their attempt at getting a similar ball rolling for
.NET. We'll see if it succeeds, I doubt it will catch on in a similar fashion though, .NET is doomed to niche Microsoft operating systems.
Re: (Score:2)
Microsoft eventually wants
.NET to be competitive with the Java platform.
I'm curious by what standard you think it isn't. Certainly each has its advantages and disadvantages, and there's a lot of work for both out there.
But that being said, as someone who's spent years developing professionally with each, I'd say the list in your
.sig is largely slanted/inaccurate/dubious, so, maybe you're just a guy who really likes Java.
Re: (Score:1, Troll)
I'm biased as fuck.
But I don't think that takes away from the fact that
.NET adoption is 1/10 that of Java or less nor from the fact that .NET OSS adoption is probably less than 1/10th the size of Java's.
Nor the fact that it's in Microsoft's interest to do so, nor the fact that this is probably an attempt to change that.
Nors for everybody!
Re: (Score:2)
I have had good exposure to two fairly large UK web design/development and bespoke software markets in the UK (South West/West/Bristol and South East/East/London/Anglia) and I have to say its all either PHP, Python or Perl, or its
I think the statistics being used by people like yourselve
Re: (Score:2)
... you know that there's a lot more to
.NET than web development, just as there's a lot more to Java than web development, right?
I only have my own anecdotal experience to go on, but damn near all of my profressional Java projects have involved web development, whereas less than half of my
.NET projects have.
Re: (Score:2)
Re: (Score:2)
I'm biased as fuck.
Fair enough. I respect you for not having any illusions about that.
I don't know that I'd say
.NET adoption is 1/10 of Java's -- in some markets (e.g. phones), definitely, and in the open source world, probably, but in general that doesn't jive with what I've seen in the market. But then, the work I mostly do is of the "writing custom apps (sometimes web, sometimes console, sometimes services, etc.) for business" and I don't have great knowledge of adoption outside of that space.
If nothi
Re: (Score:1)
101 Reasons why Java is better than
.NET - [helpdesk-software.ws]
This article is completely outdated. A signature like this makes it hard to take you seriously.
Re: (Score:1, Flamebait)
It's still quite accurate.
Re: (Score:1)
More importantly, what do you have to say about this: [itjobswatch.co.uk]
Re: (Score:2)
No it is not. I've spotted at least 5 of those items that are outright wrong.
Java is generally better than c#, but you don't need to make shit up to show that.
Re: (Score:2)
Good job fuck mook, there are over a hundred total.
Re:Let me get this straight (Score:4, Insightful)
Show me the
.Net for Solaris, Linux or Mac.
Re: (Score:1)
Not
.NET, but close enough and open source for Solaris, Linux and Mac downloads is available here: [go-mono.com]
Re: (Score:3, Insightful)
I assume you must be one of the codeplex people.
Good luck and GG!
;)
Re: (Score:2)
More people code in
.NET than even use Linux at all.
First and foremost, he never mentions Linux. He mentions Open Source, but surprisingly, open source is not limited to Linux. *GASP* I know.
And if you are going to compare, at least pick something comparable. Like
.NET to Java like he does. I've met a lot more people who know Java than .NET - Though on top of that, I've seen even more C#. But that's just me.
Re: (Score:2)
I've met a lot more people who know Java than
.NET - Though on top of that, I've seen even more C#.
I'm confused by this. You do know that C# is
.NET, right?
Re: (Score:1)
Not really. C# is a language like any other - it's just the best known implementation is for
.NET. If you wanted to, you could write a C# compiler that uses precisely zero .NET, and it'd still be a compiler for C#.
Plus C# is used for Mono and GTK#, neither of which are
.NET. Mono implements the same stuff true, but it's not .NET.
Re: (Score:2)
To me, at this point what you're saying is technically true but in any practical sense... not really.
Kind of like saying that people don't need to breathe to live -- technically, they could get their blood oxygenated any number of ways.
Probably, 99.9%+ of people writing C# code today are using
.NET. For any practical purpose it's not unreasonable to assume that if someone knows more C# devs than Java devs, they also know more .NET devs than Java devs.
Re: (Score:1)
Re: (Score:2)
Sure, and you could write an Erlang compiler for the JVM. But, in the real world today, usable C# compilers exist only for the
.NET (and Mono, which is a .NET clone), and Erlang only for the the BEAM virtual machine (well, older versions exist for a previous, equally-specific, VM.)
Re: (Score:2)
The OP mentions "niche Microsoft operating systems", which places him/her firmly into the linux loony camp. There's nothing wrong with Linux, but believing that the company that still has 60% of the server market and has an even higher percentage of the desktop is "niche" either means the he/she has never left the server room of a bank, or is a loony.
I've coded in
.NET and I've coded in JEE, there are pluses and minuses to both.
That said, the biggest benefit that Java has isn't so much the open source libra
Re: (Score:2)
60% of the server market, are you high or is this a study from ages ago?
Re: (Score:1, Interesting)
Linux = Opensource, Opensource != Linux
Now that we got that out of the way... He means that
.Net is just not very suitable for open source en cross platform development. In Java, I can use swing, hibernate and other stuff and just assume it will work on other platforms. Usually this doesn't cause any issues if your application is coded decently. However in C# en .NET a lot of useful and sometimes essential functionality is only available in Windows.* namespaces and libraries. These are not available in othe
Re:Let me get this straight (Score:5, Informative)
Microsoft's unfriendliness to Open Source has very little to do with them releasing any, or hosting code repositories.
The unfriendliness is expressed in terms of vague threats using software patents, attempts to derail implementation in various places, suspicious licensing deals like with Novell and so on.
All that has to go for me to start changing my mind. Until that happens, I'm not touching CodePlex with a 10 foot pole, and consider it completely irrelevant at best, and some sort of trap at worst.
Re: (Score:1)
Re: (Score:3, Insightful)
CodePlex may or may not be bad, but Microsoft's history of attacks on open source over the last fifteen years means I'd never use anything they offered. Sorry, maybe that's biased, but I tend to think of it as being cautious and rational.
Re: (Score:2)
Profile of a OSS Zealot:
Thinks M$ is bad because M$ is big huh company lots of money, eats little children;
Linux rocks, every OS steals code from linux, you to xBSD, that network stack is ours;
GPL is the one and only opensource license, everything else must be compatible;
Anything thats not copyleft is not free;
Freedom is a word created by the FSF, and no one has the right to redefine it;
Profile of an OSS Realist:
Think Microsoft has a track record of looking out for its stockholders and has done so by abusively using its position as a monopoly.
Linux is a good OS, which I actually prefer over Windows. Every OS wants to borrow code and concepts from others. You can "borrow" concepts from Linux and not be sued. The same does not hold true for MS or MacOS X.
GPL is a very useful open source license. If you want to come to the biggest open source party out there, you need to be able to dance with
It's A Trap (Score:5, Insightful)
Re: (Score:2)
Re: (Score:2, Informative)
Re: (Score:2)
wont float. (Score:2)
Re: (Score:3, Informative)
FYI, apostraphes aren't just for quoting words for no apparent reason, they're also used in contractions.
Re: (Score:2)
*You* "can" 'emphasise' a $comment$ any ^way^ you like
......
But speaking in "airquotes" can be annoying
....
Re: (Score:2)
Firefox (Score:4, Insightful)
Re: (Score:2)
I know I will probably get flamed for this, but as someone who just developed some
.NET projects (it was the right tool for the job), I did so using Firefox almost exclusively for testing. Note that every component used was a straight .NET component, no third party anything. One day I fired up IE 8 just to see what it looked like. There were things broke all over IE that "just worked" in Firefox (w/ the .net plugin).
On top of all the broken things in IE...the most annoying thing about IE is that links are t
Like github, but worse (Score:2, Interesting)
After a cursory look it seems like an foundation more interested on marketing and policies than in code. I actually had to look hard in order to find the project list.
Am I right to assume that there are only 6 projects?
Seriously, six?
Meh. Call me when they have 600.
(Goes back to github).
Re: (Score:3, Informative).
Re: (Score:3, Informative)
From the article:
"... Not to be confused with Codeplex.com"
I think we both have been looking to different sites. Sure, codeplex.com has lots of projects. But this article is not about it.
Also, FYI: I happen to have suffered eye surgery. As a result, my vision is better than average.
OT: Why are my moderations not registering? (Score:1, Offtopic)
This has been going on for a couple days, ever since I got this batch of mod points. Can someone explain?
Re: (Score:1)
Javascript deactivated? Overzealous firewall?
Re: (Score:2)
Mozilla/5.0 (X11; U; Linux x86_64; en-US; rv:1.9.2.3) Gecko/20100402 Firefox/3.6.3
Re: (Score:2)
Good Question...here is my answer (Score:1)
Not exactly any license. (Score:2, Insightful)
Codeplex is utterly GPL unfriendly, i would say GPL hostile. Its also nothing more than a way to steer open source towards being something you build with Microsofts closed technologies. Its not even stealthy in that regard.
I say fuck Microsoft until they prove they can cooperate. Why give them free ammo for absolutely nothing?
Re: (Score:2, Informative)
From [codeplex.org] (emphasis mine):
The Foundation has no pre-suppositions about particular projects, platforms, or open source licenses.
Doesn't sound hostile to the GPL to me.
Re: (Score:2)
CodePlex () hosts over 4500 projects [codeplex.com] licensed under GPLv2 or LGPL (the majority of which are under GPL). Ironically, one of those projects is a Linux distro [codeplex.com].
CodePlex Foundation - a different thing () - doesn't mention GPL at all [google.com] on the website - which, admittedly, raises a brow for an OSS-centric organization - but I still don't see how it makes it "GPL hostile". It looks more like an awkward silence to me.
Re: (Score:1)
Re: (Score:2)
It does mention BSD several times in project listings (i.e. there are projects released under it there), but that's it.
By the way, since I posted the comment, the website does mention GPL now, in a new post to the CodePlex Foundation blog [codeplex.org]:
It can't work (Score:3, Interesting)
Why I don't like MS Hosting FOSS Projects (Score:3, Insightful)
Why I don't like MS Hosting FOSS Projects
... a few reasons.
1) Microsoft has always looked towards the bottom line first and community second.
2) Microsoft doesn't really want any competition in platforms, so anything written that runs on many different platforms will "never behave as well" (performance, threading, resources, etc) as a 100% native application.
3) When Microsoft does attempt to get onboard with a standard app/tool/protocol, they always extend it in a proprietary way. Sometimes they make it better than it was, but since nobody else is allowed to also get those extensions, it doesn't do any good for the original community. Just look at LDAP/Active Directory.
4) Microsoft has had 30+ years to select, port and deliver a good cross platform scripting language, but they have not done so. I would love to have a native-from-Microsoft pre-installed version of Perl on every MS-Windows platform. Still they release wsh, cmd, bat and other similar crap. Where's the MS-Python or MS-Perl or MS-Php? Oh, because those are true FOSS projects, MS can't bastardize them. It doesn't matter how much more productive scripting would be. We know other commercial vendors that include these tools with the OS. Why won't Microsoft?
If you want a new idea to flourish, you need these things:
- small group of _believers_ that work on it for passion, not money
- complete openness in the results - source code in this case
- competition - another real player to battle against who also has complete openness in their code. It is NOT cheating to look at the competition's work.
Examples include the robot soccer team competition where at the end of every competition, all software for every team is shared so the level of play the following year will be elevated for all teams. Basically, the best software for last year is the starting point for all teams in the next competition.
Just a few thoughts.
NDA? (Score:3, Interesting)
I remember back when the Shared Source Initiative was announced, I looked into in, and found that actually seeing any of the source code required signing an NDA (Non-Disclosure Agreement). I closed those windows and forgot about it.
So are there NDAs required by any of the various CodePlex things? Or are there other equivalent "agreements" that have other euphemistic names? That would tell us a lot about their actual intentions.
I've written a lot of software that's secret, proprietary, whatever. The companies that hired me paid me pretty well for the software. But if I'm to get involved in something that I think is going to be shared publicly among a crowd of developers, and then discover that it's actually owned and controlled by the web site's owners, I'm going to feel rather double-crossed. I'd rather know beforehand, so I can avoid wasting my time just to donate code to such organizations.
Another variant of this problem existed on AT&T's Sys/V. I did some development in which some of the machines that I tested the code on ran Sys/V. I found that the binaries always contained an AT&T copyright notice. This was obviously because the binaries linked in the AT&T libc and other libraries. So I refused to distribute binaries for Sys/V, on the grounds that doing so might legally constitute signing my copyright to AT&T. I know of a number of companies that abandoned Sys/V after I pointed this out to them (and their lawyers agreed).
There a lot of tricky ways to lose control of your code to big corporations, and Microsoft has a bit of a rep for tricks like this. So it'd be nice to know up front whether a new repository holds such threats.
Re: (Score:3, Informative)
So are there NDAs required by any of the various CodePlex things? Or are there other equivalent "agreements" that have other euphemistic names? That would tell us a lot about their actual intentions.
I wouldn't be able to say anything about CodePlex Foundation, but then I don't know what you would do there in the first place.
As for CodePlex - no, you don't need any NDAs. It's really just your typical project hosting website, except that it's targeted at the audience that uses MS development technologies (though doesn't exclude other stuff [codeplex.com]).
Re:Yeah. Now we see the truth. (Score:5, Insightful)
In other words MS fanboys are ignorant of MS's history of backstabbing any competitor including one they have partnered with. Actually, especially the ones they have partnered with. CodePlex Foundation should be ignored by the open source community until MS has absolutely no possible influence within the organization.
Re:Yeah. Now we see the truth. (Score:4, Insightful)
Actually it doesn't really matter a bit what MSFT has done in the past, as they like any other company has to obey the license. If all the foundation has is OSI approved licenses, like Apache, BSD, Mozilla, etc then it shouldn't matter to you, I, or anyone else except zealots who pays the bills, as they have to obey the license. Sure in the future they could decide to take any project they own and go closed source with it, but so can the writer/owner of ANY software, and they can't close the previous version, therefor you can always fork.
In the end these projects just show that like Apple MSFT is beginning to see how they can leverage FOSS in certain situations to help themselves as well as anyone else. Nobody expects Apple to give up their proprietary bits, why should MSFT? In the end they have to obey the license or risk being sued (and the resulting bad PR) no different than any other corp.
Re: (Score:3, Insightful)
Still all is based on Microsoft Technologies. So if you design and "Open" killer application in VB dotNet it is not a threat. VB dotNet only runs on Windows. To properly implement it in Mono, you need the odd bits that Microsoft owns the patents on.
The idea is that you develop cool projects that the community can contribute to, but only the coolest of the cool and the best of the best will be able to run on Windows. That's what they call open source.
I would call it a failure. How long did it take source for
Re: (Score:2)
I like how you specifically chose the CLR language that doesn't work on Mono, and then said implied it's part of Microsoft's grand plan.
Hint: The vast majority of code on Codeplex, the code sharing site, is in C#. And Codeplex Foundation is an open source outreach program that will do work behind the scenes like invest in projects, form partnerships, whatever, but not write code.
Re: (Score:2)
Ok, call me paranoid. I just picked one of the major languages on the CLR. The same holds true of the other. In real life VB and C# run on the CLR. And not every facet of C#, VB or the CLR is free enough that I can be sure anything I write in it to be cross platform today does not violate some MS patent which MS will at some point later choose to enforce.
It is there right to enforce those patents. It is my right to choose a language and platform that will not land me in patent enforcement hell someday.
It's
Re: (Score:2)
There are a lot of great libraries at CodePlex, which of course you would be unlikely to hear about in "success stories". Of SourceForge projects, I can probably think of 10 off the top of my head, and maybe, with some serious thought, come up with a list of 25 SourceForge projects that I've had contact with and are still active.
I also think the SourceForge list of "active" projects is misleading and inflated.
Re: (Score:2, Insightful)
Again, look at the history of MS's dealing with their partners with which they have had contracts with. How many times have they been in court and lost. Of course you need deep pockets to take MS to court even if you are right. MS is no friend to open-source and if they can screw a software developer they will, based on past history. They are not happy with a slice of the pie if they can take the whole pie. They still have not come close to changing their spots . . .
They still have leverage (Score:5, Insightful)
The point of codeflex is to get people to develop open source software that runs on Microsoft's Platforms - desktop applications using WPF.NET, web applications using ASP.NET, windows mobile 7 applications using Silverlight, rich web environments using Silverlight. For desktop/phone applications this make sense - free high-quality applications improve the appeal of the operating system. For web applications, the only reason they want this is to increase market share of their proprietary technology. In both cases they still control the platform.
Developers whose sole intention is to write for Microsoft's platforms alone, probably shouldn't have any problems, because MS would be shooting themselves by hindering them. However for developers that write applications in
.NET/Silverlight thinking that the existence of Mono/Moonlight means that it is a great cross-platform tool, could easily be backstabbed by Microsoft if they ever change their stance on patents.
Re:Yeah. Now we see the truth. (Score:4, Insightful)
then it shouldn't matter to you, I, or anyone else except zealots who pays the bills
Based on MS's historical disdain for open source with the current CEO Steve Ballmer even going so far as to refer to Linux as a cancer [theregister.co.uk], I think it extremely naive and presumptuous to refer to people suspicious of their motives as just zealots implying that their caution is without merit. Contrarily, I think anything other than an attitude of extreme skepticism is foolhardiness approaching absurdity.
Furthermore, any license which by its very nature being a legal document is open to ambiguity and interpretation by a court and can very well be used in unpredictable ways to damage open source and to completely downplay this possibility in general and in the case of MS in particular especially in light of their very direct statements against open source is extremely arrogant and misinformed on your part.
Re: (Score:2)
what MSFT has done in the past
So now breaking contracts as part of a business strategy is no predictor of how they'll behave?
Re: (Score:2)
then MS is to the BP oil leak.
the only interest MS has in open source is to muddy the water.
Re: (Score:2)
Another fine example of Microsoft "Technology Evangelist" dollars at work.
Re: (Score:2, Interesting)
Leading question, rhetorical question, whatever, the fact is that everyone knows what Codeplex really is, so at the end of the day, only Microsoft shills seem particularly interesting in pushing it, or using it. The open source community really has no need for yet another trojan horse from Redmond. | http://news.slashdot.org/story/10/06/23/1351230/is-the-codeplex-foundation-truly-independent-now | crawl-003 | refinedweb | 5,611 | 73.17 |
On this one the top right pin is the GND, it goes to the VSS pin on the LCD module, right?
My recommendation if soldering these directly to the lcd,is to either change out the pot for one with a thumb wheel,or a side screw access or unsolder the pot and move it to the otherside of the board.
another question came up:do I need to insert a resistor between 5V and the back light anode (pin 15)?The I2C and the LCD modules have some smd resistors on them but since I don't have the schematics I can't tell if any of them is there to limit the current to the back light leds. ---:solved, there is a 100ohm resistors connected to the anode pin.
Did you mean your particular LCD has a built in 100 ohm resistor on its module?
pulsarus,If you are out of ideas, and are unable to determine the pin wiring of your i2c chip to the hd44780 interface,attached is a sketch I wrote that will try to figure it all out by guessing.It will locate the i2c chip's address then try several of the most common configurations.I will say that while I've not seen any damage occur using incorrect configurations, it is possiblethat using an incorrect configuration could damage the hardware.It uses the serial port along with the serial monitor in the IDE tocontrol the "guessing".See the comments in the sketch for how set things up and how to use it.Give it a try and let me know if it detects the proper constructor for you.fm,I'm working on some updates for the library that are not quite finished yet.This sketch is one of the updates.--- bill
#include <Wire.h>#include <LiquidCrystal_I2C.h>LiquidCrystal_I2C lcd(0x27, 2, 1, 0, 4, 5, 6, 7, 3, NEGATIVE); // Addr, En, Rw, Rs, d4, d5, d6, d7, backlighpin, polarity void setup(){ lcd.begin(16,2); lcd.backlight(); lcd.setCursor(0, 0); lcd.print("Hello world!"); lcd.setCursor(0, 1); lcd.print("Row number: "); lcd.setCursor(12, 1); lcd.print("2");}void loop(){ }
Please enter a valid email to subscribe
We need to confirm your email address.
To complete the subscription, please click the link in the
Thank you for subscribing!
Arduino
via Egeo 16
Torino, 10131
Italy | http://forum.arduino.cc/index.php?topic=157817.msg1316367 | CC-MAIN-2016-18 | refinedweb | 396 | 72.76 |
SOAP faults are generated by receivers, either an intermediary or the ultimate receiver of a message. The receiver is required to send a SOAP fault back to the sender only when the Request/Response messaging mode is used. In One-Way mode, the receiver should generate a fault and may store it somewhere, but it must not attempt to transmit it to the sender.
SOAP faults are returned to the receiver's immediate sender. For example, if the third node in a message path generates a fault, that fault message is sent to the second node in the message path and nowhere else. In other words, you don't send the fault to the original sender unless it's also the immediate sender. When that sender receives the fault message, it may take some action, such as undoing operations, and may send another fault further upstream to the next sender if there is one.
Most developers see error handling as a pretty dull subject, so it's often ignored or poorly implemented. The tendency to ignore error handling is natural, but it's not wise. As the saying goes, “Stuff happens”: Things can, and often do, go wrong; it's inevitable that errors will occur in the normal course of events. Because errors are fairly common, it's logical that some time should be dedicated to error handling. The SOAP Note recognizes the importance of error handling and dedicates a considerable amount of verbiage to addressing the issue. Even so, SOAP is not strict enough to avoid interoperability problems, so the BP provides a lot more guidance on the generation and processing of SOAP fault messages.
A SOAP message that contains a Fault element in the Body is called a fault message. A fault message is analogous to a Java exception; it's generated when an error occurs. Fault messages are used in Request/Response messaging. Nodes in the message path generate them when processing a request message. When an error occurs, the receiving node sends a fault message back to the sender just upstream, instead of the anticipated reply message. Faults are caused by improper message formatting, version mismatches, trouble processing a header, and application-specific errors.
When a fault message is generated, the Body of the SOAP message must contain only a single Fault element and nothing else. The Fault element itself must contain a faultcode element and a faultstring element, and optionally faultactor and detail elements. Listing 4-18 is an example of a SOAP fault message.
Listing 4-18 A SOAP Fault Message
<?xml version="1.0" encoding="UTF-8"?>
<soap:Envelope
    xmlns:soap="http://schemas.xmlsoap.org/soap/envelope/">
  <soap:Body>
    <soap:Fault>
      <faultcode>soap:Client</faultcode>
      <faultstring>
        The ISBN value contains invalid characters
      </faultstring>
      <faultactor></faultactor>
      <detail/>
    </soap:Fault>
  </soap:Body>
</soap:Envelope>
Note that the Fault element and its children are part of the SOAP namespace, just as the SOAP Envelope and Body elements are.
Did you notice in Listing 4-18 that the children of the Fault element weren't qualified with the soap prefix? The children of the Fault element may be unqualified.BP In other words, they need not be prefixed with the SOAP 1.1 namespace. Note as well that it's forbidden for the Fault element to contain any immediate child elements other than faultcode, faultstring, faultactor, and detail.BP
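To make these structural rules concrete, here is a small illustrative sketch in Python (standard library only, not part of any SOAP toolkit or J2EE API; the `build_fault` helper is a hypothetical name) that assembles a minimal fault message. The Envelope, Body, and Fault elements are namespace-qualified, while the Fault's children are deliberately left unqualified, as permitted above:

```python
import xml.etree.ElementTree as ET

SOAP_NS = "http://schemas.xmlsoap.org/soap/envelope/"
ET.register_namespace("soap", SOAP_NS)

def build_fault(faultcode, faultstring):
    # Envelope, Body, and Fault belong to the SOAP 1.1 namespace.
    envelope = ET.Element("{%s}Envelope" % SOAP_NS)
    body = ET.SubElement(envelope, "{%s}Body" % SOAP_NS)
    fault = ET.SubElement(body, "{%s}Fault" % SOAP_NS)
    # The Fault's children are unqualified: faultcode and faultstring
    # are required; detail (and faultactor) are optional.
    ET.SubElement(fault, "faultcode").text = faultcode
    ET.SubElement(fault, "faultstring").text = faultstring
    ET.SubElement(fault, "detail")
    return ET.tostring(envelope, encoding="unicode")
```

A fault message built this way contains a single Fault element in the Body and nothing else, matching the structure of Listing 4-18.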
4.6.1 The faultcode Element
The faultcode element may use any of four standard SOAP fault codes to identify an error.
SOAP Standard Fault Codes

Client
Server
VersionMismatch
MustUnderstand
Although you're allowed to use arbitrary fault codes, you should use only the four standard codes listed.BP
The faultcode element should contain one of the standard codes listed above, with the appropriate SOAP namespace prefix. Prefixing the code, as in soap:Client, allows for easy versioning of standard fault codes. As SOAP evolves, it's possible that new fault codes will be added. New fault codes can easily be distinguished from legacy fault codes by their namespace prefix. The meaning of a fault code always correlates to both the code (the local name) and the namespace (the one to which the prefix is bound).
The SOAP Note recommends using a dot separator between names to discriminate general standard fault codes from specific application subcodes. This convention is not used in J2EE Web services, which prefers XML namespace-based prefixes for SOAP fault codes. If you use one of the standard SOAP fault codes, the namespace prefix must map to the SOAP 1.1 namespace "http://schemas.xmlsoap.org/soap/envelope/".BP
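The prefix-resolution rule can be checked mechanically. The sketch below (a hypothetical helper, not part of any SOAP or J2EE API) resolves a faultcode's prefix against the message's namespace declarations and reports whether the code is one of the four standard codes in the SOAP 1.1 namespace:

```python
import io
import xml.etree.ElementTree as ET

SOAP_NS = "http://schemas.xmlsoap.org/soap/envelope/"
STANDARD_CODES = {"Client", "Server", "VersionMismatch", "MustUnderstand"}

def classify_faultcode(message):
    # Gather the prefix -> namespace declarations seen while parsing,
    # so the faultcode's prefix can be resolved to a namespace URI.
    prefixes = {}
    for _event, (prefix, uri) in ET.iterparse(io.StringIO(message),
                                              events=("start-ns",)):
        prefixes[prefix] = uri
    root = ET.fromstring(message)
    code = root.find(".//{%s}Fault/faultcode" % SOAP_NS).text.strip()
    prefix, _, local = code.rpartition(":")
    # Standard only if the prefix maps to the SOAP 1.1 namespace
    # AND the local name is one of the four standard codes.
    if prefixes.get(prefix) == SOAP_NS and local in STANDARD_CODES:
        return "standard"
    return "non-standard"
```

A code such as wsse:InvalidSecurityToken, whose prefix resolves to a different namespace, would be classified as non-standard even though the message is otherwise well formed.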
4.6.1.1 The Client Fault
The Client fault code signifies that the node that sent the SOAP message caused the error. Basically, if the receiver cannot process the SOAP message because there is something wrong with the message or its data, it's considered the fault of the client, the sender. The receiving node generates a Client fault if the message is not well formed, or contains invalid data, or lacks information that was expected, like a specific header. For example, in Listing 4-19, the SOAP fault indicates that the sender provided invalid information.
Listing 4-19 An Example of a SOAP Fault with a Client Fault Code
<?xml version="1.0" encoding="UTF-8"?>
<soap:Envelope
    xmlns:soap="http://schemas.xmlsoap.org/soap/envelope/">
  <soap:Body>
    <soap:Fault>
      <faultcode>soap:Client</faultcode>
      <faultstring>The ISBN contains invalid characters</faultstring>
      <detail/>
    </soap:Fault>
  </soap:Body>
</soap:Envelope>
When a node receives a fault message with a Client code, it should not attempt to resend the same message. It should take some action to correct the problem or abort completely.
4.6.1.2 The Server Fault
The Server fault code indicates that the node that received the SOAP message malfunctioned or was otherwise unable to process the SOAP message. This fault is a reflection of an error by the receiving node (either an intermediary or the ultimate receiver) and doesn't point to any problems with the SOAP message itself. In this case the sender can assume the SOAP message to be correct, and can redeliver it after pausing some period of time to give the receiver time to recover.
If, for example, the receiving node is unable to connect to a resource such as a database while processing a SOAP message, it might generate a Server fault. The following is an example of a Server fault, generated when the BookPrice Web service could not access the database to retrieve price information in response to a SOAP message.
<?xml version="1.0" encoding="UTF-8"?>
<soap:Envelope
    xmlns:soap="http://schemas.xmlsoap.org/soap/envelope/">
  <soap:Body>
    <soap:Fault>
      <faultcode>soap:Server</faultcode>
      <faultstring>Database is unavailable.</faultstring>
      <detail/>
    </soap:Fault>
  </soap:Body>
</soap:Envelope>
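Taken together, the Client and Server semantics suggest how a sender might react when a fault comes back. The following Python sketch is a hypothetical sender-side policy based on the descriptions above; SOAP itself does not prescribe it, and the action names are assumptions for illustration:

```python
import xml.etree.ElementTree as ET

SOAP_NS = "http://schemas.xmlsoap.org/soap/envelope/"

def next_action(fault_message):
    # Client -> the request itself is bad; never resend it unchanged.
    # Server -> the receiver malfunctioned; the same message may be
    #           redelivered after pausing for some period of time.
    # anything else -> give up.
    root = ET.fromstring(fault_message)
    code = root.find(".//{%s}Fault/faultcode" % SOAP_NS).text.strip()
    local = code.rpartition(":")[2]
    if local == "Client":
        return "fix-or-abort"
    if local == "Server":
        return "retry-later"
    return "abort"
```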
4.6.1.3 The VersionMismatch Fault
A receiving node generates a VersionMismatch fault when it doesn't recognize the namespace of a SOAP message's Envelope element. For example, a SOAP 1.1 node will generate a fault with a VersionMismatch code if it receives a SOAP 1.2 message, because it finds an unexpected namespace in the Envelope. This scenario is illustrated by the fault message in Listing 4-20.
Listing 4-20 An Example of a VersionMismatch Fault
<?xml version="1.0" encoding="UTF-8"?>
<soap:Envelope
    xmlns:soap="http://schemas.xmlsoap.org/soap/envelope/">
  <soap:Body>
    <soap:Fault>
      <faultcode>soap:VersionMismatch</faultcode>
      <faultstring>Message was not SOAP 1.1-conformant</faultstring>
      <detail/>
    </soap:Fault>
  </soap:Body>
</soap:Envelope>
The VersionMismatch fault applies only to the namespace assigned to the Envelope, Header, Body, and Fault elements. It does not apply to other parts of the SOAP message, like the header blocks, XML document version, or application-specific elements in the Body.
The VersionMismatch fault is also used in the unlikely event that the root element of a message is not Envelope, but something else. Sending a VersionMismatch fault message back to the sender in this case may not be helpful, however: The sender may be designed to handle a different protocol and doesn't understand SOAP faults.
4.6.1.4 The MustUnderstand Fault
When a node receives a SOAP message, it must examine the Header element to determine which header blocks, if any, are targeted at that node. If a header block is targeted at the current node (via the actor attribute) and sets the mustUnderstand attribute equal to "1", then the node is required to know how to process the header block. If the node doesn't recognize the header block, it must generate a fault with the MustUnderstand code. Listing 4-21 shows an example.
Listing 4-21 A MustUnderstand Fault
<?xml version="1.0" encoding="UTF-8"?>
<soap:Envelope xmlns:soap="http://schemas.xmlsoap.org/soap/envelope/">
  <soap:Body>
    <soap:Fault>
      <faultcode>soap:MustUnderstand</faultcode>
      <faultstring>Mandatory header block not understood.</faultstring>
      <detail/>
    </soap:Fault>
  </soap:Body>
</soap:Envelope>
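The processing rule behind this fault — inspect each header block targeted at the current node, and fail if a mandatory one is unrecognized — can be modeled in a few lines. This is a simplified sketch, not tied to any real SOAP toolkit; header blocks are plain dicts with name/actor/mustUnderstand keys:

```python
class MustUnderstandFault(Exception):
    """Raised when a mandatory header block targeted at us is unrecognized."""

def check_headers(header_blocks, my_actor_uris, understood_names):
    """Enforce the mustUnderstand rule for one receiving node.

    header_blocks: list of dicts with 'name', 'actor', 'mustUnderstand' keys.
    my_actor_uris: the actor URI(s) this node answers to.
    understood_names: header block names this node knows how to process.
    """
    for block in header_blocks:
        targeted = block.get("actor") in my_actor_uris
        mandatory = block.get("mustUnderstand") == "1"
        if targeted and mandatory and block["name"] not in understood_names:
            raise MustUnderstandFault(
                "Mandatory header block not understood: %s" % block["name"])
```

Blocks targeted at some other node, or targeted at us without mustUnderstand="1", pass through this check untouched.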
4.6.1.5 Non-standard SOAP Fault Codes
It is also possible to use non-standard SOAP fault codes that are prescribed by other organizations and belong to a separate namespace. For example, Listing 4-22 uses a fault code specified by the WS-Security specification.
Listing 4-22 Using Non-standard Fault Codes
<?xml version="1.0" encoding="UTF-8"?>
<soap:Envelope xmlns:soap="http://schemas.xmlsoap.org/soap/envelope/">
  <soap:Body>
    <soap:Fault>
      <faultcode>wsse:InvalidSecurityToken</faultcode>
      <faultstring>An invalid security token was provided</faultstring>
      <detail/>
    </soap:Fault>
  </soap:Body>
</soap:Envelope>
4.6.2 The faultstring Element
The faultstring element is mandatory. It should provide a human-readable description of the fault. Although the faultstring element is required, the text used to describe the fault is not standardized.
Optionally, the faultstring element can indicate the language of the text message using a special attribute, xml:lang. The set of valid codes is defined by IETF RFC 1766. For example, a Client fault could be generated with a Spanish-language text as shown in Listing 4-23.
Listing 4-23 Using the xml:lang Attribute in the faultstring Element
<?xml version="1.0" encoding="UTF-8"?>
<soap:Envelope xmlns:soap="http://schemas.xmlsoap.org/soap/envelope/">
  <soap:Body>
    <soap:Fault>
      <faultcode>soap:Client</faultcode>
      <faultstring xml:lang="es">
        El ISBN tiene letras invalidas
      </faultstring>
      <detail/>
    </soap:Fault>
  </soap:Body>
</soap:Envelope>
Although it's not specified, it's assumed that, in the absence of the xml:lang attribute, the default is English (xml:lang="en"). The xml:lang attribute is part of the XML 1.0 namespace, which does not need to be declared in an XML document.
4.6.3 The faultactor Element
The faultactor element indicates which node encountered the error and generated the fault (the faulting node). This element is required if the faulting node is an intermediary, but optional if it's the ultimate receiver. For example, let's assume that an intermediary node in the message path, the authentication node, did not recognize the mandatory (mustUnderstand="1") processed-by header block, so it generated a MustUnderstand fault. In this case the authentication node must identify itself using the faultactor element, as in Listing 4-24.
Listing 4-24 Locating the Source of the Fault Using the faultactor Element
<?xml version="1.0" encoding="UTF-8"?>
<soap:Envelope xmlns:soap="http://schemas.xmlsoap.org/soap/envelope/">
  <soap:Body>
    <soap:Fault>
      <faultcode>soap:MustUnderstand</faultcode>
      <faultstring>
        Mandatory header block not understood.
      </faultstring>
      <faultactor></faultactor>
      <detail/>
    </soap:Fault>
  </soap:Body>
</soap:Envelope>
The faultactor element may contain any URI, but is usually the Internet address of the faulting node, or the URI used by the actor attribute if a header block was the source of the error.
SOAP 1.1 doesn't recognize the concept of a role as distinct from a node. In fact, it lumps these two concepts together into the single concept actor. Thus you can see the faultactor as identifying both the node that generated the fault and the role that it was manifesting when it generated the fault.
4.6.4 The detail Element
The detail element of a fault message must be included if the fault was caused by the contents of the Body element, but it must not be included if the error occurred while processing a header block. The SOAP message in Listing 4-25 provides further details about the invalid ISBN reported in the faultstring element.
Listing 4-25 A SOAP Fault detail Element
<?xml version="1.0" encoding="UTF-8"?>
<soap:Envelope xmlns:soap="http://schemas.xmlsoap.org/soap/envelope/">
  <soap:Body>
    <soap:Fault>
      <faultcode>soap:Client</faultcode>
      <faultstring>
        The ISBN value contains invalid characters
      </faultstring>
      <detail>
        <!-- application-specific elements describing the invalid ISBN -->
      </detail>
    </soap:Fault>
  </soap:Body>
</soap:Envelope>
The detail element may contain any number of application-specific elements, which may be qualified or unqualified, according to their XML schema. In addition, the detail element itself may contain any number of qualified attributes, as long as they do not belong to the SOAP 1.1 namespace, "http://schemas.xmlsoap.org/soap/envelope/".
It's perfectly legal to use an empty detail element, but you must not omit the detail element entirely if the fault resulted while processing the contents of the original message's Body element.
4.6.4.1 Processing Header Faults: Omitting the detail Element
SOAP provides little guidance on how details about header faults should be provided. It says only that detailed information must be included in the Header element. Some SOAP toolkits place a SOAP Fault element inside the Header element, or nested within a header block, while other toolkits may use a different strategy.
4.6.5 Final Words about Faults
As a developer, it's your responsibility to be aware of the various circumstances under which faults must be generated, and to ensure that your code properly implements the processing of those faults.
This is probably a good time to recap. Faults result from one of several conditions:
The message received by the receiver is improperly structured or contains invalid data.
The incoming message is properly structured, but it uses elements and namespaces in the Body element that the receiver doesn't recognize.
The incoming message contains a mandatory header block that the receiver doesn't recognize.
The incoming message specifies an XML namespace for the SOAP Envelope and its children (Body, Fault, Header) that is not the SOAP 1.1 namespace.
The SOAP receiver has encountered an abnormal condition that prevents it from processing an otherwise valid SOAP message.
The first two conditions generate what are considered Client faults, faults that relate to the contents of the message: The client has sent an invalid or unfamiliar SOAP message to the receiver. The third condition results in a MustUnderstand fault, and the fourth results in a VersionMismatch fault. The fifth condition is considered a Server fault, which means the error was unrelated to the contents of the SOAP message. A server fault is generated when the receiver cannot process a SOAP message because of an abnormal condition. | http://www.informit.com/articles/article.aspx?p=169106&seqNum=6 | CC-MAIN-2019-30 | refinedweb | 2,410 | 52.09 |
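The recap amounts to a small decision procedure for picking a fault code. As a purely illustrative sketch — the condition names below are my own, not SOAP terminology:

```python
def choose_fault_code(bad_message_content=False,
                      unknown_body_elements=False,
                      unknown_mandatory_header=False,
                      wrong_envelope_namespace=False,
                      receiver_failure=False):
    """Map the five recap conditions onto SOAP 1.1 fault codes.

    The precedence here is illustrative only: a real node reports
    whichever check its processing model runs first.
    """
    if wrong_envelope_namespace:
        return "soap:VersionMismatch"
    if unknown_mandatory_header:
        return "soap:MustUnderstand"
    if bad_message_content or unknown_body_elements:
        return "soap:Client"
    if receiver_failure:
        return "soap:Server"
    return None  # no fault: process the message normally
```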
Some time ago an acquaintance started a Danish version of “Word a Week” on Twitter. Every week she would tweet a word that she thought wasn’t used enough. It could be a real word or some old slang. I found it very funny and thought “Why isn’t there a site which lists them?”. So, long story short, I started working on it and the site was born (I got her permission first!).
The site is very simple. It uses Unsplash to show a background and then a simple black box in the middle with a word, its meaning, and sometimes a…
Recently I have rediscovered my love for Golang. A language that I used to develop in professionally as a consultant.
Since that time the landscape has changed. New tools have arrived on the scene and Go finally has a module system.
Getting those two to play nicely on CircleCI wasn’t a quick and easy thing for me to do. So I hope by sharing my raw config it can help others.
So what the config is doing is the default CircleCI stuff, like checking out the code. But look at the
cache entries. They have been updated to match…
This time we are using Rollup instead of Webpack and it is super easy. Also it requires less configuration.
Before going further I suggest you read the other blog post which will explain the conventions behind the Rollup configuration.
// rollup.config.js
import fs from 'fs'
import CloudFormation from 'yaml-cfn'
import typescript from 'rollup-plugin-typescript'
import nodeResolve from 'rollup-plugin-node-resolve'

const defaultConfig = {
  plugins: [
    nodeResolve(),
    typescript(),
  ],

  output: {
    format: 'commonjs',
  },

  external: [
    'aws-sdk',
  ],
}

const { Resources } = CloudFormation.yamlParse(fs.readFileSync('template.yml'))

const entries = Object.values(Resources)
  .filter(resource => resource.Type == 'AWS::Serverless::Function')
  .filter(resource => resource.Properties.Runtime.startsWith('nodejs'))
  .map(resource => {…
In a world where scalability is everything it is hard to ignore the new wave of “Serverless”. One of the biggest players in this space is Amazon AWS with their Lambda offering.
In this post I am going to try and explain how I set up my AWS Sam projects using Webpack and TypeScript.
The thought behind my approach is to make the CodeUri part of my template point to the Webpack output, then have Webpack running in “watch” mode so it recompiles when I save a file. This adds the requirement of having Webpack running in the background…
Over the last decade I have started to travel more and more. When travelling a lot, one of the constant problems is figuring out how to pay for things like food, accommodation, etc. without paying millions in fees to your bank or Forex companies. So I was pretty happy when I found out about the new breed of FinTech companies and found Revolut.
Revolut is officially an Electronic Money Institution regulated by the FCA in the UK. But really it is an awesome app for travelling and Forex. Here is a short list of what makes it so awesome for travel:

…based on what happens in the application embedded in our BrowserWindow.
Fortunately Electron gives us access to ipcRenderer and ipcMain which together with a simple preload script makes this quite easy.
So let’s start with a simple Electron app. This will open and use our preload script.
When we receive an event on the postMessage channel we will just use console.log to log some output. If open-url is then triggered we will send a message to the renderer process for our mainWindow. | https://medium.com/@henrikbjorn | CC-MAIN-2021-39 | refinedweb | 590 | 64.71 |
Higher Order Functions
Closures, function factories, common factory pattern functions
In the first chapter we saw first-class functions, i.e., functions as first-class citizens of the language. Higher order functions are functions that do one of the following:
- Take a function as one of its arguments
- Returns a function
A function is said to be functional if it takes a function as its argument. Higher order functions form the backbone of functional programming.
Creating Closures
Mastering closures is the key to mastering JavaScript, and functional JavaScript. Closures are everywhere in JavaScript, and you would not be able to do very much in a functional manner without closures. Let us look at an example.
var showName = function() {
  var name = "FP JavaScript"
  return function() {
    console.log(name)
  }
}()

showName() //==>> FP JavaScript
console.log(name) //==>> Reference Error
We created an immediately executed anonymous function.
function() {}()
And assigned its return value to showName. showName is set to the function.
function() {
  console.log(name)
}
We also created a variable name in the same scope as the above function. The variable is visible inside the function, as the function prints its value.
Outside the immediately executed function, name is undefined. We get an error when we try to access it. However we can print the name calling showName. Even though the function which created name has run and returned, name continues to be accessible inside showName.
In effect we created a closure with a variable name for the function showName. You can do whatever you want with showName. Pass it to another function, or wherever you want. You can bank on its closure to faithfully follow it.
A functions closure is a pointer, that the function carries with it. It points to a table of all the variables in the scope where the function was created.
A closure is created every time a function returns another function defined within it.
Creating closures should come naturally to a JavaScript programmer. If not, keep practising closures. We will be making a lot of closures in this book. You should recognise them even if it is not always explicitly mentioned.
Function Factory
Consider the code below.
function add(x) {
  return function(y) {
    return x + y
  }
}

var add2 = add(2)
var add3 = add(3)

add2(4) //==>> 6
add3(6) //==>> 9
The function add takes a single variable x and returns a function, which in turn takes a single variable y and returns the sum of x and y.
add is a Function Factory. Given a value, it creates a function that adds a given value to the value stored in its closure.
So why not just write add as add(x, y)? This is surely simpler. Writing functions as function factories allows you to compose other functions. We will see more of this below, and in the chapter on composition.
We will look at some of the common functions that are written using the function factory pattern.
get
function get(prop) {
  return function(obj) {
    return obj[prop]
  }
}
What is interesting about this function is that it does no “getting” so to speak. It just returns another function.
var getName = get("name")

getName gets set to a function returned by get. But before returning anything, get creates a closure with variable prop (property) for getName. And the property is set to "name".
var book = {
  name: "FP JavaScript"
}

getName(book) //==>> FP JavaScript
get can be used with arrays too.
get(1)([1, 2, 3]) //==>> 2
This is the way get works in functional languages. This is useful when you compose it with other functions. Also useful if you are accessing the same property of a set of objects. We will see all this in action next.
map
We will rewrite the JavaScript array map function in a functional manner.
function map(fn) {
  return function(arr) {
    return Array.prototype.map.call(arr, fn)
  }
}

map(function(x) { return x * x })([1, 2, 3]) //==>> [ 1, 4, 9 ]
Notice that the arguments are flipped. The function comes first, then the array. Also we used Array.prototype.map.call instead of just calling arr.map. This is so that we can use our map function with array-like objects, such as arguments and DOM NodeLists.
Say you wanted to get all the emails from a list like this in an array.
var people = [
  { name: "John", email: "john@example.com" },
  { name: "Bill", email: "bill@example.com" }
]

map(get("email"))(people)
//==>> [ 'john@example.com', 'bill@example.com' ]
You see the advantage of writing get this way. You can pass it as an argument to another function. We composed get with map. Next we will see the advantage of writing map this way.
pluck
The pattern map(get()) we saw above is so common that we have a function for it called pluck.
function pluck(prop) {
  return map(get(prop))
}

pluck("email")(people)
//==>> [ 'john@example.com', 'bill@example.com' ]
pluck is a function composition of map and get(prop). Composition works from right to left.
map returns a function that requires an array. But has also set its required function get(prop) in its closure.
So pluck(‘email’) is a function returned by map. Now we need to call it with array to evaluate it. map then calls the arrays map function with the function in its closure.
forEach
function forEach(fn) {
  return function(arr) {
    Array.prototype.forEach.call(arr, fn)
  }
}
This works just like our map function earlier, except that it just iterates over an array, and does not return anything. This will also work with a DOMNodeList.
A common pattern we encounter often while doing client side development is iteration over a NodeList returned by document.querySelectorAll. To accomplish this we can write a generic higher order function to iterate over a NodeList given a selector.
<!DOCTYPE html>
<html>
<body>
<div>Hide Me</div>
<div>Hide Me Too</div>
<script>
function forEach(fn) {
  return function(arr) {
    Array.prototype.forEach.call(arr, fn)
  }
}

var displayNone = forEach(function(elem) {
  elem.style.display = "none"
})

displayNone(document.querySelectorAll("div"))
</script>
</body>
</html>
take
take takes a number n and returns a function, to which you must pass an array, to get the first n elements of the array.
function take(n) {
  return function(arr) {
    return arr.slice(0, n)
  }
}
flip
flip takes a function of two or more arguments and returns a function with the first two arguments flipped.
function flip(fn) {
  return function(first, second) {
    var rest = [].slice.call(arguments, 2)
    return fn.apply(null, [second, first].concat(rest))
  }
}
flip is useful when you need to partially apply a given function in the order you want the arguments. We will learn more about partial application in the chapter on currying.
memoize
Memoization is the process in which a function caches its results, and returns results from the cache, if the function was called before with the same argument(s).
function memoize(fn) {
  var cache = {}
  return function(arg) {
    return (arg in cache) ? cache[arg] : (cache[arg] = fn(arg))
  }
}
This is memoize for a function that takes one argument. It returns a function with an object cache in its closure. Every time the function is called it checks the cache for the argument. If found it returns the corresponding value. Otherwise it sets the cache with the argument and result, and returns the result.
memoize is very useful to optimise lengthy calculations, or expensive recursive operations.
The fibonacci series calculation is the popular example used to demonstrate the memoize function. Because the computation involved grows exponentially the larger the number you want to calculate in the series.
Each number in the fibonacci series is the sum of the previous two numbers, starting after the second number, e.g. 1, 1, 2, 3, 5, 8, 13, 21, etc. And we can write this as
function fib(n) {
  return n < 2 ? 1 : fib(n - 1) + fib(n - 2)
}
And let us run some tests.
var fibmemo = memoize(fib)

var start = new Date()
fibmemo(20)
console.log(new Date() - start) //==>> 11 ms on my machine

start = new Date()
fibmemo(20)
console.log(new Date() - start) //==>> 0 ms
We need to modify memoize to handle functions with multiple arguments. Fortunately that is quite simple.
function memoize(fn) {
  var cache = {}
  return function() {
    var args = Array.prototype.slice.call(arguments)
    return (args in cache) ? cache[args] :
      (cache[args] = fn.apply(null, args))
  }
}
First we convert the arguments passed to an array in args. Since args is an array and is used in the context of a object key (hash), it will be coerced into a string. Second, we now apply the array to fn.
once
once will create a function you can only run once. Subsequent invocations will return the first result.
function once(fn) {
  var cache = {}
  return function() {
    return ("once" in cache) ? cache["once"] :
      (cache["once"] = fn.apply(null, arguments))
  }
}
On Fri, Sep 3, 2010 at 9:57 PM, David Hutto <smokefloat at gmail.com> wrote: > First of all, I'll respond more thoroughly tomorrow, when I can review > what you said more clearly, but for now I'll clarify. > > Here is the whole code that I'm using: > > > > On Fri, Sep 3, 2010 at 9:12 PM, Steven D'Aprano <steve at pearwood.info> wrote: >> On Fri, 3 Sep 2010 12:24:00 pm David Hutto wrote: >>> In the below function I'm trying to iterate over the lines in a >>> textfile, and try to match with a regex pattern that iterates over >>> the lines in a dictionary(not {}, but turns a text list of >>> alphabetical words into a list using readlines()). >>> >>> def regexfiles(filename): >> [...] >> >> Your function does too much. It: >> >> 1 manages a file opened for input and output; >> 2 manages a dictionary file; >> 3 handles multiple searches at once (findall and search); >> 4 handles print output; >> >> and you want it to do more: >> >> 5 handle multiple "select" terms. >> >> >> Your function is also poorly defined -- I've read your description >> repeatedly, and studied the code you give, and your three follow up >> posts, and I still don't have the foggiest clue of what it's supposed >> to accomplish! > > This is supposed to recreate a thought experiment I've heard about, in which, > if you have an infinite amount of monkeys, with an infinite amount of > typewriters, > they'll eventually spit out Shakespeare. > > So, I create a random length file, with random characters, then regex > it for the iteration > of dictionary terms, but the regex is needed further for the > theoretical exploratory purposes > into the thought experiment. If dictionary patterns are matched, then > it needs further regex > for grammatically proper structures, even if they don't make > sense(take mad libs for example), > but are still grammatically correct, randomly produced files. > > So the bulldozer is needed. 
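As an aside, the word-matching stage of the experiment described in this message — generate random "typewriter" text, then keep only real dictionary words — can be sketched without regular expressions at all. The dictionary path and alphabet below are assumptions for illustration:

```python
import random
import string

def monkey_text(length):
    """Random 'typewriter' output: lowercase letters and spaces."""
    alphabet = string.ascii_lowercase + " "
    return "".join(random.choice(alphabet) for _ in range(length))

def real_words(text, dictionary_words):
    """Return the whitespace-separated chunks of text found in the dictionary."""
    return [w for w in text.split() if w in dictionary_words]

# Hypothetical usage with a words file, one word per line:
# with open("/usr/share/dict/words") as f:
#     words = {line.strip() for line in f}   # strip() avoids the '\n' problem
# print(real_words(monkey_text(10000), words))
```

Note the strip() in the commented usage: it removes the trailing '\n' from each dictionary line, which is exactly the mismatch the thread runs into below.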
I forgot, the randomly generated files with random len, are regexed, for words,, then sorted into a range file for len of the words contained, then regexed for grammatical structure, and sorted again. The latters of this have not been set in yet, just up until it finds the len of real words in the file, then regex of the grammar is next on my list. So it's more practice with regex, than use a bulldozer to dig a fire pit. > > > You do two searches on each iteration, but other than >> print the results, you don't do anything with it. > > I had to print the results, in order to understand why using 'apple' > in a variable > yielded something different than when I iterated over the text file. > The end result was that > the list of dictionary words ['a\n'', 'b\n'] had \n, which was the > extra character in the iteration I was referring to, > and thanks to printing it out I was able to further isolate the > problem through len(). > > So rstrip() removed '\n' from the iterated term in the text file, > yielding just the ['term'], and not ['term\n']. > > Print helps you see the info first hand. > >> >> What is the *meaning* of the function? "regexfiles" is a meaningless >> name, and your description "I'm trying to iterate over the lines in a >> textfile, and try to match with a regex pattern that iterates over the >> lines in a dictionary" is confusing -- the first part is fine, but what >> do you mean by a regex that iterates over the lines in a dictionary? >> >> What is the purpose of a numeric variable called "search"? It looks like >> a counter to me, not a search, it is incremented each time through the >> loop. The right way to do that is with enumerate(), not a manual loop >> variable. >> >> Why do you call readlines() instead of read()? This makes no sense to >> me. 
You then convert a list of strings into a single string like this: >> >> readlines() returns ['a\n', 'b\n', 'c\n'] >> calling str() returns "['a\n', 'b\n', 'c\n']" >> >> but the result includes a lot of noise that weren't in the original >> file: open and close square brackets, commas, quotation marks. >> >> I think perhaps you want ''.join(readlines()) instead, but even that is >> silly, because you should just call read() and get 'a\nb\nc\n'. >> >> You should split this up into multiple functions. It will make >> comprehension, readability, debugging and testing all much, much >> easier. Code should be self-documenting -- ideally you will never need >> to write a comment, because what the code does should be obvious from >> your choice of function and variable names. >> >> I don't understand why you're using regex here. If I'm guessing >> correctly, the entries in the dictionary are all ordinary (non-regex) >> strings, like: >> >> ape >> cat >> dog >> >> only many, many more words :) >> >> Given that, using the re module is like using a bulldozer to crack a >> peanut, and about as efficient. Instead of re.search(target, text) you >> should just use text.find(target). There's no built-in equivalent to >> re.findall, but it's not hard to write one: >> >> def findall(text, target): >> results = [] >> start = 0 >> p = text.find(target) >> while p != -1: >> results.append(p) >> p = text.find(target) >> return results >> >> (This returns a list of starting positions rather than words. It's easy >> to modify to do the other -- just append target instead of p.) >> >> >> Anyway, here's my attempt to re-write your function. I've stuck to >> regexes just in case, and there's lots of guess-work here, because I >> don't understand what you're trying to accomplish, but here goes >> nothing: > > The above should explain a little more, and tomorrow, I'll thoroughly > review your post. 
>> >> >> # Change this to use real English *wink* >>> >> def get_dict_words(filename=''): >> """Return a list of words from the given dictionary file. >> >> If not given, a default dictionary is used. >> The format of the file should be one word per line. >> """ >> if not filename: >> filename = DEFAULT_DICT >> # Don't use file, use open. >> dictionary = open(filename) >> words = dictionary.readlines() >> # Best practice is to explicitly close files when done. >> dictionary.close() >> return words >> >> >> def search(target, text): >> """Return the result of two different searches for target in text. >> >> target should be a regular expression or regex object; text should >> be a string. >> >> Result returned is a two-tuple: >> (list of matches, regex match object or None) >> """ >> if isinstance(target, str): >> target = re.compile(target) >> a = target.findall(text) >> b = target.search(text) >> return (a, b) >> >> >> def print_stuff(i, target, text, a, b): >> # Boring helper function to do printing. >> print "counter = %d" % i >> print "target = %s" % target >> print "text = %s" % text >> print "findall = %s" % a >> print "search = %s" % b >> print >> >> >> def main(text): >> """Print the results of many regex searches on text.""" >> for i, word in get_dict_words(): >> a, b, = search(word, text) >> print_stuff(i, word, text, a, b) >> >> >> # Now call it: >> fp = open(filename) >> text = fp.read() >> fp.close() >> main(text) >> >> >> >> I *think* this should give the same results you got. >> >> >> -- >> Steven D'Aprano >> _______________________________________________ >> Tutor maillist - Tutor at python.org >> To unsubscribe or change subscription options: >> >> > | https://mail.python.org/pipermail/tutor/2010-September/078259.html | CC-MAIN-2014-15 | refinedweb | 1,188 | 70.13 |
Say there’s a component I want to use in both the Ember engine and the app. How do I share it between both in an effective way?
Sharing components between ember engines and app
JimParsons #1
If that seems too unpalatable for whatever reason, I feel like I’ve seen another approach discussed in a thread here recently but I can’t seem to find it offhand. Maybe poke around a little bit.
If that seems to unpalatable for whatever reason I feel like I’ve seen another approach discussed in a thread here recently but I can’t seem to find it offhand. Maybe poke around a little bit.
JimParsons #3 | https://discuss.emberjs.com/t/sharing-components-between-ember-engines-and-app/16197 | CC-MAIN-2019-13 | refinedweb | 115 | 61.16 |
I found that straightforward chain merging with the pandas library is quite inefficient when you merge a lot of datasets with a big number of columns on the same column.

The root of the problem is the same as when we join a lot of strs the dumb way:
joined = reduce(lambda a, b: a + b, str_list)
Instead of:
joined = ''.join(str_list)
Doing a chain merge we copy the dataset many times (in my case almost 100 times) instead of just filling columns from several datasets at once or in order.

Is there some efficient way (i.e., with linear complexity in the number of datasets) to chain merge a lot of datasets on the same column?
If you have a list of your dataframes dfs:

dfs = [df1, df2, df3, ..., dfn]

you can join them using pandas' concat function, which as far as I can tell is faster than chaining merge.
concat only joins dataframes based on an index (not a column), but with a little pre-processing you can simulate a merge operation.

First replace the index of each of your dataframes in dfs with the column you want to merge on. Let's say you want to merge on column "A":
dfs = [df.set_index("A", drop=True) for df in dfs]
Note that this will overwrite the previous indices (merge would do this anyway) so you might want to save these indices somewhere (if you are going to need them later for some reason).
Now we can use concat, which will essentially merge on the index (which is actually your column!):

merged = pd.concat(dfs, axis=1, keys=range(len(dfs)), join='outer', copy=False)
The join= argument can either be 'inner' or 'outer' (default). The copy= argument keeps concat from making unnecessary copies of your dataframes.
You can then either leave "A" as the index, or you can make it back into a column by doing:

merged.reset_index(drop=False, inplace=True)
The keys= argument is optional and assigns a key value to each dataframe (in this case I gave it a range of integers, but you could give them other labels if you want). This allows you to access columns from the original dataframes. So if you wanted to get the columns that correspond to the dataframe with key 20 in dfs you can call:

merged[20]
Without the keys= argument it can get confusing which rows are from which dataframes, especially if they have the same column names.
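To make the equivalence concrete, here is a tiny self-contained demo (the frame contents are made up) showing that the chain merge and the index-based concat produce the same result when every frame carries the full key set:

```python
import pandas as pd
from functools import reduce

# Three small frames sharing key column "A".
dfs = [
    pd.DataFrame({"A": [1, 2, 3], f"col{i}": [10 * i, 20 * i, 30 * i]})
    for i in range(3)
]

# Chain merge: each step copies the accumulated frame again.
merged_slow = reduce(lambda a, b: a.merge(b, on="A", how="outer"), dfs)

# Index-based concat: align all frames on "A" in a single pass.
merged_fast = pd.concat(
    [df.set_index("A") for df in dfs], axis=1, join="outer"
).reset_index()

print(merged_slow.equals(merged_fast))
```

When the key sets differ between frames, the outer concat introduces NaNs (and float upcasting) just as the outer merge does, so compare dtypes as well if that matters to you.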
I'm still not entirely sure if concat runs in linear time, but it is definitely faster than chaining merge. Using IPython's %timeit on lists of randomly generated dataframes (lists of 10, 100 and 1000 dataframes):
def merge_with_concat(dfs, col):
    dfs = [df.set_index(col, drop=True) for df in dfs]
    merged = pd.concat(dfs, axis=1, keys=range(len(dfs)), join='outer', copy=False)
    return merged

dfs10 = [pd.util.testing.makeDataFrame() for i in range(10)]
dfs100 = [pd.util.testing.makeDataFrame() for i in range(100)]
dfs1000 = [pd.util.testing.makeDataFrame() for i in range(1000)]

%timeit reduce(lambda df1, df2: df1.merge(df2, on="A", how='outer'), dfs10)
10 loops, best of 3: 45.8 ms per loop

%timeit merge_with_concat(dfs10, "A")
100 loops, best of 3: 11.7 ms per loop

%timeit merge_with_concat(dfs100, "A")
10 loops, best of 3: 139 ms per loop

%timeit reduce(lambda df1, df2: df1.merge(df2, on="A", how='outer'), dfs100)
1 loop, best of 3: 1.55 s per loop

%timeit merge_with_concat(dfs1000, "A")
1 loop, best of 3: 9.67 s per loop

%timeit reduce(lambda df1, df2: df1.merge(df2, on="A", how='outer'), dfs1000)
# I killed it after about 5 minutes so the other one is definitely faster
#include <rte_compat.h>
#include <rte_ethdev.h>
#include <rte_ether.h>
ixgbe PMD specific functions.
Definition in file rte_pmd_ixgbe.h.
Response sent back to ixgbe driver from user app after callback
Definition at line 657 of file rte_pmd_ixgbe.h.
Notify VF when PF link status changes.
Set the VF MAC address.
Enable/Disable VF VLAN anti spoofing.
Enable/Disable VF MAC anti spoofing.
Enable/Disable vf vlan insert
Enable/Disable tx loopback
set all queues drop enable bit
set drop enable bit in the VF split rx control register
Enable/Disable vf vlan strip for all queues in a pool
Enable MACsec offload.
Disable MACsec offload.
Configure Tx SC (Secure Connection).
Configure Rx SC (Secure Connection).
Enable Tx SA (Secure Association).
Enable Rx SA (Secure Association).
Set RX L2 Filtering mode of a VF of an Ethernet device.
Enable or disable a VF traffic receive of an Ethernet device.
Enable or disable a VF traffic transmit of the Ethernet device.
Enable/Disable hardware VF VLAN filtering by an Ethernet device of received VLAN packets tagged with a given VLAN Tag Identifier.
Set the rate limitation for a vf on an Ethernet device.
Set all the TCs' bandwidth weight.
The bw_weight means the percentage occupied by the TC. It can be taken as the relative min bandwidth setting.
Initialize bypass logic. This function needs to be called before executing any other bypass API.
Return bypass state.
Set bypass state
Return bypass state when given event occurs.
Set bypass state when given event occurs.
Set bypass watchdog timeout count.
Get bypass firmware version.
Return bypass watchdog timeout in seconds
Reset bypass watchdog timer
Acquire swfw semaphore lock for MDIO access
Release swfw semaphore lock used for MDIO access
Read PHY register using MDIO without MDIO lock The lock must be taken separately before calling this API
Write data to PHY register using without MDIO lock The lock must be taken separately before calling this API
Get port fdir info
Get port fdir status | https://doc.dpdk.org/api-20.11/rte__pmd__ixgbe_8h.html | CC-MAIN-2021-39 | refinedweb | 337 | 61.12 |
Attribute programming is a declarative programming model tool that you should keep in your development toolbox.
Programming with attributes refines and solves some of the difficult development problems we face every day.
While this article is squarely aimed at beginners, the reader must have an understanding of
Reflection in order to comprehend attribute programming.
When you see this property declaration:
[CustomAttribute]
public string MyProperty { get; set; }
...you should recognize a property declaration that has been 'decorated' with the custom attribute named CustomAttribute.
CustomAttribute
There are a few things going on here.
First of all, regardless of whether you've ever created your own attributes and used them in your code, you must have seen various .NET Framework-related
attributes at certain times in your applications. For instance, if you generate a class from an XSD schema, you'll notice a bunch of attributes attached
to almost all the classes in the resultant source file.
Why are they there? What do they do?
At runtime, the .NET Framework investigates those attributes and executes code based on the values those attributes contain.
How does the runtime investigate those attributes?
Reflection.
Now, why use attributes?
Basically, the answer is to simplify your programming and make your code more readable by making more of your code "declarative" in nature.
Attribute programming is a declarative programming model.
Attribute programming is effective when some class or class member needs to know something
about itself at runtime, and it would make for very inelegant or clumsy code to feed that data to the class or member in any other way.
An example: a class is "hydrated" from a text file at runtime. The name of the text file is known at design time.
The class needs a way to know what text file to read at runtime to hydrate itself.
There would be many ways to solve this problem. Among them, a popular one would be to add a property like ContentFileName
to the object and hard-code in the name of the file. That would work but it's clumsy and unsightly.
ContentFileName
At runtime, that class is going to hydrate itself and become, essentially, data. The name of the file to hydrate with is data.
So we're dealing with "data about data", also called "metadata". Whenever you're dealing with metadata, think about attribute programming.
A better solution is to create your own custom attribute. It must inherit from System.Attribute, or another class that itself inherits from
Attribute.
System.Attribute
Attribute
A typical custom attribute could look like:
public class ObjectTextAttribute : Attribute
{
}
Now back in your object code, do this:
public class ObjectTextAttribute : Attribute
{
public ObjectTextAttribute()
{
}
public string ContentFileName { get; set; }
}
[ObjectTextAttribute(ContentFileName="blah.txt")]
public class TextFile
{
}
You're about halfway done now. You've used attribution to tell that class to hydrate itself from the text file named "blah.txt".
Other class definitions could hydrate themselves from different text files.
What's left is the code to parse out those attributes and to do something with them.
Reflection saves the day.
A typical solution for this is to create something like a ReflectOnAttributes method in this class, or better yet,
a base class that this class is derived from.
ReflectOnAttributes
In the ReflectOnAttributes method, you would use reflection code to find and isolate the type of attribute you're looking for,
and then do something once you found the value(s).
In this case, we'll scan for the MyCustomAttribute value and then parse out the ContentFileName once found. When we have that, we'll read the text in from that
file and assign it to a property in our class.
MyCustomAttribute
using System;
using System.IO;
namespace ClassLibrary
{
public class MyCustomAttribute : Attribute
{
public String ContentFileName { get; set; }
}
[MyCustomAttribute(ContentFileName = "C:\blah.txt")]
public class MyCustomClass
{
public MyCustomClass()
{
ReflectOnClassAttributes();
}
public String BodyText { get; private set; }
private void ReflectOnClassAttributes()
{
//inspect for the custom attribute and obtain value of property desired
object[] classAttrs = this.GetType().GetCustomAttributes(typeof(MyCustomAttribute), true);
if ((classAttrs != null) && (classAttrs.Length > 0))
{
//inspect for the body attribute, this gives our content for the main body
MyCustomAttribute attr = (MyCustomAttribute)classAttrs[0];
string fileName = attr.ContentFileName;
BodyText = File.ReadAllText(fileName);
}
}//method
}//class
}//namespace
There's a lot going on in the (above) code. Let's take it step-by-step.
We start out by defining an attribute class called MyCustomAttribute. It must inherit from the built-in .NET Framework
System.Attribute class.
Next we define a class that will use MyCustomAttribute. MyCustomClass is decorated with a citation to "MyCustomAttribute",
and provides a value in the attribute constructor to the property ContentFileName. This is where we give the class knowledge about which text
file to open and read and hydrate itself.
MyCustomClass
When the class is instantiated and ReflectOnClassAttributes() is called, the
Reflection process
interrogates the class declaration and determines what, if any, attributes are declared. When it finds the MyCustomAttribute, it digs into the property and
gets its value. Then it simply reads the text file into the BodyText property.
ReflectOnClassAttributes()
BodyText
This is a simple example of attribute programming, and you can take it further in your own work. And you should, because attribute programming resolves and
refines many of the programming problems we're faced with on a daily basis.
Submitted May 22 2012.
This article, along with any associated source code and files, is licensed under The Code Project Open License (CPOL)
public MyCustomClass()
{
BodyText = File.ReadAllText("C:\blah.txt");
}
class Person
{
[DisplayName("First Name")]
public string FirstName {get;set;}
[DisplayName("Percent Body fat")]
[DisplayFormat("#.## %")]
public double BodyFat {get;set;}
...
public int Age {get;set;}
}
General News Suggestion Question Bug Answer Joke Praise Rant Admin
Use Ctrl+Left/Right to switch messages, Ctrl+Up/Down to switch threads, Ctrl+Shift+Left/Right to switch pages. | http://www.codeproject.com/Articles/390422/Quick-Overview-of-Csharp-Attribute-Programming | CC-MAIN-2016-30 | refinedweb | 966 | 57.27 |
The Atlassian Community can help you and your team get more value out of Atlassian products and practices.
Hey everyone,
I'm struggling a bit to see how the logic of Scriptrunner behaviors triggers and processes the scripts.
I currently have one behavior linked to a JSM Request Type.
On this behavior I've added several fields and each field as it's own server side script.
Now I'm seeing some strange things happening and wanted to double check on whether my understanding of a behavior trigger and execution is correct.
When I change a field on my Request Type, will it always run thu all the fields I've added in that behavior and execute the script?
Or do the fields that I add in the behavior mean that it will only trigger when that specific field is changed? I was always under the impression that a behavior will trigger when you change the field that is listed in that behavior and only for that field.
It's now that I'm seeing some weird things happening that lead me to believe otherwise.
Or maybe even clearer. It seems like when I hit my Create button to submit my request, the request is evaluated (errors on mandatory fields) and then all the behaviors seem to trigger.
I don't have any initialiser set but it almost seems like when the validation of the request is done (mandatory fields) that the form reloads and triggers all the behaviours as if all the fields I have defined are changed..?
Anybody got some thoughts?
You can use your chrome developer tool and examine all the network calls to rest/scriptrunner/behaviours/latest/validators.json and rest/scriptrunner/behaviours/latest/runvalidator.json
Here is what I've deduced from that..
So the trick is, assume each server-side script will be run once with each time a screen is loaded. If you will react to changes, make sure the changes are real. For example, I have a behaviours that clears field A every time field B is edited. I don't want to have this script wipe field A each time I edit the issue and change other fields (but not B). So to protect against that, I check the value of field B against the underlyingIssue and if it's different, then I clear field A.
Something like this:
import com.atlassian.jira.component.CoponentAccessor
def fieldB = getFieldById(fieldChanged)
def fieldA = getFieldByName('fieldA')
def cfB = ComponentAccessor.customFieldManager.getCustomFieldObject(fieldB.fieldId)
if(underlyingIssue && fieldB.value != underlyingIssue.getCustomFieldValue(cfB)){
fieldA.setValue('')
}
Hey @Peter-Dave Sheehan ,
Interesting findings and workaround. This does throw of how I've been thinking about behaviours in general. My broad idea/understanding was that
Like you mention, any time a single field is changed, only the part for that field is executed. However, when a field is loaded/reloaded it will call the initalizer (makes sense when opening to me but not after) but also all the field scripts that are in your mapping.
Now in general that doesn't seem like much of an issue and I tend to not have an issue with it because I only use my behaviours for/map them to Request Types, to assist the customer, which doesn't interfere any more afterwards.
I only noticed the behaviour because indeed a field with a specific value cleared some other fields. Then I add data to those newly cleared field. Where it goes wrong then is that when I hit the create button and the mandatory fields are checked (from the request type) which in turn returns a "you missed a field", I guess it reloads the screen and runs both the initializer and all the scripts from the mapped fields causing my newly manually entered data to be cleared again.
The way your workaround then works is by double checking the front end value vs the underlying issue value and only executing the behaviour if this really changed. Nice idea!
I did reach out to Adaptavist for this as well to get a definitive answer (besides my best educated guess) and here is what they provided me.
I can confirm that all behaviours attached to fields will be triggered when you press the "Create" button. This is how Behaviour works in front end.
Fortunately, this problem can be easily worked around by using setError() and clearError(). To do that:
1. Add following snippet to field A:
getFieldByName("Field X").setError("You need to enter a value")getFieldByName("Field Y").setError("You need to enter a value")getFieldByName("Field Z").setError("You need to enter a value")
- Adding following snippet to fields X, Y and Z:
if (getFieldById(getFieldChanged()).getValue() == "") { getFieldById(getFieldChanged()).setError("You need to enter a value")} else { getFieldById(getFieldChanged()).clearError()}
Unless all three fields have value, the user can't press the "Create" button and, thus, won't trigger the field A behaviour to clear value.
essentially it seems it would make more sense then to handle the mandatory fields yourself in the code instead of having it handled by the request type mandatory flag and thus blocking the user from hitting the create button before all is filled. | https://community.atlassian.com/t5/Adaptavist-questions/Scriptrunner-behaviours-logic/qaq-p/1688818 | CC-MAIN-2022-40 | refinedweb | 868 | 61.77 |
22 January 2010 17:52 [Source: ICIS news]
(adds updates throughout)
HOUSTON (ICIS news)--Braskem’s consolidation of the petrochemical arms of Brazil’s Petrobras, Odebrecht and Quattor will make it the largest plastic resins producer in the Americas and is part of a long-term plan to become one of the world’s five largest producers, the Brazilian company said on Friday.
“The acquisition is creating a world-scale player,” chief executive Bernardo Gradin said on a conference call with analysts. “Braskem positions itself as the biggest resin producer in the ?xml:namespace>
The merger of Braskem and rival Quattor will create
After the deal is completed, Braskem will have a capacity of 3.04m tonnes/year of polyethylene (PE), 1.97m tonnes/year of polypropylene (PP) and 510,000 tonnes/year of polyvinyl chloride (PVC), the company said.
Overall, the combined resin capacity of 5.51m tonnes/year would rank first in the
Going forward, Braskem cited “aggressive growth plans” in
In the transaction, Brazilian holding company, and operates 10 plants in the Brazilian states of
The deal, which effectively nationalises
Talks of a Braskem-Quattor alliance were first reported last August.
Additional reporting by Nigel Davis
($1 = R1.81)
For more on PE, PP | http://www.icis.com/Articles/2010/01/22/9328281/braskem-quattor-deal-creates-top-americas-resins-producer.html | CC-MAIN-2015-22 | refinedweb | 207 | 52.29 |
0
Hello friends. I'm working on a program to sort a list object using toArray, Arrays.sort, and Arrays.asList. The problem asks that I then apply Collections.binarySearch directly to the list object. I know what the problem is (which I have stated through comments in my code) but I don't know how to resolve it.
Here is my code :
import java.util.*; //begin class PartC public class PartC { //begin main method public static void main(String args[]) { //construct ArrayList object ArrayList<Integer> mylist = new ArrayList<Integer>(); //ArrayList<Integer> newlist = new ArrayList<Integer>(mylist); //initialize scanner Scanner scan = new Scanner(System.in); //ask user how long the list will be System.out.println("Enter the number of values to be added to the list"); int nums = scan.nextInt(); //ask user to input each value separated by the enter key System.out.println("Input the values"); //add all values to the list for(int index = 0; index<nums; ++index) { int input = scan.nextInt(); mylist.add(input); } Object[] a = mylist.toArray(); Arrays.sort(a); System.out.println(Arrays.asList(a)); //not working because mylist is not sorted in ascending order //must make sorted list the parameter for the Collections.binarySearch System.out.println(mylist); System.out.println("Enter the value you want to search for"); int searchValue = scan.nextInt(); int where = Collections.binarySearch(mylist, searchValue); System.out.println("The value you searched for is at index " + where); }//terminates main method }//terminates class PartC
Thanks for all your help. | https://www.daniweb.com/programming/software-development/threads/189034/collections-binarysearch | CC-MAIN-2017-17 | refinedweb | 249 | 51.65 |
July 24, 2002 - Creating Web Service Proxy Namespace
wsdl.exe. Your .NET Framework should support this executable by including its directory in your path. Let's demonstrate this step with the
addWeb service. Open a Command Prompt window and
cd(change directory) to
d:\aspDemo. Type
wsdland verify you get the help for this executable. Type the full command now (in one line):
The last entry on this line is the input to theThe last entry on this line is the input to the
wsdl /l:js /namespace:calcService /out:calc
simpleCalc
calc
simpleCalc.asmx and the echo of the
wsdl command:
| http://www.webreference.com/js/tips/020724.html | CC-MAIN-2017-09 | refinedweb | 102 | 66.33 |
PySide QThread.terminate() causing fatal python error
- nullstellensatz
I am using PySide version 1.2.2, which wraps the Qt v4.8 framework. I am in a situation where I have to choose between having my application wait for a QThread that I no longer need to exit normally (it is quite possible that the thread will block indefinitely), and giving the unresponsive thread a grace period (of several seconds), then calling QThread.terminate() on it. Though I wish I could, I cannot let the QThread object go out of scope while the underlying thread is still running, since this will throw the error "QThread: Destroyed while thread is still running" and almost surely cause a segfault.
Please note that I am aware that terminating QThreads is dangerous and highly discouraged. I am just trying to explore my options here.
When I try to terminate a thread however, my application crashes with the following error:
bq. Fatal Python error: This thread state must be current when releasing
You can try this out yourself by copy/pasting and running the following code:
@from PySide import QtCore, QtGui
class Looper(QtCore.QThread):
"""QThread that prints natural numbers, one by one to stdout."""
def init(self, *args, **kwargs):
super(Looper, self).init(*args, **kwargs)
self.setTerminationEnabled(True)
def run(self): i = 0 while True: self.msleep(100) print(i) i += 1
Initialize and start a looper.
looper = Looper()
looper.start()
Sleep main thread for 5 seconds.
QtCore.QThread.sleep(5)
Terminate looper.
looper.terminate()
After calling terminate(), we should call looper.wait() or listen
for the QThread.terminated signal, but that is irrelevant for
the purpose of this example.
app = QtGui.QApplication([])
app.exec_()
@
How do you properly terminate QThreads in python? I reckon that the error I am getting has got something to do with releasing of the Global Interpreter Lock, but I am not sure exactly what is going wrong, and how to fix it.
Hi,
There is a dedicated forum for "language bindings":. You may get a better answer there.
- SGaist Lifetime Qt Champion
Hi,
Topic moved
On to your question: terminate and properly do not belong together. You should write your thread so that you can exit/quit it properly. The documentation of QThread also explains that termination can lead to unexpected results.
@
class Looper(QtCore.QThread):
"""QThread that prints natural numbers, one by one to stdout."""
def init(self, *args, **kwargs):
super(Looper, self).init(*args, **kwargs)
self.setTerminationEnabled(True)
def stop(self): self.continue = False def run(self): i = 0 self.continue = True while self.continue: self.msleep(100) print(i) i += 1
@
Something like that and you can stop it gracefully
- nullstellensatz
@SGaist I am already using flags as you suggested. The operation that is taking too long is a library function call that I have little control over. I am looking to understand why I cannot terminate() QThreads in PySide.
- JKSH Moderators
[quote author="nullstellensatz" date="1421370989"]The operation that is taking too long is a library function call that I have little control over.[/quote]That's unfortunate. Any chance of getting the library's author to provide a better API?
[quote]I am looking to understand why I cannot terminate() QThreads in PySide.[/quote]Not sure (I don't have Python experience), but judging from your error message, I'm guessing it's related to how PySide handles the "Global Interpreter Lock": | https://forum.qt.io/topic/50171/pyside-qthread-terminate-causing-fatal-python-error | CC-MAIN-2018-09 | refinedweb | 569 | 58.48 |
#include <SD.h>#define BUF_DIM 32#define CHIP_SELECT 10File file;//-------------------------------------------------------uint32_t lineCount(File* f) { char buf[BUF_DIM]; uint32_t nl = 0; int n; // rewind file f->seek(0); // count lines while ((n = f->read(buf, sizeof(buf))) > 0) { for (int i = 0; i < n; i++) { if (buf[i] == '\n') nl++; } } return nl;}//-------------------------------------------------------void setup() { Serial.begin(9600); Serial.println("start"); if (!SD.begin(CHIP_SELECT)) { Serial.println("begin error"); return; } file = SD.open("TEST.TXT"); if (!file) { Serial.println("open error"); return; } uint32_t m = micros(); uint32_t n = lineCount(&file); m = micros() - m; Serial.print(file.size()); Serial.println(" bytes"); Serial.print(n); Serial.println(" lines"); Serial.print(m); Serial.println(" micros"); Serial.println("done"); }void loop() {}
start9388 bytes299 lines53084 microsdone
start9388 bytes299 lines327632 microsdone
start9388 bytes299 lines72732 microsdone
start9388 bytes299 lines47452 microsdone
uint32_t
f->seek(0)
What will happon if there are lines with no characters, so only a '\n' ,will that be detected as EOF or is n = 1 in this casen=0 when EOF is detected?
About the buf size, how larger the buffer how faster the file is readbut using a buffer of size 1 , will result in a read time of 0.32secSo a buffer of 30 is really fast 0.053sec
((n = f->read(buf, sizeof(buf))) > 0)
while ((n = f->read(buf, sizeof(buf))) > 0) //read a buffer in sequence of size buf, if n=0 no characters any more { for (int i = 0; i < n; i++) // check in a loop is one of the characters is a end of line { if (buf[i] == '\n') nl++; } }
There is already buffering in the SD library so there is not much point creating really big buffers in your code (wastes memory for not much gain at all).
"There is very little memory cost for this buffer since it is allocated on the stack only during the call to lineCount().
But if im correct , after a use it will still be allocated on the SRAM and can not be used for other codes anymore. it will not be released
The globals area, where global variables are stored. The heap, where dynamically allocated variables are allocated from. The stack, where parameters and local variables are allocated from.
Here is the sequence of steps that takes place when a function is called:1. The address of the instruction beyond the function call is pushed onto the stack. This is how the CPU remembers where to go after the function returns.2. Room is made on the stack for the function's return type. This is just a placeholder for now.3. The CPU jumps to the function's code.4. The current top of the stack is held in a special pointer called the stack frame. Everything added to the stack after this point is considered "local" to the function.5. All function arguments are placed on the stack.6. The instructions inside of the function begin executing.7. Local variables are pushed onto the stack as they are defined. When the function terminates, the following steps happen:1. The function's return value is copied into the placeholder that was put on the stack for this purpose.2. Everything after the stack frame pointer is popped off. This destroys all local variables and arguments.3. The return value is popped off the stack and is assigned as the value of the function. If the value of the function isn't assigned to anything, no assignment takes place, and the value is lost.4. The address of the next instruction to execute is popped off the stack, and the CPU resumes execution at that instruction..
uint16_t f() { char a[256]; Serial.println(FreeRam()); // make sure compiler allocates a[] for (int i = 0; i < 255; i++) a[i] = i; uint16_t r = 0; for (int i = 0; i < 255; i++) r += a[i]; return r;}//----------------------------------------------------------------void setup() { Serial.begin(9600); Serial.println(FreeRam()); Serial.print(f()); Serial.println(" f return value"); Serial.println(FreeRam());}void loop() {}//------------------------------------------------------------------------------#ifdef __arm__// should use uinstd.h to define sbrk but Due causes a conflictextern "C" char* sbrk(int incr);#else // __ARM__extern char *__brkval;extern char __bss_end;#endif // __arm__/** Amount of free RAM * \return The number of free bytes. */int FreeRam() { char top;#ifdef __arm__ return &top - reinterpret_cast<char*>(sbrk(0));#else // __arm__ return __brkval ? &top - __brkval : &top - &__bss_end;#endif // __arm__}
1824156365409 f return value1824
Please enter a valid email to subscribe
We need to confirm your email address.
To complete the subscription, please click the link in the
Thank you for subscribing!
Arduino
via Egeo 16
Torino, 10131
Italy | http://forum.arduino.cc/index.php?topic=50181.0;prev_next=prev | CC-MAIN-2015-06 | refinedweb | 762 | 63.8 |
@hangouts91 Your
lines declaration is a string, not a tuple (you are missing the trailing comma). Change it to
lines = ('numtrd',)
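A quick stdlib check shows why the trailing comma matters — in Python, parentheses alone are just grouping; the comma is what makes the tuple:

```python
# Parentheses without a comma are just grouping; the comma makes the tuple.
not_a_tuple = ('numtrd')   # actually the plain string 'numtrd'
a_tuple = ('numtrd',)      # a one-element tuple

print(type(not_a_tuple).__name__)  # str
print(type(a_tuple).__name__)      # tuple
```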
@hfrog713 It's hard to really provide any helpful information without knowing what "different results" means. Are you getting different EMA values? Are you getting different buy/sell signals from the crossover indicator? Are the signals the same but they result in different orders?
Additionally, without the full script and the data file, it's going to be hard to try it out locally. I ran your script against some AAPL prices and it just generates one sell signal during the period of test. Is that right or wrong? Is it consistent with what TradeView would show? It's hard for me to know.
@li-mike I don't think BT supports "boxed" positions (simultaneous long and short in the same security) but I would think you could fake it by adding the same feed for the security twice and just treating one of them as the long and the other as the short.
@kjiessar You are always receiving data in "bars". A bar is a summary of all the trades over a certain time interval. Each interval will have its own Open, High, Low and Close. You receive the bar in
next() just after the interval is complete so the Close is never in the future, it always just happened. The Open will be further in the past by the interval of time of that bar.
Typically if you are using daily data, then
next() is being called at the end of the trading day. If you place an order, the first opportunity for it to get filled is at the next days's Open price.
@ruiarruda said in Transactions happen for times that don't exist in source data:
However, the most confusing issue still stands: the RSI indicator, which I initialize but do NOT use (intentionally, at least), is somehow necessary for the code to run (if you comment it out on your code, the code breaks).
Here is the issue
self.interval_low = min(self.data.low.get(size=3))
In
next you are trying to access the last 3 low values but in the first call to next, there is only one value in
self.data.low at this point. When you leave in the line for the RSI indicator, BT buffers the first 14 data points before calling
next so the RSI indicator will be available (whether you access it or not).
You could change the offending line to:
self.interval_low = min(self.data.low.get(size=min(3, len(self.data.low))))
or just create some indicator with a minimum period of 3 to work around this.
@dizzy0ny I think you'd need a custom feed derived from
PandasData like in my example with all the additional "lines". But I suppose you could create one that just dynamically adds the lines based on the columns in your DataFrame. But doing it explicitly isn't that complicated, as my example shows.
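The dynamic variant could lean on Python's `type()` to build the feed class from the DataFrame's extra columns. The sketch below stubs `bt.feeds.PandasData` with a plain class so the mechanism is visible without backtrader installed; with backtrader you would pass the real base class instead, and the column names here are hypothetical:

```python
class PandasDataStub:
    """Stand-in for bt.feeds.PandasData (assumption: swap the real base in)."""
    lines = ()
    params = ()

def make_feed_class(base, extra_columns):
    # One extra line per column; the -1 param value mirrors backtrader's
    # "autodetect this column by name" convention.
    return type('DynamicPandasData', (base,), {
        'lines': tuple(extra_columns),
        'params': tuple((c, -1) for c in extra_columns),
    })

Feed = make_feed_class(PandasDataStub, ['vwap', 'trades'])  # hypothetical columns
print(Feed.lines)   # ('vwap', 'trades')
print(Feed.params)  # (('vwap', -1), ('trades', -1))
```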
As for the vector vs. event-based backtesting, I think it's probably true that a vector-based approach is more powerful in some ways and bound to be a lot faster, but I think there is some logic that is easier to express in an event-driven approach which might lead to fewer mistakes (although I'm speculating a bit). The event-driven approach is also "safer" in that you can't cheat and accidentally look at future values. Finally, Backtrader makes is pretty straightforward to switch from backtesting to live trading. That might be more challenging with a vector-based system.
BTW, here's a vector-based Python backtesting project I found that looks interesting: vectorbt
@new_trader Here is how limit orders work:
Hope that helps.
@punajunior It looks like the date format in your file is
"%m/%d/%Y" but you specified
"%Y-%m-%d" in your script. Try changing the
dtformat argument of
GenericCSVData.
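You can sanity-check a format string against a sample row with the stdlib before wiring it into the feed (the date value below is hypothetical):

```python
from datetime import datetime

sample = '05/22/2012'  # first date field from the CSV (hypothetical value)

# The format that matches the file parses cleanly...
print(datetime.strptime(sample, '%m/%d/%Y'))  # 2012-05-22 00:00:00

# ...while the mismatched one raises ValueError.
try:
    datetime.strptime(sample, '%Y-%m-%d')
except ValueError as e:
    print('wrong dtformat:', e)
```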
@catvin When you create expressions using lines and indicators, you need to use operations that Backtrader understands for those operations. It supports the basic arithmetic operators like
+, -, *, / but doesn't know about
math.log. But there is a helper indicator called
ApplyN that can apply a function across a line to create a new indicator, which fits your case nicely. The only caveat is that
ApplyN generally works with a list of values over a period, not a scalar, so below I've used a
lambda to just pass the one value in that period to the
math.log function:
self.lines.x = bt.ind.ApplyN(self.close, func=lambda c: math.log(c[0]), period=1)
If you look at the implementation of
ApplyN here, (which derives from
BaseApplyN, which itself derives from
OperationN). You can see that it simply just calls the function in
next like you were originally doing!
def next(self):
    self.line[0] = self.func(self.data.get(size=self.p.period))
So
ApplyN is really just syntactic sugar for your original implementation. But I think it's more readable.
@kuky I believe BT assumes the timestamp of a bar is for the end of that time interval. Thus at the simulator time of 22:30, you are seeing the 5m bar that ended at 22:30 (i.e. 22:25-22:30) and the 1h bar that ended at 22:00 (which is the last complete 1h bar at this point).
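The same end-of-interval convention can be checked with a little timestamp arithmetic — a stdlib sketch of the bookkeeping, not backtrader code:

```python
from datetime import datetime, timedelta

def last_complete_bar_end(now, minutes):
    """End time of the last fully completed bar of the given length."""
    midnight = now.replace(hour=0, minute=0, second=0, microsecond=0)
    elapsed = int((now - midnight).total_seconds() // 60)
    return midnight + timedelta(minutes=(elapsed // minutes) * minutes)

now = datetime(2021, 1, 4, 22, 30)
print(last_complete_bar_end(now, 5))   # 2021-01-04 22:30:00 -> the 22:25-22:30 bar
print(last_complete_bar_end(now, 60))  # 2021-01-04 22:00:00 -> the 21:00-22:00 bar
```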
@the-world Yes, it uses C# but it's a completely different system than BT so you can't compare the two simply on implementation language. It may have something to do with how they process historical data? (I'm guessing here). I'd have to try writing two identical algorithms in each to really understand how they compare and it's not something I've tried or plan to do.
@the-world LEAN is not as fast as BT in my experience but I personally don't think that's the most important characteristic of a backtesting environment. You'll end up spending more time figuring out how to implement, analyze and debug your strategy, so finding the tool that allows you to work productively is - to me - the most important quality of a backtesting platform.
I think LEAN's ease of use with Docker, integrated VS Code debugging and good documentation with a very active community are really attractive features, though.
@kjiessar In that analyzer,
self.rets does not track trades, it tracks
entries which is a list of position size, price, data name and proceeds. Additionally, the position already aggregates multiple executions for multiple trades in a given data name (in
notify_order), so there isn't an issue here using datetime as a key. It simply gives you a time series record of all position metrics at each bar in the analysis for all data names.
Here is the full function for context:
def next(self):
    # super(Transactions, self).next()  # let dtkey update
    entries = []
    for i, dname in self._idnames:
        pos = self._positions.get(dname, None)
        if pos is not None:
            size, price = pos.size, pos.price
            if size:
                entries.append([size, price, i, dname, -size * price])

    if entries:
        self.rets[self.strategy.datetime.datetime()] = entries

    self._positions.clear()
@sarah_james You can use order_target_percent(data, target=0.05) to buy 5% of your current portfolio value. Then when you want to sell, you can use order_target_percent(data, target=0.0).
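For intuition, the sizing arithmetic behind those two calls is roughly the following (a simplified pure-Python sketch; the real implementation also accounts for commission and available cash):

```python
def target_size(portfolio_value, price, target_pct, current_size=0):
    # Shares to trade so the position ends up worth target_pct of the
    # portfolio -- a simplified sketch of what order_target_percent
    # computes internally.
    desired = int(portfolio_value * target_pct / price)
    return desired - current_size

print(target_size(100_000, 50.0, 0.05))      # 100  -> buy 100 shares
print(target_size(100_000, 50.0, 0.0, 100))  # -100 -> sell the full position
```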
@jacksonx I would think you could create an Analyzer to do this. It has access to the strategy, and therefore all the lines within it, so you could calculate on each day your margin requirement and then at the end of the analysis, sum up the total margin cost. You would need to incorporate that into your net P&L, though.
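As a sketch of the per-bar accumulation such an Analyzer would do (pure Python; the margin and financing rates below are made-up placeholders, and inside an Analyzer you would read size and price from the strategy each bar):

```python
def total_margin_cost(sizes, prices, margin_rate=0.5, daily_financing=0.0001):
    # Per-bar margin requirement with a financing charge on it,
    # accumulated over the run. Rates are illustration values only.
    total = 0.0
    for size, price in zip(sizes, prices):
        requirement = abs(size) * price * margin_rate
        total += requirement * daily_financing
    return total

print(total_margin_cost([100, 100, 0], [50.0, 51.0, 52.0]))  # 0.505
```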
The other approach would be a custom broker (derived from the standard BackBroker) that only overrides the commission calculations, but that might be more work, as it doesn't appear that brokers were designed to be as easily extensible as some of the other classes.
@auth87 Yes, the prenext method in Strategy will be called each time and you can simply call next from it. But you then need to check which of the data items in self.datas have data ready. Something like:
def prenext(self):
    self.next()

def next(self):
    available = [d for d in self.datas if len(d)]
    # do something with available list...
@sky_aka41 I think you are pretty close. I tried this all in a strategy (not a custom indicator) and it seems to be working. Note I changed the prob_change and probability expressions slightly.
class St(bt.Strategy):
    params = (("momLength", 34), ("maLength", 55), ("pLength", 13))

    def __init__(self):
        self.momentum = bt.ind.Momentum(self.datas[0].close, period=self.p.momLength)
        self.accelerate = bt.ind.Momentum(self.momentum, period=self.p.momLength)
        self.prob_change = bt.ind.Momentum(self.datas[0].close, period=1) > 0
        self.probability = bt.ind.SumN(self.prob_change) / self.p.pLength
        self.adjusted_close = self.datas[0].close + (self.momentum + self.accelerate * 0.5) * self.probability
        self.lines.MaMA = bt.ind.SMA(self.adjusted_close, period=self.p.maLength)
@brunof I think Option 2 sounds pretty reasonable. It may seem a bit kludgy to replace the Open field with a different value, but I looked at the execute method in BackBroker and it's pretty involved, so subclassing and overriding would be complicated. Note you could replace the Open with whatever you like: VWAP, the first tick 10 seconds after the open, etc.; whatever you think is a reasonable fill price.
@brunof This sounds to me like an issue with the source data bars, not with Backtrader. If you are truly aggregating tick data into bars, then a given tick would only be included in one bar. So if the close price of one bar is the same as the open price of the next bar, that would only occur if there were two separate ticks at the same price (which is quite common, btw). It would be interesting to look at the underlying tick data for some of your ten-minute bars and see if they are consistent with what you'd expect.
I think Backtrader's assumption of filling at the open of the next bar is as good of an assumption as you can make. If indeed, there were two ticks at the same level that crossed the boundary of a bar, then you would expect to get filled at the same level as the previous bar's close.
If you want a much more realistic simulation of execution behavior on an exchange, I think you'd ultimately need to use tick data.
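The point about tick aggregation can be illustrated with a toy aggregator (count-based here for brevity, but the bar-boundary behavior is the same):

```python
def ticks_to_bars(ticks, ticks_per_bar):
    # Aggregate time-ordered tick prices into bars; each tick lands in
    # exactly one bar (a toy stand-in for time-based aggregation).
    bars = []
    for i in range(0, len(ticks), ticks_per_bar):
        chunk = ticks[i:i + ticks_per_bar]
        bars.append({"open": chunk[0], "high": max(chunk),
                     "low": min(chunk), "close": chunk[-1]})
    return bars

ticks = [10.0, 10.1, 10.2, 10.2, 10.3, 10.1]
bars = ticks_to_bars(ticks, 3)
# bars[0]["close"] == bars[1]["open"] == 10.2 only because two separate
# ticks happened to print at the same price across the bar boundary.
print(bars)
```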
@jialeen You can use order_target_percent(data, target=1) to have BT use 100% of the portfolio value for the size of the order.
Answered by:
How to redirect Infopath form programmatically on button click
(well except the image somehow, but that's not the problem)
using Microsoft.Office.InfoPath;
using System;
using System.Xml;
using System.Xml.XPath;
using System.Web;
// get the text field entry
string out_URL = MainDataSource.CreateNavigator().SelectSingleNode("/my:myFields/my:textURLField", NamespaceManager).Value;
// now go to that URL
HttpContext.Current.Response.Redirect(out_URL, false); --> not working for a complex address, but works for a simple domain ()
PS: form1 & form2 are both Forms compatible + FullTrust security.
- Edited by Mike Walsh FIN Thursday, April 08, 2010 3:22 AM Shouting removed
Question
Answers
I prefer to use hyperlinks in such cases.
bg Andrej Salnik
All replies
Hi Andrej,
Thanks for the Reply!
I have a slight change in requirement. Could you help please..
I am trying to open up a Report(SSRS) from Infopath form on button click. the url i am using is as follows.
string strurl = "";
After this code executes, a warning message from IE comes up saying 'This page is trying to access information that is not under its control. It poses a security risk. Do you want to continue?'
When I click Yes, I get a 'Critical Error' pop-up with the options 'Start Over' and 'Exit'.
Other URLs like '', '' open up properly from the InfoPath form itself on button click.
However, the same URL 'strurl' opens up the PDF correctly when I enter it directly into the browser.
Any sort of help would be appreciated !
Ritesh
- Hi All, I have this exact same problem. When submitting my form, after the submit event I am using an HTTP redirect to change to another, complex address. When I set it to change to google.com, it works fine. But when I set it to a long, complex address, I get a form error. Any solutions?
I can modify the source, Andrej, but the problem is that I need it to be dynamic. The form should redirect to slightly different URLs based on what the user enters in the form.
Basically, upon submitting the form, the user should be redirected to a separate newform.aspx page. On this page I have implemented some custom JavaScript to take in query parameters and automatically populate the form fields.
For example, newform.aspx?Title=New&Description=NewUser will automatically set the Title and description fields.
This is my code:
Public Sub FormEvents_Submit(ByVal sender As Object, ByVal e As SubmitEventArgs)
Dim spConn As DataConnection = DataConnections("Main submit")
spConn.Execute()
e.CancelableArgs.Cancel = True
Dim mainNav As XPathNavigator = Me.CreateNavigator
Dim first As String = mainNav.SelectSingleNode("/my:newEmployee/my:employee/my:firstName", NamespaceManager).Value
Dim last As String = mainNav.SelectSingleNode("/my:newEmployee/my:employee/my:lastName", NamespaceManager).Value
Dim fullName As String = first + last
Dim rURL As String = String.Empty
rURL = ""
Dim finalURL As String = String.Empty
finalURL = String.Concat(rURL, "?Item%20Description=", fullName)
HttpContext.Current.Response.Redirect(finalURL, False)
And when it runs it simply throws an error. It does submit, just can't redirect. If I use instead of finalURL, it redirects fine...
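One thing worth ruling out when a hand-concatenated redirect URL works for simple addresses but not complex ones is query-string encoding: spaces and special characters in the parameter value (here the concatenated name) must be escaped. Sketched in Python for illustration only; the path below is hypothetical:

```python
from urllib.parse import urlencode

base = "http://intranet.example.com/Lists/Employees/newform.aspx"  # hypothetical path
params = {"Item Description": "John Smith"}
final_url = base + "?" + urlencode(params)  # urlencode escapes the space
print(final_url)
```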
I have more information relating to this problem.
It seems the use of HttpContext.Current.Response.Redirect works perfectly, when the URL it is going to is in the Internet Zone.
I only experience InfoPath form errors when trying to redirect to a site that is part of my Local Intranet Security Zone.
For example, I am trying to redirect from a form on my intranet, to another form on my intranet, and it is throwing an error. Redirecting to an external site, (ANY), works perfectly...
Hello! I know this is an old thread, but I am having a difficult time figuring out how to do this. I have a web-enabled InfoPath form. After it's completed, I submit it to a library, but I also generate a dynamic URL, to which I need to redirect the user either on button click or using a hyperlink. The hyperlink works great, EXCEPT it opens in a new browser tab. This is not acceptable. How can I open it in the same window, OR close the form tab when it goes to the new URL?
thanks!
Hello Andrej,
I have a similar kind of scenario and need your help with a resolution.
Condition: on clicking the submit button within InfoPath, I need to redirect to another page (any page other than the current one, such as an external site or a page within the current site collection).
I have tried applying multiple rules on the submit button, but could not accomplish it the way I wanted.
Do I need to add extra scripts for the redirection, or get the submit control's ID and then customize it thereafter?
Adding a hyperlink as an extra field will not work, as it opens a new tab instead of changing the page in the same browser tab. It also adds an extra burden on my list, since its value must be included in each and every list item.
Likewise, if I add hardcoded text within InfoPath, that also leads to a new tab, which does not serve my purpose.
If there is a way to redirect after submission to another page in the same tab, it would be quite helpful. Please let me know your suggestions or feedback for better understanding in this regard.
Thanks for your support anyway.
- Edited by TomTomPune Thursday, June 02, 2016 5:04 AM
Hello,
I got another link from MSDN only.
Hope it will help everyone.
Feel free to share your feedback on the same. | https://social.msdn.microsoft.com/Forums/en-US/d83e7c6e-f01f-42c9-94fd-20ca78db27d8/how-to-redirect-infopath-form-programmatically-on-button-click?forum=sharepointcustomizationlegacy | CC-MAIN-2017-22 | refinedweb | 941 | 58.48 |
We are making a WebGL build with Unity for a Facebook game, using PlayFab services. In-app products are set up in the PlayFab console, and PlayFab is used as a third-party web payment method. Some of the in-apps work fine, but others return a 500 status code (Internal Server Error), and the pay popup does not open for them. All bundles are set up the same way in the console, and product IDs are set correctly on both the client and PlayFab ends. The same code is used for all bundle purchases except for the bundle ID itself. I'm struggling to find what could be causing the issue; if you can point me to how to resolve this, I would be really thankful.
Answer by Citrus Yan · Aug 03 at 02:52 AM
Hi @Muhammad Hassan & @Chanda Yadav,
We made a fix on this, currently it's waiting for merging and deploying, we'll let you know once it's deployed.
Hi @Muhammad Hassan & @Chanda Yadav
Those product URLs should be working fine now.
Yes cross checked and it works now. Thanks!
Answer by Citrus Yan · Jul 27 at 02:37 AM
What's your title id? And, which In-Apps are not working? Can you specify them?
Thank you for replying. Title Id is "64f85". Bundles not working are
Whereas these are the bundle which are working
These are the RM in-apps. I also tried to create a new in-app on server and client but that too gave me the same error (status code 500).
Are you using facebook as the payment method? And if so, did you follow this tutorial:
Moreover, which API call you made to PlayFab was returning the "Internal Server Error"?
Yes, that is the same tutorial I followed, and the flow is the same for all in-apps. As I mentioned earlier, the client-side code is the same for all of them. The "StartPurchase" API call sends a successful callback, but in that callback, when we call "Facebook.Canvas.Pay", that statement returns an error for some in-apps. I have put debug logs before and after the statement to confirm. I have attached the snippet.
Citrus, you're right: the error is returned by Facebook.Canvas.Pay. I have already tried the thread you linked, but it didn't work out. What else would you suggest?
Answer by Muhammad Hassan · Jul 28 at 02:05 PM
I have had a word with the Facebook support team and they asked for the product URLs for the in-apps, so I shared these
All of these return a 404 error code when hit directly through a browser. Should they not return a valid (20x) status code? The Facebook support team has recommended reaching out to the PlayFab support team!
Hi, we're currently working with the engineering team to help investigate this issue; we'll keep you informed of any updates.
I have attached the reply I have received from the facebook support team if it may help
Thanks for the info:)
Answer by Muhammad Hassan · Aug 03 at 02:02 AM
Any update Citrus?
Answer by Chanda Yadav · Aug 03 at 02:02 AM
Hi Citrus,
Could you please update us on the investigation. Looking forward to hear from you.
Thank you
Chanda
Answer by Muhammad Hassan · Aug 13 at 07:22 PM
Hi Citrus, there appears to be an issue with the purchases once again: it's showing the wrong price for in-apps now. I have also tested the in-app URLs with the Graph API Explorer "" provided by Facebook. The prices shown for these bundles are incorrect and identical, and the Facebook Canvas pay popup also shows the wrong title names, while the Graph API Explorer shows the correct title but wrong prices. I have tried making changes, but the price does not update at all; only the title change is reflected in the Graph API Explorer.
While below ones are working fine.
Could you please check? It was working fine few days ago after the fix but just today this has started to happen.
I can see that you overrode the price in the Store:
That may be the reason why you were getting the wrong price.
And, about the wrong title name issue: I can see that it is returned correctly from the product's OpenGraphURL. Maybe Facebook cached the old data and you modified the item later on; in that case you may force Facebook to re-scrape that object to update Facebook's cache.
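For reference, the re-scrape can also be triggered programmatically via the Graph API by POSTing the object URL with scrape=true. The sketch below only constructs the request (no network call is made), and the product URL and token are placeholders:

```python
from urllib.parse import urlencode

def rescrape_request(object_url, access_token):
    # Facebook's re-scrape endpoint: POSTing id=<url>&scrape=true to the
    # Graph API root forces Facebook to refresh its cached Open Graph
    # data for that URL.
    endpoint = "https://graph.facebook.com/"
    body = urlencode({"id": object_url, "scrape": "true",
                      "access_token": access_token})
    return endpoint, body

endpoint, body = rescrape_request("https://example.com/products/bundle1",
                                  "APP_ACCESS_TOKEN")  # placeholder values
print(endpoint, body)
```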
Answer by Muhammad Hassan · Aug 14 at 07:01 AM
Thanks Citrus. Yes, rescraping did work!
Hey @Citrus Yan, we are facing the same issue once again, and this time with all the bundles listed in the in-apps. All in-apps are being scraped as "website" when they should be "product". I have attached a screenshot of the console log. I tried with Postman and the Facebook Graph Explorer and always get type: website.
Screenshot
Hi @Muhammad Hassan,
I opened a new thread here, I will work with you in that channel.
CLI in C++: The Ideal SolutionSunday, June 28th, 2009
This is the third installment in the series of posts about designing a Command Line Interface (CLI) parser for C++. The previous posts were:
Today I would like to explore the solution space and get an idea about what the ideal solution might look like.
Using the terminology introduced in the previous post, an application may need to access the following three objects that result from the command line parsing: commands, options, and arguments. Both commands (or, usually, just one command) and arguments are homogeneous arrays of strings. It is normally sufficient to present them to the application as such, either directly in the argv array by identifying their start/end positions or as separate string sequences. Options, on the other hand, are a more interesting problem.
If we start thinking about the form in which we could make the parsed options information available to our applications, several alternatives come to mind. In a very simple application we might have a variable (global or declared in main()) for each option. The CLI parser then sets these variables to the values provided in the command line. Something along these lines:
bool help = false;
bool version = false;
unsigned short compression = 5;

int main (int argc, char* argv[])
{
  cli::parser p;
  p.option ("--help", help);
  p.option ("--version", version);
  p.option ("--compression", compression);
  p.parse (argc, argv);

  if (help)
  {
    ...
  }
}
The major problem with this approach is that it does not scale to a more modularized design. In such applications each module may have a specific set of options. For example, in the XSD and XSD/e compilers the compiler driver, frontend, and each code generator has a unique set of options. Placing the corresponding variables all in the global namespace is cumbersome. They are more naturally represented as member variables in the corresponding module classes.
Of course, nothing prevents us from parsing directly into member variables using the above solution. However, it requires that all the classes that hold option values be instantiated before command line parsing can begin. This creates a chicken and egg problem since these classes often need the option values in their constructors. The only way to resolve this problem with the above approach is to first parse the options into temporary variables which are then used to initialize the modules. Here is an example:
struct compressor
{
  compressor (unsigned short level);
};

int main (int argc, char* argv[])
{
  bool help = false;
  bool version = false;
  unsigned short compression = 5;

  cli::parser p;
  p.option ("--help", help);
  p.option ("--version", version);
  p.option ("--compression", compression);
  p.parse (argc, argv);

  compressor c (compression);
}
Another drawback of this approach is the need to repeat each option name twice: first as the variable name (e.g., help) and then as the option name (e.g., "--help"). Furthermore, in the case of global variables, there are two distinct places in the source code where each option must be recorded: first as the variable name and then as the call to option(). In non-trivial applications the global option variables would most likely also be declared as extern in a header file so that they can be accessed from other modules. This brings the number of places where each option is recorded to three.
int main (int argc, char* argv[])
{
  cli::parser p;
  p.option<bool> ("--help");
  p.option<bool> ("--version");
  p.option<unsigned short> ("--compression", 5);

  cli::options o (p.parse (argc, argv));

  if (o.value<bool> ("--help"))
  {
    ...
  }
}
There are a number of drawbacks with this interface. The first is the use of strings to identify options. If we misspell one, the error will only be detected at runtime. The second drawback is the need to specify the value type every time we access the option value. Then we have the verbosity problem as in the previous approach. Option names and option types are repeated in several places in the source code which makes it hard to maintain.
The alternative interface design would be to have an individual accessor for each option. Something along these lines:
struct options: cli::options
{
  options ()
    : help_ (false), version_ (false), compression_ (5)
  {
    // The option() function is provided by cli::options.
    //
    option ("--help", help_);
    option ("--version", version_);
    option ("--compression", compression_);
  }

  bool help () const;
  bool version () const;
  unsigned short compression () const;

private:
  bool help_;
  bool version_;
  unsigned short compression_;
};

int main (int argc, char* argv[])
{
  cli::parser<options> p;
  options o (p.parse (argc, argv));

  if (o.help ())
  {
    ...
  }
}
While we have solved all the problems with accessing the option values, the declaration of the options class is very verbose. For each option we repeat its name five times, plus we have to manually implement each accessor, initialize each option variable with its default value, and register each option with cli::options. We could automate some of these steps by using functor objects to store the option values as well as implement the accessors, for example:
struct options: cli::options
{
  options ()
    : help (false), version (false), compression (5)
  {
    option ("--help", help);
    option ("--version", version);
    option ("--compression", compression);
  }

  cli::option<bool> help;
  cli::option<bool> version;
  cli::option<unsigned short> compression;
};
We could also get rid of the explicit calls to the option() function by making the cli::option object automatically register with the containing object (we would need to use a global variable or a thread-local storage (TLS) slot to store the current containing object). Here is how the resulting options class could look:
struct options: cli::options
{
  options ()
    : help (false, "--help"),
      version (false, "--version"),
      compression (5, "--compression")
  {
  }

  cli::option<bool> help;
  cli::option<bool> version;
  cli::option<unsigned short> compression;
};
With this approach we have reduced the number of option name repetitions from five to three.
How does the above approach address the issue of modularized applications that we brought up earlier? One alternative would be to have the corresponding member variables added manually to module classes and then initialized with values from the options object. For example:
struct compressor
{
  compressor (unsigned short level)
    : level_ (level)
  {
  }

private:
  unsigned short level_;
};

int main (int argc, char* argv[])
{
  cli::parser<options> p;
  options o (p.parse (argc, argv));

  compressor c (o.compression ());
}
Alternatively, we could use the options object directly by inheriting the module class from it. For that, however, we would also need to split the options object into several module-specific parts, for example:
struct compression_options: virtual cli::options
{
  compression_options ()
    : compression (5)
  {
    option ("--compression", compression);
  }

  cli::option<unsigned short> compression;
};

struct compressor: private compression_options
{
  compressor (const compression_options& o)
    : compression_options (o)
  {
  }
};

struct options: compression_options
{
  options ()
    : help (false, "--help"),
      version (false, "--version")
  {
  }

  cli::option<bool> help;
  cli::option<bool> version;
};

int main (int argc, char* argv[])
{
  cli::parser<options> p;
  options o (p.parse (argc, argv));

  compressor c (o.compression ());
}
At this point it appears that we have analyzed the drawbacks of all the practical approaches and can now list the properties of an ideal solution:
- Aggregation: options are stored in an object
- Static naming: option accessors have names derived from option names
- Static typing: option accessors have return types fixed to option types
- No repetition: the option name and option type are specified only once for each option
With these properties figured out, next time we will examine the drawback of the existing solutions, namely the Program Options library from Boost as well as my previous attempt at the CLI library which is part of libcult. As usual, if you have any thoughts, feel free to add them as comments. | http://codesynthesis.com/~boris/blog/2009/06/ | CC-MAIN-2017-13 | refinedweb | 1,309 | 51.48 |