9 ways to level up your JavaScript code
var a = 2,
b = a + 2,
c = b - 3;
var a = 2;
var b = a + 2;
var c = b - 3;
- Variables can easily be moved from line to line without having to worry about the commas.
var fruit = "apple";
var fruit = ["apple", "pear"]; // namespace collision
window.Basket = window.Basket || {};
Basket.fruit = "apple";
window.Stock = window.Stock || {};
Stock.fruit = "apple";
var Basket = (()=>{
var items = [];
return {
addItem: ()=> {},
removeItem: (index)=> {}
};
})();
- The closure created by the Immediately Invoked Function Expression (IIFE) provides privacy, e.g. the items array is only accessible through the returned methods.
/** Class representing a point. */
class Point {
/**
 * Create a point.
 * @param {number} x - The x value.
 * @param {number} y - The y value.
 */
constructor(x, y) {
// ...
}
}
- Allows developers to get a deeper understanding of what is happening in your code
- Can be used to generate online documentation guides
- The industry standard is JSDoc
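As a complete illustration of the style, here is a JSDoc-documented function with a return annotation (the add function is a made-up example, not from the article):

```javascript
/**
 * Add two numbers.
 * @param {number} a - The first addend.
 * @param {number} b - The second addend.
 * @returns {number} The sum of a and b.
 */
function add(a, b) {
  return a + b;
}

add(2, 3); // 5
```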
- The Observer pattern helps your modules communicate to each other through events thus providing loose coupling in your app.
- The Mediator pattern takes control over a group of objects by encapsulating how these objects interact. E.g: a basket that handles items.
- The Iterator pattern is the underlying pattern for ES2015 generators. Useful to know what's actually going on :D
- The Facade pattern provides a simple interface that shields the end user from complex functionality. E.g: an Email module with simple methods such as start, stop and pause.
Not only are these patterns solutions to commonly encountered problems, they are a way of describing application structure to other developers fairly simply. E.g: "The basket module is a mediator for all the store items; it communicates to the payment module via an observer".
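To make the first of these concrete, here is a minimal sketch of the Observer pattern; the EventBus name and its API are invented for this example, not taken from the article:

```javascript
// Minimal Observer (pub/sub) sketch; names are illustrative only.
var EventBus = {
  topics: {},
  subscribe: function (topic, handler) {
    (this.topics[topic] = this.topics[topic] || []).push(handler);
  },
  publish: function (topic, data) {
    (this.topics[topic] || []).forEach(function (handler) {
      handler(data);
    });
  }
};

// A module reacts to events without knowing who fired them:
EventBus.subscribe('item:added', function (item) {
  console.log('Added ' + item);
});
EventBus.publish('item:added', 'apple'); // logs "Added apple"
```

Because publisher and subscriber only share a topic name, either side can be swapped out without touching the other, which is the loose coupling described above.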
function colorWidget(element, colorValue, colorRange, colorFormat, opacity, onChange) {
}
colorWidget($("<div/>"), "#fff", ...);
function colorWidget(options) {
var config = $.extend({
element: $("<div/>"),
colorValue: "#000",
colorRange: [0, 255],
colorFormat: "rgb",
opacity: 0.8,
onChange: function(){}
}, options);
}
colorWidget({
element: $("<div/>")
});
- Allows new arguments to be added easily
- The developer doesn’t have to worry about the order of the arguments
- Allows the setting of default values easily
This is most commonly seen in the creation of jQuery plugins.
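For a jQuery-free variant of the same pattern, Object.assign (ES2015) can merge caller options over defaults; this sketch trims the example down to two settings for brevity:

```javascript
function colorWidget(options) {
  // Merge caller options over the defaults.
  var config = Object.assign({
    colorValue: '#000',
    opacity: 0.8
  }, options);
  return config;
}

colorWidget({ opacity: 1 }); // { colorValue: '#000', opacity: 1 }
```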
var x1 = new Object();
var x2 = new String();
var x3 = new Number();
var x4 = new Boolean();
var x5 = new Array();
var x6 = new RegExp("()");
var x7 = new Function();
var x1 = {};
var x2 = "";
var x3 = 0;
var x4 = false;
var x5 = [];
var x6 = /()/;
var x7 = function(){};
- Creation through type constructors is significantly slower.
- Because the end result of the constructor is an Object rather than a primitive value you get nasty side effects like so:
var a = new String("x");
a === "x"; // false
a == "x"; // true
var b = "dog";
b.woof = true;
b.woof; // undefined
var c = new String("dog");
c.woof = true;
c.woof; // true
var Button = {
count: 0,
click: function() {
this.count += 1;
},
init: function() {
$("button").click(this.click);
}
};
Button.init();
At a glance this should work, however when a user clicks the button, count will not increment as expected because this.count is undefined in the handler. This is because the click function is executed in the context of the $("button") element instead of the Button object. We can fix this by binding the context to the function:
$("button").click(this.click.bind(this));
// or
$("button").click(()=> this.click());
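The effect of bind can be seen outside the DOM as well; this standalone sketch (the counter object is made up for illustration) runs in plain Node:

```javascript
var counter = {
  count: 0,
  increment: function () {
    this.count += 1;
  }
};

// Detaching the method would lose its context; bind pins it back.
var bound = counter.increment.bind(counter);
bound();
counter.count; // 1
```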
The apply() method calls a function with a given this value and arguments provided as an array (or an array-like object). — MDN
Some useful instances of using apply:
// emulating "super" in a constructor
SomeConstructor.prototype.somemethod.apply(this, arguments);
// passing an array of promises to jQuery.when
$.when.apply($, [$.get(), $.get()]);
// finding the max number in an array
Math.max.apply(Math, [1, 2, 3, 4, 5]);
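In ES2015+ code, spread syntax often replaces apply for the array-spreading use case; a small sketch:

```javascript
// Same result as Math.max.apply, without an explicit this argument:
Math.max(...[1, 2, 3, 4, 5]); // 5

// Spreading also flattens one level when concatenating:
[].concat(...[[1, 2], [3, 4]]); // [1, 2, 3, 4]
```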
Contributed by: Russley Shaw
let allows you to declare variables that are limited in scope to the block, statement, or expression in which they are used. Let's look at a few use cases where this is useful over var statements:
var elements = document.querySelectorAll('p');
for (var i = 0; i < elements.length; i++) {
// have to wrap in an IIFE, otherwise i would be elements.length
(function(count){
elements[count].addEventListener('click', function(){
elements[count + 1].remove();
});
})(i);
}
// using let
let elements = document.querySelectorAll('p');
for (let i = 0; i < elements.length; i++) {
elements[i].addEventListener('click', function(){
elements[i + 1].remove();
});
}
The reason the top example needs the IIFE is that i is referenced directly, so by the time a click handler runs, i has already been incremented to elements.length. When we wrap the code in an IIFE, we capture the current value of i in the parameter count, thus removing the direct reference.
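The same per-iteration binding can be shown without the DOM; in this sketch each closure captures its own i:

```javascript
var fns = [];
for (let i = 0; i < 3; i++) {
  // each iteration gets a fresh binding of i
  fns.push(function () { return i; });
}

fns.map(function (f) { return f(); }); // [0, 1, 2]
```

With var instead of let, every function would return 3, the loop's final value.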
const allows the declaration of variables that cannot be reassigned. This is useful for declaring constants (which is where the keyword originates).
const APIKEY = '2rh8ruberigub38rt4tu4t4';
const PI = 3.14;
However, objects and arrays can still be accessed and changed. To stop this, use Object.freeze():
const APICONFIG = Object.freeze({
'key': '8ry4iuggi4g38430t5485jmub',
'endpoint': '/some/boring/api/path/'
});
APICONFIG.key = null; // attempt to change
APICONFIG.key; //= '8ry4iuggi4g38430t5485jmub'
const EVENS = Object.freeze([ 2, 4, 6, 8]);
EVENS[2] = 9;
EVENS[2]; //= 6
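One caveat worth knowing: Object.freeze is shallow, so nested objects remain mutable. A small sketch (the APICONFIG2 name is made up for this example):

```javascript
const APICONFIG2 = Object.freeze({
  key: 'abc',
  nested: { retries: 3 }
});

// Not blocked: only the outer object is frozen.
APICONFIG2.nested.retries = 5;
APICONFIG2.nested.retries; // 5
```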
The only reason to avoid doing this is when the variable is allowed to be false. Take a look at the example below:
var msg = ''; //= should hide the button
var btnMsg = msg || 'Click Me';
btnMsg; //= 'Click Me'
The reason this happens is the conversion of the '' into a Boolean, which returns false. As '' counts as false, the or comparison returns the other side, 'Click Me'.
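The coercion described above can be verified directly:

```javascript
// '' is falsy, so || falls through to its right-hand side:
Boolean('');        // false
'' || 'Click Me';   // 'Click Me'
'Hi' || 'Click Me'; // 'Hi'
```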
If you want to have shorthand if statements you can use the ternary operator:
var msg = ''; //= should hide the button
var btnMsg = msg.length < 1 ? msg : 'Click Me';
btnMsg; //= ''
Source: http://brianyang.com/9-ways-to-level-up-your-javascript-code/
Introduction: Beargardian
Hey guys, for school I needed an idea for a project. It had to be a project with the Raspberry Pi that runs locally. Suddenly I had a great idea, and don't ask me how I got it, but I thought about an upgrade for a baby monitor. Just think about that idea for a second: most baby monitors only have the function to listen to the baby's room.
The features
- A little light show with adjustable colors
- A camera that shows you live images
- A speaker to play music
- Sensors to capture the baby's movement
- All that showing on a website
Short information
Let me explain this in a short version. We need a website, and for this project I'm using Flask; we also need a database, for which I'm using MySQL; a script that runs the hardware, written in Python 3; and lastly a server setup, which will be nginx on the Pi.
What do we need
- The Raspberry Pi 3
- The stepper motor 28BYJ
- The stepper motor driver chip ULN2003 stepper module
- An RGB LED with three 330 Ω resistors
- The Pi NoIR camera V2
- The ultrasonic sensor HC-SR04
- The microphone module from Arduino
- The MAX98357A
- An 8 Ω speaker
- And don't forget to buy a bear
Setup Raspberry Pi
First we need to set up the Pi. Start by logging in via PuTTY; if you don't have PuTTY, I recommend downloading it. Simply connect over ssh to the static IP of the Pi and you're good to go. If you still need to install the operating system on your Raspberry Pi, I've got bad news: I'm not explaining that in this project.
Install packages
sudo apt update
sudo apt install -y python3-venv python3-pip python3-mysqldb mysql-server uwsgi nginx uwsgi-plugin-python3
Virtual environment
python3 -m pip install --upgrade pip setuptools wheel virtualenv
mkdir {your project foldername} && cd {your project foldername}
python3 -m venv --system-site-packages env
source env/bin/activate
python -m pip install mysql-connector-python argon2-cffi Flask Flask-HTTPAuth Flask-MySQL passlib
Now you have to clone the git repository in your project folder
If you look in your project folder you should see 5 folders:
- conf
- env
- sensor
- sql
- web
Database
sudo systemctl status mysql
ss -lt | grep mysql
sudo mysql
Create a user in the database with all privileges and create your database:
create user 'user'@'localhost' identified by 'password';
create database yourdatabasename;
grant all privileges on yourdatabasename.* to 'user'@'localhost' with grant option;
Conf files for server
In the uwsgi-flask.ini you change 'module = ...' to 'module = web:app' and set the path to the virtualenv that you created. In the other files you need to change the paths to the actual absolute paths of your directory.
Once you figured that out you can set the files in the right place.
sudo cp conf/project1-*.service /etc/systemd/system/
sudo systemctl daemon-reload
sudo systemctl start project1-*
sudo systemctl status project1-*
Now we have to make this available:
sudo cp conf/nginx /etc/nginx/sites-available/project1
sudo rm /etc/nginx/sites-enabled/default
sudo ln -s /etc/nginx/sites-available/project1 /etc/nginx/sites-enabled/project1
sudo systemctl restart nginx.service
sudo nginx -t
If everything went well, you should get "hello world" with this command:
wget -qO - localhost
Done! Well, that's it for the part that gets your system running.
Step 1: Wiring the Hardware to the Pi
using BCM
audio MAX98357A
- BCK to GPIO 18
- Data to GPIO 21
- LRCK to GPIO 19
light
- red to GPIO 17
- green to GPIO 27
- blue to GPIO 22
motor module ULN2003
- pin 1 to GPIO 5
- pin 2 to GPIO 6
- pin 3 to GPIO 13
- pin 4 to GPIO 26
micro
- D0 to GPIO 21
ultrasonic sensor
- trig to GPIO 16
- echo to GPIO 20
Step 2: Coding the Main Programs
I'm not getting into details right here, but you can check out my code on GitHub.
To begin with I made my HTML and CSS: an index, login, register, homescreen, music, addmusic, addbear, light, camera, camerasettings, sensor and dashboard page. The HTML files have to be in the templates folder and the CSS files in the static/css folder. You can fully customize the CSS as you wish.
Once you've done this part, you need to set up Flask. Flask is easy to use; here is a hello-world-style example:
# import flask at first
from flask import *

app = Flask(__name__)

@app.route('/')
def index():
    return render_template('index.html')
Now in my code this is already filled in; the only thing you need to do is change the database user and password to your own, and of course create the same database, which you can also find on GitHub.
Step 3: Creating the Database
For the real fans I'm going to tell you how to create the same database.
So first we need to create the database, if you didn't already do this during setup.
create database beargardian;
Once you've done this, create the tables in MySQL Workbench or phpMyAdmin.
user table has
- userID
- babyname
- password with sha1
- userfolder
- playmusic (int)
- playlight (int)
- playrecording (int)
music table has
- musicID
- song
- path
- userfolder
- status
- volume
recording table has
- recordingID
- path
- userfolder
- time
- day
color table has
- colorID
- red
- green
- blue
- brightness
- userID
bear table has
- bearID (decimal(8))
- userID default null
- bearname
sensor table has
- sensorID
- distance
- micro
- bearID
- time
- day
- sleeptime
So now you've created the database successfully; let's move on to the hardware.
Step 4: Hardware Coding
I'll be showing a little bit of code and tell you why I did it that way.
To begin with I used threading, which is an absolute must in this project. What is threading? Good question! Threading in Python lets you run multiple pieces of code at once. So while you change the color, for example, you can also record. It's easy to use, don't worry.
import _thread

def function_name(something, something_else):
    ...  # code to run

_thread.start_new_thread(function_name, tuple_with_the_functions_variables)
If you looked at my program you saw logger.info('...'). This works like the print function but much better, because when the script runs on the Pi you can't see printed output, so I create a file and log into that instead. You can set up the log file with this code:
import logging

logger = logging.getLogger(__name__)
logger.setLevel(logging.INFO)

# create a file handler
handler = logging.FileHandler('logger.log')
handler.setLevel(logging.INFO)

# create a logging format
formatter = logging.Formatter('%(asctime)s - %(name)s - %(message)s')
handler.setFormatter(formatter)

# add the handler to the logger
logger.addHandler(handler)

logger.info('start up hardware\n---------------------------------------')
Further on, in the code itself, I explain everything.
Step 5: Great Job
That's it ! You did it !
Source: https://www.instructables.com/Beargardian/
Std ViewIvStereoRedGreen
Next: ViewIvStereoQuadBuff
Description
The Std ViewIvStereoRedGreen command changes the active 3D view to red/cyan, anaglyph, stereo view mode. To use this stereo mode, glasses with colored lenses are required.
Usage
Preferences
- The eye to eye distance can be changed in the preferences: Edit → Preferences... → Display → 3D View → Eye to eye distance for stereo modes. See Preferences Editor.
Scripting
See also: FreeCAD Scripting Basics.
To change the view to red/cyan stereo use the setStereoType method of the ActiveView object. This method is not available if FreeCAD is in console mode.

import FreeCADGui
FreeCADGui.ActiveDocument.ActiveView.setStereoType('Anaglyph')
FreeCADGui.ActiveDocument.ActiveView.getStereoType()
Source: https://wiki.freecadweb.org/Std_ViewIvStereoRedGreen
Remez tutorial 3/5: changing variables for simpler polynomials
In the previous section, we introduced function g(x) and saw that RemezSolver looked for a polynomial P(x) and the smallest error value E minimising the weighted error, where g(x) is a function used as a weight to minimise relative error. We will now see that g(x) can be used for other purposes.
Odd and even functions
An odd function is such that in all points, f(-x) = -f(x). Similarly, an even function is such that f(-x) = f(x).
An example of an odd function is sin(x). An example of an even function is cos(x).
A reasonable expectation is that approximating an odd function over an interval centred at zero will lead to an odd polynomial. But due to the way the Remez exchange algorithm works, this is not necessarily the case. We can help the algorithm a little.
The example of sin(x)
Suppose we want to approximate sin(x) over [-π/2; π/2]:
We know sin(x) is an odd function, so instead we look for an odd polynomial. Odd polynomials are polynomials that only use odd powers of x, such as x, x³, x⁵…
We notice that an odd polynomial P(x) can always be rewritten as xQ(x²) where Q has the same coefficients. Our minimax constraint becomes:
Another observation is that replacing x with -x has no effect on the expression in the left. We can therefore restrict the range to [0; π/2]:
Now since x is always positive, we can at last perform our change of variable, replacing x² with y (and x with √y̅, since x is positive), and changing the range accordingly:
This is almost something RemezSolver can deal with, if only there wasn't this √y̅ in front of Q(y). Fortunately we can simply divide everything by √y̅:

That's it! We have our minimax equation, and we can feed it to RemezSolver.
Source code
#include "lol/math/real.h"
#include "lol/math/remez.h"

using lol::real;
using lol::RemezSolver;

real f(real const &y)
{
    real sqrty = sqrt(y);
    return sin(sqrty) / sqrty;
}

real g(real const &y)
{
    return re(sqrt(y));
}

int main(int argc, char **argv)
{
    RemezSolver<4, real> solver;
    solver.Run("1e-1000", real::R_PI_2 * real::R_PI_2, f, g, 40);
    return 0;
}
Several subtleties here:
- sqrt(y) is computed only once, to save computations
- f is not defined at zero; instead of defining it, we simply reduce the range to [1e-1000; π²/4] instead of [0; π²/4]
- since 1e-1000 is too small for any C++ literal number, we must use a string representation
Compilation and execution
Build and run the above code:
make
./remez
After all the iterations the output should be as follows:
Step 8 error: 3.338112377353099148424378937190071485401e-9
Polynomial estimate:
x**0*9.999999765898820673279342160490060830302e-1
+x**1*-1.666664763463971252758602707042821974958e-1
+x**2*8.332899823351751253473706862398940753675e-3
+x**3*-1.980089776279543126829999863143134719418e-4
+x**4*2.590488500536052274124208263889095025208e-6
We can therefore write the corresponding C++ function:
double fastsin(double x)
{
    const double a1 = 9.999999765898820673279342160490060830302e-1;
    const double a3 = -1.666664763463971252758602707042821974958e-1;
    const double a5 = 8.332899823351751253473706862398940753675e-3;
    const double a7 = -1.980089776279543126829999863143134719418e-4;
    const double a9 = 2.590488500536052274124208263889095025208e-6;
    return x * (a1 + x*x * (a3 + x*x * (a5 + x*x * (a7 + x*x * a9))));
}
Note that because of our change of variables, the polynomial coefficients are a1, a3, a5… instead of a0, a1, a2…
Analysing the results
The obtained polynomial needs 5 constants for a maximum error of about 3.3381e-9 over [-π/2; π/2]. The real 9th degree minimax polynomial on [-π/2; π/2] needs 10 constants for a maximum error of about… 3.3381e-9!
Again, the error curves match almost perfectly. It means that the even coefficients in the minimax approximation of an odd function play very little role.
Conclusion
You should now be able to give hints to the Remez solver through changes of variables, and find polynomials with fewer constants!
Please report any trouble you may have had with this document to sam@hocevar.net. You may then carry on to the next section: fixing lower-order parameters.
Source: http://lolengine.net/wiki/doc/maths/remez/tutorial-changing-variables
Xml web services spanish jobs...
Tasks: - Build API tool (NodeJS, ExpressJS) - Parse XML, JSON data - API calls - Datafeed upload - Scripting/scraping Required: - NodeJS, ExpressJS - Javascript - XML, JSON Budget: Fixed
We are looking app like urbanclap but some additional features
...:) am looking for translator for our website content. Our website include social/sports content. Our website must support 15+ languages. Spanish, Russian, Bosnian, Romanian, French, Serbian, Italian, etc.. Thanks.
.. And IT Services, I noticed your profile and would like to offer you my project. We can discuss any details over chat..
Hello, I am looking for someone who can find a link or a way to retrieve the xml of odds for some bookmakers (non english bookmakers). Please contact me privately I will provide you with more details. Regards
Need grocery store with delivery services and payment gateway integration app . In low budget . Share your rate, and screen shots. looking for spanish to english translator. for this position, some content writing skills are a big plus. need to have a pc or laptop and be available for a regular phone calls or emails thanks
Re-defining scope, please include examples in your offers. Ideally, i would like to understand what services can you offer around the budget of 20k - 60k INR. Looking forward to the quotes. .
Hello, I have some documents in English, I am looking for someone who can translate those to Spanish and Romanian. Thanks.
We are planning to build website with for Soccer Live score with Content XML or JSON . Must manage can for user Friendly and Responsive website. Please refer to [login to view URL] for more details.
...Pillar 2 - Software Development (Applications, Tools & Databases) for Companies Pillar 3 - Internet shops for companies Pillar 4 - IT maintenance for companies Pillar 5 - IT services for companies Pillar 6 - IT maintenance for private users Pillar 7 - IT service for private users Pillar 8 - IT training for companies and private users We need around
import 2 xml of different language , WPML setup , site is already done
Source: https://www.freelancer.pk/job-search/xml-web-services-spanish/
Hi, I'm hanging on a simple function call in my mainpage.aspx.vb. I want to get the physical server path to the Database1.mdb file.
Dim strMypath As String
strMypath = mypath()
then in my global.asax.vb I have this function:
Public Function mypath() As String
mypath = ""
Return Server.MapPath("/App_Data/Database1.mdb")
End Function
But this call doesn't work. I get a message in my mainpage.aspx.vb: "The name 'mypath' is not declared."
Can anyone help???!
Will.
Hello,
I'm using a static Timer in my Global.asax to run a method at regular intervals. When the method throws an exception, my application is stopped. I have used an empty catch to prevent exceptions from stopping the application, something like the code below. Is there a disadvantage to this approach?
protected void Application_Start(object sender, EventArgs e)
{
TimerStarter.StartTimer();
}
public class TimerStarter
{
private static System.Threading.Timer threadingTimer;
public static void StartTimer()
{
if (null == threadingTimer)
{
threadingTimer = new System.Threading.Timer(new System.Threading.TimerCallback(CheckData),HttpContext.Current, 0, 50000);
}
}
private static void CheckData(object sender)
{
try
{
// a method that reading xml file from another url
}
catch(Exception)
{
// empty catch to prevent stopping the application
}
}
}
Hi,

I've moved away from using SqlDataSources, and now I execute all my SQL in my code-behind. However, I'm looking to make my code-behind a little cleaner/neater. For example, on one page there are three stored procedures that must execute, and all three have these same 8 lines of code. How can I condense my code-behind to not always have to add this...
SqlConnection conn = default(SqlConnection);
SqlCommand comm = default(SqlCommand);
SqlDataReader reader = default(SqlDataReader);
string connectionString = ConfigurationManager.ConnectionStrings["xyz"].ConnectionString;
conn = new SqlConnection(connectionString);
comm = new SqlCommand();
comm.Connection = conn;
comm.CommandType = System.Data.CommandType.StoredProcedure;
Source: http://www.dotnetspark.com/links/14473-function-globalasax-is-not-running.aspx
gatsby-remark-vscode
A syntax highlighting plugin for Gatsby that uses VS Code’s extensions, themes, and highlighting engine. Any language and theme VS Code supports, whether built-in or via a third-party extension, can be rendered on your Gatsby site.
Includes OS dark mode support 🌙
v3 is out now! 🎉
If you’re updating from v2.x.x (or v1), see MIGRATING.md. New features are line numbers and diff highlighting (thanks @janosh for the latter!).
Table of contents
- Why gatsby-remark-vscode?
- Getting started
- Multi-theme support
Built-in languages and themes
- Using languages and themes from an extension
- Options reference
- Contributing
Why gatsby-remark-vscode?
JavaScript syntax highlighting libraries that were designed to run in the browser, like Prism, have to make compromises given the constraints of their intended environment. Since they get downloaded and executed whenever a user visits a page, they have to be ultra-fast and ultra-lightweight. Your Gatsby app, on the other hand, renders to HTML at build-time in Node, so these constraints don’t apply. So why make tradeoffs that don’t buy you anything? There’s no reason why the syntax highlighting on your blog should be any less sophisticated than the syntax highlighting in your code editor. And since VS Code is built with JavaScript and CSS, is open source, and has a rich extension ecosystem, it turns out that it’s pretty easy to use its highlighting engine and extensions and get great results. A few examples of where gatsby-remark-vscode excels:
Getting started
Install the package:
npm install --save gatsby-remark-vscode
Add to your
gatsby-config.js:
{
  // ...
  plugins: [{
    resolve: `gatsby-transformer-remark`,
    options: {
      plugins: [{
        resolve: `gatsby-remark-vscode`,
        options: {
          theme: 'Abyss' // Or install your favorite theme from GitHub
        }
      }]
    }
  }]
}
Write code examples in your markdown file as usual:
```js
this.willBe(highlighted);
```
Multi-theme support
You can select different themes to be activated by media query or by parent selector (e.g. a class or data attribute on the html or body element).
Reacting to OS dark mode with prefers-color-scheme

{
  theme: {
    default: 'Solarized Light',
    dark: 'Monokai Dimmed'
  }
}
Reacting to a parent selector
{
  theme: {
    default: 'Solarized Light',
    parentSelector: {
      // Any CSS selector will work!
      'html[data-theme=dark]': 'Monokai Dimmed',
      'html[data-theme=hc]': 'My Cool Custom High Contrast Theme'
    }
  }
}
Reacting to other media queries
The dark option is shorthand for a general-purpose media option that can be used to match any media query:

{
  theme: {
    default: 'Solarized Light',
    media: [{
      // Longhand for `dark` option.
      // Don’t forget the parentheses!
      match: '(prefers-color-scheme: dark)',
      theme: 'Monokai Dimmed'
    }, {
      // Proposed in Media Queries Level 5 Draft
      match: '(prefers-contrast: high)',
      theme: 'My Cool Custom High Contrast Theme'
    }, {
      match: 'print',
      theme: 'My Printer Friendly Theme???'
    }]
  }
}
Built-in languages and themes
The following languages and themes can be used without installing third-party extensions:
Languages
See all 55 languages
- Batch/CMD
- Clojure
- CoffeeScript
- C
- C++
- C Platform
- C#
- CSS
- Dockerfile
- F#
- Git Commit
- Git Rebase
- Diff
- Ignore
- Go
- Groovy
- Handlebars
- Hlsl
- HTML
- CSHTML
- PHP HTML
- INI
- Java
- JavaScript
- JSX
- JSON
- JSON with Comments
- Less
- Log
- Lua
- Makefile
- Markdown
- Objective-C
- Objective-C++
- Perl
- Perl 6
- PHP
- Powershell
- Pug
- Python
- R
- Ruby
- Rust
- Sass
- SassDoc
- ShaderLab
- Shell
- SQL
- Swift
- TypeScript
- TSX
- ASP VB .NET
- XML
- XML XSL
- YAML
Language names are resolved case-insensitively by any aliases and file extensions listed in the grammar’s metadata. For example, a code fence with C++ code in it can use any of these language codes. You could also check the built-in grammar manifest for an exact list of mappings.
Themes
Pro tip: a good way to preview themes is by flipping through them in VS Code. Here’s the list of included ones:
- Abyss
- Dark+ (default dark)
- Light+ (default light)
- Dark (Visual Studio)
- Light (Visual Studio)
- High Contrast
- Kimbie Dark
- Monokai Dimmed
- Monokai
- Quiet Light
- Red
- Solarized Dark
- Solarized Light
- Tomorrow Night Blue
Using languages and themes from an extension
If you want to use a language or theme not included by default, the recommended approach is to npm install it from GitHub, provided its license permits doing so. For example, you can use robb0wen/synthwave-vscode by running:

npm install robb0wen/synthwave-vscode
Then, in gatsby-config.js, use the options:

{
  theme: `SynthWave '84`, // From package.json: contributes.themes[0].label
  extensions: ['synthwave-vscode'] // From package.json: name
}
You can also clone an extension into your project, or build a .vsix file from its source, and specify its path in extensions:

{
  theme: {
    default: 'My Custom Theme',
    dark: 'My Custom Dark Theme'
  },
  extensions: ['./vendor/my-custom-theme', './vendor/my-custom-dark-theme.vsix']
}
Styles
The CSS for token colors and background colors is generated dynamically from each theme you use and included in the resulting HTML. However, you’ll typically want at least a small amount of additional styling to handle padding and horizontal scrolling. These minimal additional styles are included alongside the dynamically generated token CSS by default, but can be disabled by setting the injectStyles option to false. If you prefer bundling the styles through your app’s normal asset pipeline, you can simply import the CSS file:

import 'gatsby-remark-vscode/styles.css';
Class names
The generated HTML has ample stable class names, and you can add your own with the wrapperClassName and getLineClassName options. All (non-token-color) included styles have a single class name’s worth of specificity, so it should be easy to override the built-in styles.
Variables
The styles also include a few CSS variables you can define to override defaults. The included CSS is written as:
.grvsc-container {
  padding-top: var(--grvsc-padding-top, var(--grvsc-padding-v, 1rem));
  padding-bottom: var(--grvsc-padding-bottom, var(--grvsc-padding-v, 1rem));
  border-radius: var(--grvsc-border-radius, 8px);
}

.grvsc-line {
  padding-left: var(--grvsc-padding-left, var(--grvsc-padding-h, 1.5rem));
  padding-right: var(--grvsc-padding-right, var(--grvsc-padding-h, 1.5rem));
}

/* See “Line Highlighting” section for details */
.grvsc-line-highlighted {
  background-color: var(--grvsc-line-highlighted-background-color, transparent);
  box-shadow: inset var(--grvsc-line-highlighted-border-width, 4px) 0 0 0 var(--grvsc-line-highlighted-border-color, transparent);
}
The padding values are written with cascading fallbacks. As an example, let’s consider the top and bottom padding of .grvsc-container. Each is set to its own CSS variable, --grvsc-padding-top and --grvsc-padding-bottom, respectively. Neither of these is defined by default, so each uses the value of its fallback, which is another CSS variable, --grvsc-padding-v, with another fallback, 1rem. Since --grvsc-padding-v is also not defined by default, both padding properties will evaluate to the final fallback, 1rem.
So, if you want to adjust the vertical padding, you could add the following to your own CSS:
:root {
  --grvsc-padding-v: 20px; /* Adjust padding-top and padding-bottom */
}
If you want to adjust the padding-top or padding-bottom independently, you can use those variables:
:root {
  --grvsc-padding-top: 24px; /* Adjust padding-top by itself */
}
Tweaking or replacing theme colors
Since the CSS for token colors is auto-generated, it’s fragile and inconvenient to try to override colors by writing more specific CSS. Instead, you can use the replaceColor option to replace any value specified by the theme with another valid CSS value. This is especially handy for replacing static colors with variables if you want to support a “dark mode” for your site:

{
  replaceColor: oldColor => ({
    '#ff0000': 'var(--red)',
    '#00ff00': 'var(--green)',
    '#0000ff': 'var(--blue)'
  })[oldColor.toLowerCase()] || oldColor
}
Extra stuff
Inline code highlighting
To highlight inline code spans, add an inlineCode key to the plugin options and choose a marker string:

{
  inlineCode: {
    marker: '•'
  }
}
Then, in your Markdown, you can prefix code spans by the language name followed by the marker string to opt into highlighting that span:
Now you can highlight inline code: `js•Array.prototype.concat.apply([], array)`.
The syntax theme defaults to the one selected for code blocks, but you can control the inline code theme independently:

{
  theme: 'Default Dark+',
  inlineCode: {
    marker: '•',
    theme: {
      default: 'Default Light+',
      dark: 'Default Dark+'
    }
  }
}
See inlineCode in the options reference for more API details.
Line highlighting
gatsby-remark-vscode offers the same line-range-after-language-name strategy of highlighting or emphasizing lines as gatsby-remark-prismjs:
Comment directives are also supported:
You can customize the default background color and left border width and color for the highlighted lines by setting CSS variables:
```css
:root {
  --grvsc-line-highlighted-background-color: rgba(255, 255, 255, 0.2);
  --grvsc-line-highlighted-border-color: rgba(255, 255, 255, 0.5);
  --grvsc-line-highlighted-border-width: 2px;
}
```
Line numbers
With code fence info:
```js {numberLines}
import * as React from 'react';

React.createElement('span', {});
```
With code fence info specifying a starting line:
```js {numberLines: 21}
return 'blah';
```
With a comment:
```ts
function getDefaultLineTransformers(pluginOptions, cache) {
  return [
    one, // L4
    two,
    three
  ];
}
```
With both:
```ts {numberLines}
import * as React from 'react';

// ...

function SomeComponent(props) { // L29
  return <div />;
}
```
The line number cell’s styling can be overridden on the .grvsc-line-number class.
Diff highlighting
You can combine syntax highlighting with diff highlighting:
The highlight color can be customized with the CSS variables --grvsc-line-diff-add-background-color and --grvsc-line-diff-del-background-color. The default color is static and might not be accessible with all syntax themes. Consider contrast ratios and choose appropriate colors when using this feature.
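As an illustrative sketch (the color values below are my own, not the plugin's defaults), an override in your site CSS might look like:

```css
:root {
  /* Illustrative values; pick colors that meet contrast guidelines for your theme */
  --grvsc-line-diff-add-background-color: rgba(16, 185, 129, 0.2);
  --grvsc-line-diff-del-background-color: rgba(239, 68, 68, 0.2);
}
```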
Using different themes for different code fences
The theme option can take a function instead of a constant value. The function is called once per code fence with information about that code fence, and should return either a string or an object. See the following section for an example.
Arbitrary code fence options
Line numbers and ranges aren’t the only things you can pass as options on your code fence. A JSON-like syntax is supported:
```jsx{theme: 'Monokai', someNumbers: {1,2,3}, nested: {objects: 'yep'}}
<Amazing><Stuff /></Amazing>
```
gatsby-remark-vscode doesn’t inherently understand these things, but it parses the input and allows you to access it in the theme, wrapperClassName and getLineClassName functions:
```js
{
  theme: ({ parsedOptions, language, markdownNode, node }) => {
    // 'language' is 'jsx', in this case
    // 'markdownNode' is the gatsby-transformer-remark GraphQL node
    // 'node' is the Markdown AST node of the current code fence
    // 'parsedOptions' is your parsed object that looks like this:
    // {
    //   theme: 'Monokai',
    //   someNumbers: { '1': true, '2': true, '3': true },
    //   nested: { objects: 'yep' }
    // }
    return parsedOptions.theme || 'Dark+ (default dark)';
  },
  wrapperClassName: ({ parsedOptions, language, markdownNode, node }) => ''
}
```
Options reference
theme
The syntax theme used for code blocks.
- Default:
'Default Dark+'
Accepted types:
string: The name or id of a theme. (See Built-in themes and Using languages and themes from an extension.)
ThemeSettings: An object that selects different themes to use in different contexts. (See Multi-theme support.)
(data: CodeBlockData) => string | ThemeSettings: A function returning the theme selection for a given code block.
CodeBlockData is an object with properties:
language: The language of the code block, if one was specified.
markdownNode: The MarkdownRemark GraphQL node.
node: The Remark AST node of the code block.
parsedOptions: The object form of any code fence info supplied. (See Arbitrary code fence options.)
wrapperClassName
A custom class name to be set on the
pre tag.
- Default: None, but the class grvsc-container will always be on the tag.
Accepted types:
languageAliases
An object that allows additional language names to be mapped to recognized languages so they can be used on opening code fences.
- Default: None, but many built-in languages are already recognized by a variety of names.
- Accepted type:
Record<string, string>; that is, an object with string keys and string values.
Example:
{ languageAliases: { fish: 'sh' } }
Then you can use code fences like this:

```fish
ls -la
```

And they’ll be parsed as shell script (`sh`).
extensions
A list of third-party extensions to search for additional languages and themes. (See Using languages and themes from an extension.)
- Default: None
- Accepted type:
string[]; that is, an array of strings, where the strings are the package names of the extensions.
inlineCode
Enables syntax highlighting for inline code spans. (See Inline code highlighting.)
- Default: None
Accepted type: An object with properties:
theme: A string or ThemeSettings object selecting the theme, or a function returning a string or ThemeSettings object for a given code span. The type is the same as the one documented in the top-level theme option. Defaults to the value of the top-level theme option.
marker: A string used as a separator between the language name and the content of a code span. For example, with a marker of '•', you can highlight a code span as JavaScript by writing the Markdown code span as `js•Code.to.highlight("inline")`.
className: A string, or function returning a string for a given code span, that sets a custom class name on the wrapper code HTML tag. If the function form is used, it is passed an object parameter describing the code span with properties:
language: The language of the code span (the bit before the marker character).
markdownNode: The MarkdownRemark GraphQL node.
node: The Remark AST node of the code span.
injectStyles
Whether to add supporting CSS to the end of the Markdown document. (See Styles.)
- Default:
true
- Accepted type:
boolean
replaceColor
A function allowing individual color values to be replaced in the generated CSS. (See Tweaking or replacing theme colors.)
- Default: None
- Accepted type:
(colorValue: string, theme: string) => string; that is, a function that takes the original color and the identifier of the theme it came from and returns a new color value.
logLevel
The verbosity of logging. Useful for diagnosing unexpected behavior.
- Default:
'warn'
- Accepted values: From most verbose to least verbose,
'trace',
'debug',
'info',
'warn', or
'error'.
Contributing
Please note that this project is released with a Contributor Code of Conduct. By participating in this project you agree to abide by its terms.
See CONTRIBUTING.md for development instructions.
https://www.gatsbyjs.com/plugins/gatsby-remark-vscode/
I looked at the DORF implementers view. It appears that we still have no solution for record updates that change the type of a polymorphic field. I think this means we need to look to ways to solve this update issue other than just de-sugaring to typeclasses.

On Thu, Feb 23, 2012 at 5:01 PM, Greg Weber <greg at gregweber.info> wrote:
> On Thu, Feb 23, 2012 at 4:25 PM, AntC <anthony_clayden at clear.net.nz> wrote:
>> Greg Weber <greg <at> gregweber.info> writes:
>>
>>> Thanks to Anthony for his DORF proposal, and spending time to clearly
>>> explain it on the wiki.
>>>
>>> I have a big issue with the approach as presented - it assumes that
>>> record fields with the same name should automatically be shared. As I
>>> have stated before, to avoid circular dependencies, one is forced to
>>> declare types in a single module in Haskell that would ideally be
>>> declared in multiple modules. ...
>>
>> Thanks Greg, but I'm struggling to understand what the difficulty is with
>> sharing the same name, or why your dependencies are circular. Would you be
>> able to post some code snippets that illustrate what's going on (or what's
>> failing to go on)?
>>
>> Or perhaps this is an experience from some application where you weren't
>> using Haskell? Could you at least describe what was in each record type?
>
> You can get an idea of things in the section 'Problems with using the
> module namespace mechanism' here:
>
> The attachment that Chris Done left to demonstrate his types seems to
> be overwritten. I will bring back his text as it seems his point does
> need to be driven home. A lot of Haskell projects have a separate Types
> module to avoid issues with circular dependencies.
>
>>> Continuing the database example, I will have multiple tables with a
>>> 'name' column, but they do not have the same meaning.
>>>
>>> If I have a function:
>>>
>>> helloName person = "Hello, " ++ person.name
>>>
>>> The compiler could infer that I want to say hello to inanimate objects!
>>
>> So the first question is:
>> * do your fields labelled `name` all have the same type? (Perhaps all String?)
>> * what "meaning" does a name have beyond being a String?
>>
>> Your code snippet doesn't give the types, but I guess that you intend
>> `person` to be of type `Person`. Then you can give a signature:
>>
>> helloName :: Person -> String
>>
>> If person can be 'anything' then the type inferred from the bare function
>> equation would be:
>>
>> helloName :: r{ name :: String } => r -> String
>>
>> So you could say hello to your cat, and your pet rock. You couldn't say
>> hello to a pile of bricks (unless it's been given a name as an art
>> installation in the Tate Gallery ;-)
>
> Of course we know that we can always add type annotations to clarify
> things. The question is whether we want to be opt-out and have to
> explain to people that they can end up with weakly typed code when they
> don't want to share fields.
>
>>> Note that I am not completely against abstraction over fields, I just
>>> don't think it is the best default behavior.
>>
>> So what is the best default behaviour, and what is the range of other
>> behaviours you want to support?
>
> I believe the best default is to not share fields, but instead have
> the programmer indicate at or outside of the record definition that
> they want to share fields. Basically just use type-classes how they
> are used now - as opt-in. But I am OK with making an especially easy
> way to do this with records if the current techniques for defining
> typeclasses are seen as too verbose.
>
>>> And the section "Modules and qualified names for records" shows that
>>> the proposal doesn't fully solve the name-spacing issue.
>>
>> I think it shows only that record field labels can run into accidental
>> name clash in exactly the same way as everything else in Haskell (or
>> indeed in any programming language). And that Haskell has a perfectly
>> good way for dealing with that; and that DORF fits in with it.
>>
>> Greg, please give some specific examples! I'm trying to understand, but
>> I'm only guessing from the fragments of info you're giving.
>>
>> AntC
>>
>> _______________________________________________
>> Glasgow-haskell-users mailing list
>> Glasgow-haskell-users at haskell.org
http://www.haskell.org/pipermail/glasgow-haskell-users/2012-February/021945.html
GA Updates

With the general availability of Batch, we are also announcing a price change to the service. The resource management and job scheduling capabilities will be free. You will not pay an additional cost for running jobs above the charges for compute, storage and networking resources that your application uses. Based on customer feedback from the preview, we are introducing a new API, unifying the Batch and Batch Apps namespaces released at preview. The core job and task API is being introduced now. The Batch documentation, client library and samples will be updated and expanded over the coming days. We’ll provide guidance on how to use and migrate to the new namespace. The management experience for Batch is moving to the new Azure portal. You can monitor pools and jobs, and we will be delivering regular updates with additional capabilities. The Batch Apps capabilities such as a job splitter and task processor remain in preview, and will be included in the new API through regular updates over the next few months.

Customers Using Batch Today

Since the preview release of Azure Batch last year, we’ve been working closely with early adopters to make sure we have a scalable and reliable service. These customers are using Batch today for their production workloads. Towers Watson, the world’s largest provider of actuarial software, chose Azure Batch to quickly bring to market a new cloud service for their RiskAgility FM application that enables life insurers to run financial models that accurately reflect their products, risk, and capital exposure. This service opens up new opportunities for Towers Watson and their customers.

“We have a long history in HPC, moving to Azure Batch was a breath of fresh air. No infrastructure on premises, everything is in the cloud, it automatically scales up and scales down with an auto-scale formula that is very flexible, and you can define how you want your nodes to start and stop – you can have them be always available or spin them up on demand.” – Cameron Murray, Senior Consultant, Towers Watson

Azure Batch is also ideal for media workloads such as rendering and transcoding. Access to compute on demand helps service providers efficiently scale to meet the needs of their customers. Azure Batch works behind the scenes for Microsoft services, providing tens of millions of core hours per month for teams like Azure Media Services and Xbox Video. TVEverywhere is a Microsoft partner in the UK that helps organizations make the most of TV and video in the cloud. Their new service VidCoding, built on Azure Batch, is an automated cloud based encoding platform that offers high throughput and low cost to prepare content for distribution on the web, mobile or to the broadcast market.

“Using Azure Batch we are able to scale up and down from very little activity to handling large volumes of encodes from clients across the world in a very cost efficient manner, we can also optimise our service by selecting the appropriate machine instance for every encode depending on the size of the task and the priority of the job.” - Iolo Jones, CEO, TV Everywhere

Our team is excited about this release, and is working hard to add new capabilities such as support for MPI applications and Linux virtual machines later this year. We look forward to hearing what interesting applications you have built on top of Azure Batch. As always, please send us any feedback, comments and any questions you may have through the comments below, the Batch forum or by contacting us directly here. Thank you!
https://azure.microsoft.com/nb-no/blog/azure-batch-generally-available/
I have a fairly complex model which I is bound to my UI but also is an object for my SQLite-Net table.
Here is the property I'm trying to change
[Table("MyObjectModel")]
public class MyObjectModel : INotifyPropertyChanged
{
    ...

    public string ObjectsType
    {
        get { return _objectsType; }
        set
        {
            _objectsType = value;
            OnPropertyChanged();
        }
    }

    ...
}
I edit a copy of it, made with a shallow copy:

EditingItem = originalItem.NewCopy();

public MyObjectModel NewCopy()
{
    return (MyObjectModel)this.MemberwiseClone();
}
I then change the ObjectsType property as I want to, let's say to "STRINGVALUE".
The submit button then saves it to my repository and pops the page, passing the item back as a parameter:
private async Task SubmitObjectEdit()
{
    _objectItemRepository.UpdateObjectItem(EditingItem);
    await CoreMethods.PopPageModel(data: EditingItem, modal: true);
}
On the popped-to screen, I immediately put this passed object into a previous object's place in the observable collection. ObjectsType can be anything; let's say it was "PREVIOUSVALUE". It correctly gets set to "STRINGVALUE". However, before execution is complete, the ObjectsType setter is called a final time and set to NULL. I check the call stack and see that this is still happening from the popped page's SubmitObjectEdit() method.

Why does this happen? It seems to be a binding issue or some type of problem with FreshMvvm's PopPageModel. But I'm not explicitly setting the item to null in code, so what gives?
Things I've tried:
Changing to oneway binding in both directions (does not work)
Setting the object property directly instead of copying the passed object into the observable collection (works, but I have to do this with many fields and it's very messy).
Answers
On the setter property, always check for a null value:

set
{
    if (value != null)
    {
        _objectsType = value;
        OnPropertyChanged();
    }
}
And with EditingItem = originalItem.NewCopy(), do you create EditingItem as a new object?
Hi @Harshita. This is the workaround I have employed. I just thought that there must be some bug with binding that causes it to be set to null like this. I'll probably accept it as the answer unless anyone else has any input on it. What if I actually want to set the item to null though?
And yes, I create a new copy with NewCopy. I wanted to be sure I wasn't referencing the object from the previous page in the observable collection, but I wasn't sure how the references would work for this. It also allows me to cancel edits made to an object without having to undo them all manually.
https://forums.xamarin.com/discussion/comment/360521
This probably looks like a duplicate of other questions on SO, but I was wondering about the way Java treats null.

Eg:

public class Work {
    String x = null;
    String y;
    Integer z;

    public static void main(String arg[]) {
        Work w = new Work();
        w.x = w.x + w.y;        // works (Line 1)
        w.x = w.x + w.y + w.z;  // works (Line 2)
        w.z = w.z + w.z;        // NullPointerException (Line 3)
        System.out.println(w.x.charAt(4));
    }
}
Commenting out Line 3 prints n, whereas uncommenting it throws NullPointerException. If I'm not wrong, for Line 1 and Line 2 the null is being implicitly type cast to String. But what happens in Line 3?
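For illustration, here is a small self-contained sketch (my own, not from the question) of the two behaviours: the string concatenation operator converts a null operand to the text "null" (via String.valueOf), while arithmetic + on Integer operands unboxes them first, so a null Integer throws NullPointerException.

```java
public class NullConcat {
    // String '+' converts null operands to the text "null"
    static String concat(String a, String b) {
        return a + b;
    }

    // Arithmetic '+' unboxes Integer operands (a.intValue() + b.intValue()),
    // so a null operand throws NullPointerException
    static String tryAdd(Integer a, Integer b) {
        try {
            int sum = a + b;
            return Integer.toString(sum);
        } catch (NullPointerException e) {
            return "NPE";
        }
    }

    public static void main(String[] args) {
        System.out.println(concat(null, null)); // nullnull
        System.out.println(tryAdd(null, null)); // NPE
    }
}
```

This is why Lines 1 and 2 in the snippet above succeed (w.x ends up holding the string "nullnull", whose charAt(4) is 'n') while Line 3 fails at the unboxing step.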
http://www.howtobuildsoftware.com/index.php/how-do/hfn/java-nullpointerexception-null-concatenation-behaviour-of-null-in-java
So I have been looking around and have not been able to find an answer to this. The program is for the most part complete and runs with one exception. When I enter "stop" it still runs through the program before exiting (i.e. after I enter stop it still asks for hourly rate and hours worked before exiting the program). I am just wondering what I could do to fix this so that when I type stop the program immediately exits. Thanks in advance for your response!
Here is my code:
// Week 3 - Day 5 - Payroll Program Part 2
// John Sanders
// IT 215

import java.util.Scanner; // class scanner

public class Payroll2 // set public class
{
    public static void main( String args[] )
    {
        Scanner input = new Scanner( System.in );

        String cleanInputBuffer; // input
        String empName;          // input employee name
        double hourlyRate;       // input hourly rate
        double hoursWorked;      // input hours worked
        double weeklyPay;        // weekly pay amount
        boolean end = false;     // is the input name stop?

        while ( end == false ) // as long as end is false, continue
        {
            System.out.print( "Enter Employee's Name:" ); // prompt to enter employee name
            empName = input.nextLine(); // input employee name

            if ( empName.toLowerCase().equals( "stop" ) ) // if employee name = stop
                end = true; // when stop is detected, change the boolean, which ends the while loop

            while ( hourlyRate < 0 ) // while the hourly rate is < 0
            {
                System.out.print( "Enter a positive hourly rate:" ); // print enter a positive hourly rate
                hourlyRate = input.nextDouble();
            }

            while ( hoursWorked < 0 ) // while the hours worked are < 0
            {
                System.out.print( "Enter a positive number of hours worked:" ); // print enter a positive number of hours worked
                hoursWorked = input.nextDouble();
            }

            weeklyPay = hourlyRate * hoursWorked; // multiply hourly rate by hours worked for weekly pay

            System.out.printf( "The employee %s was paid $ %.2f this week", empName, weeklyPay ); // print final line
            System.out.println();

            cleanInputBuffer = input.nextLine();
        } // end outer while
    } // end main method
} // end class Payroll2
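One possible fix (my sketch, not from the thread): instead of setting a flag and letting the rest of the loop body run, break out of the loop as soon as "stop" is read, so the rate and hours prompts are skipped. The loop shape, reduced to the part that matters and made testable by reading from any Scanner:

```java
import java.util.Scanner;

public class PayrollLoop {
    // Reads names until "stop" (case-insensitive); returns how many
    // employees would have been processed before stopping.
    static int run(Scanner input) {
        int processed = 0;
        while (input.hasNextLine()) {
            String empName = input.nextLine();
            if (empName.toLowerCase().equals("stop")) {
                break; // exit immediately; no further prompts this iteration
            }
            // ...prompt for hourly rate and hours worked, compute pay here...
            processed++;
        }
        return processed;
    }

    public static void main(String[] args) {
        // Simulated console input instead of System.in
        Scanner demo = new Scanner("Alice\nBob\nstop\nCarol\n");
        System.out.println(run(demo)); // 2
    }
}
```

With break, control jumps straight past the pay-calculation code to the end of the loop, which is what the flag-based version fails to do.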
https://www.daniweb.com/programming/software-development/threads/170919/payroll-program-2-exiting-problem
This is a brief post on how to draw animated GIFs with Python using matplotlib. I tried the code shown here on a Ubuntu machine with ImageMagick installed. ImageMagick is required for matplotlib to render animated GIFs with the save method.
Here's a sample animated graph:
A couple of things to note:
- The scatter part of the graph is unchanging; the line is changing.
- The X axis title is changing in each frame.
Here's the code that produces the above:
import sys

import numpy as np
import matplotlib.pyplot as plt
from matplotlib.animation import FuncAnimation

fig, ax = plt.subplots()
fig.set_tight_layout(True)

# Query the figure's on-screen size and DPI. Note that when saving the figure to
# a file, we need to provide a DPI for that separately.
print('fig size: {0} DPI, size in inches {1}'.format(
    fig.get_dpi(), fig.get_size_inches()))

# Plot a scatter that persists (isn't redrawn) and the initial line.
x = np.arange(0, 20, 0.1)
ax.scatter(x, x + np.random.normal(0, 3.0, len(x)))
line, = ax.plot(x, x - 5, 'r-', linewidth=2)

def update(i):
    label = 'timestep {0}'.format(i)
    print(label)
    # Update the line and the axes (with a new xlabel). Return a tuple of
    # "artists" that have to be redrawn for this frame.
    line.set_ydata(x - 5 + i)
    ax.set_xlabel(label)
    return line, ax

if __name__ == '__main__':
    # FuncAnimation will call the 'update' function for each frame; here
    # animating over 10 frames, with an interval of 200ms between frames.
    anim = FuncAnimation(fig, update, frames=np.arange(0, 10), interval=200)
    if len(sys.argv) > 1 and sys.argv[1] == 'save':
        anim.save('line.gif', dpi=80, writer='imagemagick')
    else:
        # plt.show() will just loop the animation forever.
        plt.show()
If you want a fancier theme, install the seaborn library and just add:
import seaborn
Then you'll get this image:
A word of warning on size: even though the GIFs I show here only have 10 frames and the graphics is very bare-bones, they weigh in at around 160K each. AFAIU, animated GIFs don't use cross-frame compression, which makes them very byte-hungry for longer frame sequences. Reducing the number of frames to the bare minimum and making the images smaller (by playing with the figure size and/or DPI in matplotlib) can help alleviate the problem somewhat.
https://eli.thegreenplace.net/2016/drawing-animated-gifs-with-matplotlib/
DRFSAMV — Django Rest Framework + Social Auth + Mongo + VueJS (Part 1)
Announcement (05.2019)
As the Google+ service has been killed (R.I.P) by Google, this article series isn’t supported anymore, but you can read anytime just out of curiosity :)
- —
Hello!
This “tutorial” is split into a few parts, where I show how I solved the problem of RESTful social authentication via Google with Django and VueJS.
Part 1 — Setting up Django with databases — PostgreSQL and MongoDB
Part 2 — Setting up VueJS
Part 3 — Configure Google API
Part 4 — Setting up backend social auth API
Part 5 — Communicate with API from VueJS
Part 6 — Raw project ready to use
How does it start?
I was fighting with merging two absolutely awesome frameworks I used recently — VueJS and Django, designed for frontend and backend development respectively. I also faced the problem of RESTful social authentication (using Google) and found how to fix these gaps in Django and VueJS.

I also asked the community how to solve a similar problem, and as a result I got “write your own authentication, it’s not so hard”, “don’t use other packages”, or mostly “I don’t know, never tried”. So instead of reinventing the wheel, I decided to fix one a bit and show the way I found the solution.

I’m aware there might be more suitable solutions for such apps, like Flask or Tornado at the backend and React or Angular at the frontend.

I faced many problems building my own REST webapp (using DjangoRestFramework): developing quickly, storing data in MongoDB with PostgreSQL alongside for further modeling, and developing the frontend with VueJS and the backend the way I want to, with working social authentication. Now it works like a charm and I’d like to show you how to achieve that.

I decided to split it into two databases — a relational PostgreSQL one for social auth and MongoDB for the BIG STUFF, so we won’t have a mess in the future.
What I’m using?
- Python 3.5
- NodeJS
- Virtualenv
- VueJS
- and bought hosting where I could create MongoDB and serve datas (even the cheapest one).
Important info: all was cooked on the Windows 10, so if you’re using Unix OS, please check out the documentation what to do in your case. I assume also you know at least in a tiny bit how to work with Python/Django and VueJS.
If you’re interested about stack I’m using, at the end of this part, I’m sharing with links to read more.
Music plays on!
In this case let’s start, with avoiding surfing on the internet, StackOverflow and wondering with cup of coffee “what the hell is happening there”…
and let’s make two separated folders, which can be uploaded to two different servers — folder frontend and backend.
mkdir frontend
mkdir backend
and now, move into backend folder
cd backend
and build our backend app with virtualenv — creating a separate Python environment that won’t mess with the global Python and its installed packages; this also prepares us for further communication with frontend apps.
virtualenv venv
venv\Scripts\activate
Let’s install all packages we will need for that:
- Django (1.11.7)
- Django Rest Framework (3.7.3)
- Django MongoEngine (0.15.0)
- Django Cors Headers (2.1.0)
- Django Rest Framework MongoEngine (2.0.2)
- Django Rest Framework Social OAuth (1.0.8)
- Django Oauth2 Toolkit
- Django rest-framework Social Oauth2
- psycopg2 (for PostgreSQL purposes)

Note: in the newest version of Django (2.0) the django.db.backends.postgresql_psycopg2 engine has been replaced by django.db.backends.postgresql, but here we’re not using that.
With a command (including additional packages like markdown django-filters and so on):
pip install django==1.11.7 djangorestframework markdown django-filter django-cors-headers django-rest-framework-mongoengine rest-social-auth psycopg2 django-rest-framework-social-oauth2
and separately:
pip install -U mongoengine
you can also run the command

pip freeze

to check whether you have a list like the one below, which confirms you can go further with the tutorial:
certifi==2018.1.18
chardet==3.0.4
defusedxml==0.5.0
Django==1.11.7
django-braces==1.12.0
django-cors-headers==2.1.0
django-filter==1.1.0
django-oauth-toolkit==1.0.0
django-rest-framework-mongoengine==3.3.1
django-rest-framework-social-oauth2==1.0.8
djangorestframework==3.7.7
idna==2.6
Markdown==2.6.11
mongoengine==0.15.0
oauthlib==2.0.6
psycopg2==2.7.3.2
PyJWT==1.5.3
pymongo==3.6.0
python3-openid==3.1.0
pytz==2017.3
requests==2.18.4
requests-oauthlib==0.8.0
rest-social-auth==1.2.0
six==1.11.0
social-auth-app-django==1.2.0
social-auth-core==1.6.0
urllib3==1.22
Building simple app
Still in the backend folder, call the command below to create django project:
django-admin startproject _main
and let’s move the inner _main folder and the manage.py file up into the backend folder, so finally we should have in the backend folder:
- folder _main, which contains settings.py, urls.py, wsgi.py and __init__.py
- our venv folder
- and file manage.py
in the end your backend structure should look like:
─ ${root}
├── _main
├── backend
│ ├── _main
│ │ ├── __init__.py
│ │ ├── settings.py
│ │ ├── urls.py
│ │ ├── wsgi.py
│ ├── venv
│ └── manage.py
└── frontend
Setting up the PostgreSQL Database
At the first, let’s take care of our PostgreSQL connection. In file settings.py find lines DATABASES, like:
DATABASES = {
'default': {
'ENGINE': 'django.db.backends.sqlite3',
'NAME': os.path.join(BASE_DIR, 'db.sqlite3'),
}
}
and replace to:
DATABASES = {
'default': {
'ENGINE': 'django.db.backends.postgresql_psycopg2',
'NAME': 'databasename',
'USER': 'databaseuser',
'PASSWORD': 'yourpassword',
'HOST': 'localhost',
'PORT': 22222,
}
}
Given port is of course an example. Put the correct one and check if the database is connecting properly by just running local server:
python manage.py runserver
If you experience errors in the console, reconfigure the database or check that the connection details you entered are correct.
Setting up the MongoDB
Put these lines at the top of the settings.py file:
from mongoengine import *
and from the server you bought you have to create a new MongoDB database, get its name, login, password, IP (or HTTP address) and port, and put:
connect(
db='databasename',
username='username',
password='password',
host='address to database like mongodb:// and ip here',
port=27017,
alias="mongo"
)
the field alias is the name of database we want to connect with, pushing new datas. So, if you have multiple Mongo databases, like:
connect(
db='databasename',
username='username',
password='password',
host='address to database like mongodb:// and ip here',
port=27017,
alias="mongo"
)

connect(
db='databasename_2',
username='username_2',
password='password_2',
host='address to database like mongodb:// and ip here',
port=27017,
alias="mongo_2"
)
you can point precisely to the database you want to store data in: mongo or mongo_2.
The next step is setting up a simple frontend environment using VueJS. I’ll also talk a bit about mixins, how to use them, and route guards to prevent users who are not logged in from reaching parts of the site we don’t want them to. We will also set up our login configuration for Google OAuth2.
Next Part:
Part 2 — Setting up VueJS
https://medium.com/@dobrotek6/drfsamv-django-rest-framework-social-auth-mongo-vuejs-part-1-c5c907dd7b69
The import format keys for genbooks use the syntax:

$$$Path/To/Entry
Entry's text

The current restriction is that the import file must be in the sequential order that the book is intended to read in, i.e. if I ++ the module, it will go to the next entry that you had in your import file and NOT the preorder entry in the tree. This should be changed sometime in the near future, but I just wanted to let you know the current status.

Christian Renz wrote:
>> and a general book module. I need to know things like what the file
>> formats should be and what values to place in the .conf file
>> (especially for the gen book module).
>
> It works the same way for the lexdict modules. Concerning the genbook
> modules, I'd like to know that myself :-)
>
> Greetings,
> Christian
>
http://www.crosswire.org/pipermail/sword-devel/2002-August/015346.html
Template:Country
This documentation is transcluded from Template:Country.

The {{country}} template brings in a bright, bold, stylised full-width banner across the top of a wiki page about a country.
Usage
If you wish to use this on your country's wiki page, place the template near the top of the page. Note that it assumes the existence of a number of headings, which you will need to create lower down your page: #Events #Projects #Status and #Guidelines.
All template parameters are like those used in {{Place}}, including :
- map
- The map displayed at the bottom is optional; it can be disabled by setting map= to an empty value or to map=no.
- It can also be static (instead of slippy, the default) by setting map=static (the displayed static map is still clickable to browse the dynamic map on the main OSM website).
The only additional parameters are:
- flag
- the name of the country flag image (without the "File:" namespace prefix)
- hello
- to replace or translate the default text "Hello!"
See also
- {{CountryLang}} to translate/internationalize this template.
- {{Place}}
https://wiki.openstreetmap.org/wiki/Template:Country
13 July 2011 04:39 [Source: ICIS news]
GUANGZHOU (ICIS)--China National Offshore Oil Corp (CNOOC) has shut down production at its Suizhong (SZ) 36-1 oilfield after an oil spill.
“The oilfield remains shut now. At the moment, we are not able to tell when production will be resumed,” said CNOOC spokeswoman Ding Jianchun.
The volume of the spill was estimated at 0.1 to 0.15cbm (cubic metres) and was spread over an area of one square kilometre, CNOOC said in a statement.
The source of the leak was disconnected immediately after the spill and most of the oil slick was cleared by late Tuesday, the statement said.
The equipment fault was also fixed on Tuesday, it added.
The SZ36-1 is an independent oilfield owned by CNOOC, with CNOOC China Limited Tianjin branch in charge of its daily operation and management, the statement said.
This is the third time in two months that CNOOC has had its operations disrupted at different production sites. On 4 June, a spill occurred at the Peng Lai 19-3 oilfield, while a fire broke out at the company’s 240,000 bbl/day Huizhou refinery on 11 J
|
http://www.icis.com/Articles/2011/07/13/9476992/chinas-cnooc-shuts-suizhong-36-1-oilfield-after-oil-spill.html
|
CC-MAIN-2014-15
|
refinedweb
| 196
| 71.34
|
This article was originally posted as “C#: Extracting Zipped Files with SharpZipLib” at Programmer’s Ranch on 4th October 2014. Minor amendments have been applied in this republished version. The article illustrates how to reference and work with a third-party library. Nowadays it’s simpler to do this using NuGet.
In this article, we’re going to learn how to extract files from a .zip file using a library called SharpZipLib. Additionally, we will also learn how to work with third-party libraries. This is because SharpZipLib is not part of the .NET Framework, and is developed by an independent team.
To get started, we need to create a new Console Application in SharpDevelop (or Visual Studio, if you prefer), and also download SharpZipLib from its website. When you extract the contents of the .zip file you downloaded, you’ll find different versions of ICSharpCode.SharpZipLib.dll for different .NET versions. That’s the library that will allow us to work with .zip files, and we need the one in the net-20 folder (the SharpZipLib download page says that’s the one for .NET 4.0, which this article is based on).
Now, to use ICSharpCode.SharpZipLib.dll, we need to add a reference to it from our project. We’ve done this before in my article “C#: Unit Testing with SharpDevelop and NUnit“. You need to right click on the project and select “Add Reference”:
In the window that comes up, select the tab called “.NET Assembly Browser”, since we want to reference a third party .dll file. Open the ICSharpCode.SharpZipLib.dll file, click the “OK” button in the “Add Reference” dialog, and you’re ready to use SharpZipLib.
In fact, you will now see it listed among the project’s references in the Projects window (on the right in the screenshot below):
Great, now how do we use it?
This is where our old friend, Intellisense, comes in. You might recall how we used it to discover things you can do with strings in one of my earliest articles, “C# Basics: Working with Strings“. This applies equally well here: as you begin to type your using statement to import the functionality you need from the library, Intellisense suggests the namespace for you:
Now SharpZipLib has a lot of functionality that allows us to work with .zip files, .tar.gz files and more. In our case we just want to experiment with .zip files, so we’re fine with the following:
using ICSharpCode.SharpZipLib.Zip;
That thing is called a namespace: it contains a set of classes. If you type it into your Main() method and type in a dot (.) after it, you’ll get a list of classes in it:
A namespace is used to categorise a set of related classes so that they can’t be confused with other classes with the same name. Java’s Vector class (a kind of resizable array) is a typical example. If you create your own Vector class to represent a mathematical vector, then you might run into a naming conflict. However, since the Java Vector is actually in the java.util namespace, then its full name is actually java.util.Vector.
This works the same way in C#. The List class you’ve been using all along is actually called System.Collections.Generic.List. We usually don’t want to have to write all that, which is why we put in a using statement at the top with the namespace.
When we’re working with a new namespace, however, typing the full name and using Intellisense allows us to discover what that namespace contains, without the need to look at documentation. In the screenshot above, we can already guess that ZipFile is probably the class we need to work with .zip files.
Intellisense also helps us when working with methods, constructors and properties:
I suppose you get the idea by now. Let’s finally actually get something working. To try this out, I’m going to create a zip file with the following structure:
+ test1.txt
+ folder
  + test2.txt
+ test3.txt
I’ve used WinRAR to create the zip file, but you can use anything you like. I named it “zipfile.zip” and put it in C:\ (you might need administrator privileges to put it there… otherwise put it wherever you like). Now, we can easily obtain a list of files and folders in the .zip file with the following code:
public static void Main(string[] args)
{
    using (ZipFile zipFile = new ZipFile(@"C:\zipfile.zip"))
    {
        foreach (ZipEntry entry in zipFile)
        {
            Console.WriteLine(entry.Name);
        }
    }

    Console.ReadLine();
}
This gives us:
We use the using keyword to close the .zip file once we’re done – something we’ve been doing since my article “C#: Working with Streams“. You realise you need this whenever you see either a Dispose() or a Close() method in Intellisense. We are also looping over the zipFile itself – you realise you can do a foreach when you see a GetEnumerator() method in Intellisense. Each iteration over the zipFile gives us a ZipEntry instance, which contains information about each item in the .zip file. As you can see in the output above, entries comprise not just files, but also folders.
Since we want to extract files, folders are of no interest for us. We can use the IsFile property to deal only with files:
if (entry.IsFile) Console.WriteLine(entry.Name);
In order to extract the files, I’m going to change the code as follows:
public static void Main(string[] args)
{
    using (ZipFile zipFile = new ZipFile(@"C:\zipfile.zip"))
    {
        foreach (ZipEntry entry in zipFile)
        {
            if (entry.IsFile)
            {
                Console.WriteLine("Extracting {0}", entry.Name);

                Stream stream = zipFile.GetInputStream(entry);

                using (StreamReader reader = new StreamReader(stream))
                {
                    String filename = entry.Name;
                    if (filename.Contains("/"))
                        filename = Path.GetFileName(filename);

                    using (StreamWriter writer = File.CreateText(filename))
                    {
                        writer.Write(reader.ReadToEnd());
                    }
                }
            }
        }
    }

    Console.ReadLine();
}
Note that I also added the following to work with File and Path:

using System.IO;
Extracting files involves a bit of work with streams. The zipFile’s GetInputStream() method gives you a stream for a particular entry (file in the .zip file), which you can then read with a StreamReader as if you’re reading a normal file.
I added a bit of code to handle cases when files are in folders in the .zip file – I am finding them by looking for the “/” directory separator in the entry name, and then extracting only the filename using Path.GetFileName(). [In practice you might have files with the same name in different folders, so you’d need to actually recreate the folders and put the files in the right folders, but I’m trying to keep things simple here.]
Finally, we read the contents of the entry using reader.ReadToEnd(), and write it to an appropriately named text file. If you run this program and go to your project’s bin\Debug folder in Windows Explorer, you should see the test1.txt, test2.txt and test3.txt files with their proper contents. [Again, the proper way to deal with streams is to read chunks into a buffer and then write the file from it, but I’m using reader.ReadToEnd() for the sake of simplicity.]
Excellent! In this article, we have learned to list and extract files from a .zip file. We also learned why namespaces are important. But most importantly, we have looked at how to reference third party .dlls and discover how to use them based only on hints from Intellisense and our own experience. In fact, the above code was written without consulting any documentation whatsoever, solely by observing the intellisense for SharpZipLib. While it is usually easier to just find an example on the internet (possibly in some documentation), you’ll find that this is a great skill to have when documentation is not readily available.
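As a cross-language footnote (not from the original article): the same list-then-extract flow, including the flattening of folder paths, can be sketched against Python's standard-library zipfile module. The function name and the flattening behaviour mirror the C# example above and carry the same caveat about clashing filenames.

```python
import os
import zipfile

def extract_flat(zip_path, dest_dir="."):
    """List a .zip and extract every file entry, flattening any folder
    structure (mirrors the C# example above; same name-clash caveat)."""
    with zipfile.ZipFile(zip_path) as zf:
        for info in zf.infolist():
            if info.is_dir():  # folder entries carry no file contents
                continue
            print("Extracting", info.filename)
            # keep only the filename, dropping any "folder/" prefix
            flat_name = os.path.basename(info.filename)
            with zf.open(info) as src, \
                 open(os.path.join(dest_dir, flat_name), "wb") as dst:
                dst.write(src.read())
```

Like the StreamReader-based C# version, this reads each entry in one go for simplicity rather than copying in chunks.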
|
http://gigi.nullneuron.net/gigilabs/2016/08/07/
|
CC-MAIN-2019-51
|
refinedweb
| 1,315
| 65.83
|
How to set the text of a label in Excel (C#)?
- Thursday, December 14, 2006 7:15 AM
Hi all,
There is an existing Excel document within a label named "testLabel", and I need write a C# prgramme to open this Excel document and set the value of "testLabel" to be "MSDN Sample", How can I do that?
Now, I get the worksheet object; its type is "Microsoft.Office.Interop.Excel.Worksheet", but how can I convert it to "Microsoft.Office.Tools.Excel"? And how can I get the "testLabel" and set its text?
Regards!
All Replies
- Thursday, December 14, 2006 12:36 PMModerator
Does this have to be a VSTO solution, or would you be doing this, for example, through a Windows Forms application?
The namespace Microsoft.Office.Tools is only valid for VSTO solutions. The value of a cell in a worksheet, opened using automation, can be set using only Microsoft.Office.Interop.
- Friday, December 15, 2006 12:40 AM
Thanks Cindy!
My solution is a website. There's a button on the page; clicking the button exports some data from the page to an Excel workbook. This target workbook is created from a template workbook. There are some labels in the template, and I also need to set some labels' text to the data from the page.
Now I use Microsoft.Office.Interop.Excel to get the Excel Application, Excel Workbook and Excel Worksheet. I know the labels are in the worksheet, but how can I get them and set their text?
The way now I'm using is
((InteropExcel.Label)myWorksheet.Labels("solutionNameLbl")).Text = "Xinwen's Solution";
But it is not correct. At runtime I get "COMException was unhandled by user code", with "Exception from HRESULT: 0x800A03EC".
- Friday, December 15, 2006 1:30 AM
This forum is intended for VSTO-related questions. Please refer to the Excel newsgroup for support on the use of Excel's object model.
Thanks.
General programming issues: excel.programming newsgroup
- Friday, December 15, 2006 1:41 AM
As Cindy points out, the namespace Microsoft.Office.Tools is only valid for VSTO solutions, and Worksheet.Labels is a method of the Microsoft.Office.Tools.Excel.Worksheet object.
Not only that, but also, as the documentation clearly states: "The Labels method supports the Visual Studio Tools for Office infrastructure and is not intended to be used directly from your code."
One way you could redesign your code to meet your requirements is to use named ranges instead. You can access named ranges through the Excel object model, without using the VSTO Tools objects. As Daniel points out, you should consult the Excel documentation to see how to work with named ranges or cell references.
For example, look at this article that describes various ways of interacting with cell ranges:
...and for further details on Excel programming, please look at the link that Daniel pointed out:
- Friday, December 15, 2006 2:16 AM
Thank Andrew and Cindy so much!!!
Good luck!
|
http://social.msdn.microsoft.com/Forums/en-US/vsto/thread/28b3ba75-9544-4a32-ad3c-6c703c0f8c39
|
CC-MAIN-2013-20
|
refinedweb
| 498
| 55.44
|
perlmeditation by w-ber

In a legacy Perl module where I currently fix bugs at work (yes, [id://670914|the same]), there is a certain hash that is accessed globally in the main namespace. To protect the innocent, let the name of the hash be foo. (In reality, it did have a descriptive name.) In a certain part of the file, I noticed the following two consecutive lines:

%foo ={};
unlink %foo;

I'm guessing the intent was to empty the hash, then to make really, really sure that it is empty by calling some deletion function... What do you make of it? Why would a programmer write something like this? The funniest explanation will win.

UPDATE: Well, you learn something new every day. [unlink] takes a list as an argument, so the above would actually attempt to delete a file named HASH(0x814dc30) (the numbers may vary on your machine). Still, assigning a hash reference into a hash is not what he meant to do either, that's for sure.

--
print "Just Another [href:// Adept]\n";
|
http://www.perlmonks.org/?displaytype=xml;node_id=670917
|
CC-MAIN-2015-22
|
refinedweb
| 203
| 72.56
|
Sam Steingold wrote:
>> gcc -shared -oregshared.o regex.o regexi.o
>> ../lisp.run -M ../lispinit.mem
>> (load"preload.lisp") ; preload
>> (sys::dynload-modules "regshared.so" '("regexp"))
>> (load"regexp.fas") ; post-load
>try the same with the i18n module (see src/TODO for expected errors).
I tried it on the Linux box. It worked. The tiny i18n testsuite passed.
I didn't even use -fPIC (IIRC, somebody explained whether -fPIC is needed long time ago in this list).
src/TODO mentions i18n.dll. So obviously, failure to load the code has to do with cygwin or some such on MS-Windows.
My wild guess is that dlopen() emulation is lacking there. It looks like it fails to resolve symbols present and exported in the current binary.
This would not surprise me that much since I believe a .dll is like an executable on its own, i.e. fully linked, with a set of entry points, whereas a .so is just an object file, waiting to be linked against a binary.
E.g. I'd even expect dlsym("_main",NULL-handle-means-local-code) to fail, however the FFI seems to be able to locate symbols?!?
As to a work-around for cygwin, don't ask me now.
Probably Bruno would mention libtld or some such (if tld was created specifically because of limitations or absence of dlsym), but I don't know if that's the answer, or just an answer among many others.
Regards,
Jorg Hohle.
Hi,
>actually, I don't see why clisp should exit if dynload-modules fails.
>could you please debug this?
There's nothing to debug. SYS::dynload-modules invokes functionality normally only used when clisp starts up. In the startup code, all clisp does upon encountering an error is to call exit().
>> module regexp requires packages REGEXP
>> -- and CLISP exits :-(
Maybe SF still holds the patch I once submitted, where packages needed by modules were created on the fly?
However, perhaps more is needed than just this patch to make the scenario work. It needs a bit more thought.
>> It's not so nice that using dynload precludes creating images :-(
>I think this can be fixed in a fairly straightforward way.
This use-case also needs more thought.
Regards,
Jorg Hohle.
Hello people,
On Monday 19 December 2005 16:11, Sam Steingold wrote:
> > Yet Debian might then still want to let CLISP depend on all packages.
> > You'd need to install postgres to be able to install clisp?!?
>
> this is for the debian people to handle.
> they have the notion of "weak dependencies" ("recommendation").
If the program loads the library at runtime I can use Suggests or Recommends,
but if the clisp library is linked to the library the dependency will be
generated automaticly.
In general, if there is something that can be improved in the debian packages,
just yell.
Groetjes, Peter
--
signature -at- pvaneynd.mailworks.org
"God, root, what is difference?" Pitr | "God is more forgiving." Dave Aronson

1. clisp configure,1.93,1.94 (Jörg Höhle)
2. clisp/src ChangeLog,1.5184,1.5185 stream.d,1.549,1.550 (Sam Steingold)
3. clisp/src stream.d,1.550,1.551 (Sam Steingold)
--__--__--
Message: 1
From: Jörg Höhle <hoehle@...>
To: clisp-cvs@...
Subject: clisp configure,1.93,1.94
Date: Mon, 19 Dec 2005 10:55:02 +0000
Reply-To: clisp-devel@...
Update of /cvsroot/clisp/clisp
In directory sc8-pr-cvs1.sourceforge.net:/tmp/cvs-serv7880
Modified Files:
configure
Log Message:
configure: fix patch 20051121, need quotes around if [ ... = "set" ]
Index: configure
===================================================================
RCS file: /cvsroot/clisp/clisp/configure,v
retrieving revision 1.93
retrieving revision 1.94
diff -u -d -r1.93 -r1.94
--- configure 21 Nov 2005 18:08:50 -0000 1.93
+++ configure 19 Dec 2005 10:55:00 -0000 1.94
@@ -650,7 +650,7 @@
cannot be provided.
Please do this:
EOF
- if [ ${CC+set} = set ]; then
+ if [ "${CC+set}" = "set" ]; then
echo " CC='$CC'; export CC" 1>&2
fi
cat <<EOF 1>&2
--__--__--
Message: 2
From: Sam Steingold <sds@...>
To: clisp-cvs@...
Subject: clisp/src ChangeLog,1.5184,1.5185 stream.d,1.549,1.550
Date: Mon, 19 Dec 2005 14:53:03 +0000
Reply-To: clisp-devel@...
Update of /cvsroot/clisp/clisp/src
In directory sc8-pr-cvs1.sourceforge.net:/tmp/cvs-serv25287
Modified Files:
ChangeLog stream.d
Log Message:
(find_open_file): fix compilation with gcc4
Index: stream.d
===================================================================
RCS file: /cvsroot/clisp/clisp/src/stream.d,v
retrieving revision 1.549
retrieving revision 1.550
diff -u -d -r1.549 -r1.550
--- stream.d 18 Dec 2005 17:10:22 -0000 1.549
+++ stream.d 19 Dec 2005 14:53:00 -0000 1.550
@@ -7683,8 +7683,10 @@
var object stream = Car(tail); tail = Cdr(tail);
if (TheStream(stream)->strmtype == strmtype_file
&& TheStream(stream)->strmflags & flags
- && file_id_eq(fid,&ChannelStream_file_id(stream)))
- return (void*)&(pushSTACK(stream));
+ && file_id_eq(fid,&ChannelStream_file_id(stream))) {
+ pushSTACK(stream);
+ return (void*)&STACK_0;
+ }
}
return NULL;
}
Index: ChangeLog
===================================================================
RCS file: /cvsroot/clisp/clisp/src/ChangeLog,v
retrieving revision 1.5184
retrieving revision 1.5185
diff -u -d -r1.5184 -r1.5185
--- ChangeLog 18 Dec 2005 17:10:33 -0000 1.5184
+++ ChangeLog 19 Dec 2005 14:53:00 -0000 1.5185
@@ -1,3 +1,7 @@
+2005-12-19 Sam Steingold <sds@...>
+
+ * stream.d (find_open_file): fix compilation with gcc4
+
2005-12-18 Sam Steingold <sds@...>
base check_file_re_open() on unique file ID, not truenames that
--__--__--
Message: 3
From: Sam Steingold <sds@...>
To: clisp-cvs@...
Subject: clisp/src stream.d,1.550,1.551
Date: Mon, 19 Dec 2005 15:18:48 +0000
Reply-To: clisp-devel@...
Update of /cvsroot/clisp/clisp/src
In directory sc8-pr-cvs1.sourceforge.net:/tmp/cvs-serv30066/src
Modified Files:
stream.d
Log Message:
Index: stream.d
===================================================================
RCS file: /cvsroot/clisp/clisp/src/stream.d,v
retrieving revision 1.550
retrieving revision 1.551
diff -u -d -r1.550 -r1.551
--- stream.d 19 Dec 2005 14:53:00 -0000 1.550
+++ stream.d 19 Dec 2005 15:18:42 -0000 1.551
@@ -7657,10 +7657,10 @@
return stream;
}
-# UP: add a stream to the list of open streams O(open_files)
-# add_to_open_streams()
-# <> stream
-# can trigger GC
+/* UP: add a stream to the list of open streams O(open_files)
+ add_to_open_streams()
+ <> stream
+ can trigger GC */
local maygc object add_to_open_streams (object stream) {
pushSTACK(stream);
var object new_cons = allocate_cons();
@@ -7674,7 +7674,7 @@
> struct file_id fid = file ID to match
> uintB* data = open flags to filter
< pointer to the stream saved on STACK or NULL
- i.e., on success, addes 1 element to STACK */
+ i.e., on success, adds 1 element to STACK */
global void* find_open_file (struct file_id *fid, void* data);
global void* find_open_file (struct file_id *fid, void* data) {
var object tail = O(open_files);
@@ -7691,28 +7691,27 @@
return NULL;
}
-# */
global maygc object make_file_stream (direction_t direction, bool append_flag,
bool handle_fresh) {
var decoded_el_t eltype;
--__--__--
_______________________________________________
clisp-cvs mailing list
clisp-cvs@...
End of clisp-cvs Digest
|
http://sourceforge.net/p/clisp/mailman/clisp-devel/?viewmonth=200512&viewday=20
|
CC-MAIN-2015-18
|
refinedweb
| 1,184
| 69.38
|
Struts 2.1.8 Login Form
Login form in Struts2 version 2.3.16
how to make a login form in Struts2.
Check more Struts 2
tutorials.
Download... of creating the Login form in Struts 2:
In the above video you will find all the steps necessary to create the login
form in Struts 2. This application 2
Struts 2 we can extend DispatchAction class to implement a common session validation in struts 1.x. how to do the same in the struts2
struts2 - Struts
Struts2 and Ajax Example how to use struts2 and ajax
Login - Struts
Struts Login page I want to page in which user must login to see... type that url... Login Page after logging that page must return to my page.please...;Next Page to view the session value</a><p></body>
struts2 - Struts
struts2
Struts 2 Login Application
Struts 2 Login Application
Developing Struts 2 Login Application
In this section we are going to develop login application based on Struts 2
Framework. Our current login application
login application - Struts
of Login and User Registration Application using Struts Hibernate and Spring.
In this tutorial you will learn
1. Develop application using Struts
2. Write...login application Can anyone give me complete code of user login Training
development and extensibility.
Apply for Struts 2 Training
Strut2 contains the combined features of Struts Ti and WebWork 2 projects
that advocates higher...
Day--2
Struts2 Actions
Simple Action
Redirect
Struts2 ajax validation example.
Struts2 ajax validation example.
In this example, you will see how to validate login through Ajax in
struts2.
1-index.jsp
<html>
<...;
2_ LoginActionForm.jsp
<%@taglib
Struts2 Actions
.
However with struts 2 actions you can get different return types other than...
name of the action to
be executed.
Struts 2 processes an
action...Struts2 Actions
When properties file
struts2 properties file How to set properties file in struts 2 ?
Struts 2 Format Examples
Struts 2 Tutorial
Login Program
Login Program Login program in struts to set username & password in session. and this session is available in all screens for authentication...,
Please read it at:
Login/Logout With Session
Accessing Session Object
Thanks
Struts 2 Session Scope
Struts 2 Session Scope
In this section, you will learn to create an AJAX
application in Struts2...;
<struts>
<!-- Rose India Struts 2 Tutorials -->
Login/Logout With Session
class that handles the login request.
The Struts 2 framework provides a base...Login/Logout With Session
In this section, we are going to develop a login/logout
application with session
Running and Testing Struts 2 Login application
Running and Testing Struts 2 Login application
Running Struts 2 Login Example
In this section we will run... developed and
tested your Struts 2 Login application. In the next section we
Login Action Class - Struts
Login Action Class Hi
Any one can you please give me example of Struts How Login Action Class Communicate with i-batis
Captcha in Struts2 Application
of using Captcha in Struts 2 application. We have provided the code and given the detailed description of using captcha in Struts 2 web application.
Read at Java Captcha in Struts 2 Application.
Thanks
Struts 2 - Beganner
Struts 2 - Beganner Hi !
I am new to Struts2 and I need clarification over Difference between Action Proxy and Action Mapper in struts2 and When both perform their activity??????
Many Thanks
Venkateshlu
Login Form
Login Form
Login form in struts: Whenever you goes to access data from... for a
login form using struts.
UserLoginAction Class: When you download Login upload image public_html
struts2 upload image public_html How to upload images in public_html file in Struts 2 application.
Thanks in 2 + hibernate
Struts 2 + hibernate How to integrate Struts 2 and Hibernate?
Its... struts 2 and hibernate.
Struts2.2.1 and hibernate integration application
Integrate Hibernate to struts2
session maintain in struts 2
session maintain in struts 2 hi i am new to Struts 2.....
in Action class i wrote
**HttpSession session = request.getSession();
session.setAttribute("name", name1);**
but in jsp class
String session_name=(String
Struts 2 Actions
Struts 2 Actions
In this section we will learn about Struts 2 Actions, which is a fundamental
concept in most of the web application frameworks. Struts 2 Action are the
important concept in Struts 2
Integrate Hibernate to struts2.
Integrate Hibernate to struts2. How to Integrate Struts Hibernate project samples
struts 2 project samples please forward struts 2 sample projects like hotel management system.
i've done with general login application and all.
Ur answers are appreciated.
Thanks in advance
Raneesh
Example of login form validation in struts2.2.1framework.
;
2- Login.jsp It display a user interface for login user.
Which...Example of login form validation in struts2.2.1 framework.
In this example, we will introduce you to about the login form validation in
struts2.2.1 framework
Application |
Struts 2 |
Struts1 vs
Struts2 |
Introduction... |
Developing
Login Application in Struts 2 |
Running
and testing application |
Login/Logout With Session |
Connecting to MySQL Database in Struts
Ajax validation in struts2.
Ajax validation in struts2.
In this section, you will see how to validate fields of form in
struts2.
1-index.jsp
<html>
<head>...;
2_ LoginActionForm.jsp
<%@taglib
arg[]){
new Login();
}
}
2)NextPage.java:
import javax.swing.*;
import... java.awt.event.*;
class Login
{
JButton SUBMIT;
JLabel label1,label2;
final JTextField text1,text2;
{
final JFrame f=new JFrame("Login Form
How to use session in struts 1.3
How to use session in struts 1.3 i want to know how to use session in Struts specially in logIn authentication
login how to create login page in jsp
Here is a jsp code that creates the login page and check whether the user is valid or not.
1...;tr><td></td><td><input type="submit" value="Login">1 vs Struts2
;
Feature
Struts 1
Struts 2
Action classes....
While in Struts 2, an Action class implements an Action interface, along with other interfaces use optional and custom services. Struts 2 provides a base
program code for login page in struts by using eclipse
program code for login page in struts by using eclipse I want program code for login page in struts by using eclipse
;
<hr />
Please visit the following link:
Struts Login...struts <p>hi here is my code can you please help me to solve...;input
<input type="submit" value="login"/>
|
http://www.roseindia.net/tutorialhelp/comment/88634
|
CC-MAIN-2015-18
|
refinedweb
| 1,078
| 56.55
|
I am trying to get some data off a Brazilian government website.
The data is accessible through a form with some javascript. I am able to get the form and fill it out, but have trouble submitting it (a button needs to be clicked). I am using the library mechanize (which includes clientform) but of course would be happy to try others.
Below is the website and the code so far. Any help or pointers would be highly appreciated.
Here is the website:
And here is the my code so far:
import mechanize, urllib, urllib2

# Start Browser
br = mechanize.Browser(factory=mechanize.RobustFactory())
# User-Agent (this is cheating, ok?)
br.addheaders = [('User-agent', 'Mozilla/5.0 (Windows; U; Windows NT 6.0; en-US; rv:1.9.0.6')]
br.open('')  # already choose state here for now
html = br.response().read()
# print html

# Select the form
br.select_form(nr=0)  # since there is only 1 form on the site
print br.form

# get all values for states and munis
# to eventually loop over them
states = br.form.possible_items("sgUf")
munis = br.form.possible_items("sgUe")

# Enter info in web form
br.form.set_all_readonly(False)  # make all form items changeable
#br.form['acao'] = 'Pesquisar'  # send action "pesquisar"??
br.form.set(True, states[1], "sgUf")  # state
br.form.set(True, munis[1], "sgUe")  # municipality
br.form.set(True, "11", "candidatura")  # post (prefeito-11 or vereador-13)
br.form.set(True, "2", "parcial")  # parcial 1 or 2 (choose 2)

# Submit the form -- does not work yet
request2 = br.form.click()
#request2 = br.submit()
try:
    response2 = urllib2.urlopen(request2)
except urllib2.HTTPError, response2:
    pass

print response2.geturl()
print response2.info()  # headers
print response2.read()  # body
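As an aside (not part of the original post): one way to sanity-check what the submission payload should look like is to build the POST request by hand with the standard library. The field names below are taken from the mechanize code above; the URL and the field values are placeholders, not the real ones, and this sketch uses Python 3's urllib rather than the Python 2 urllib2 of the post.

```python
from urllib.parse import urlencode
from urllib.request import Request

# Field names taken from the mechanize code above; the values and the
# URL are placeholders -- substitute the real ones from the live form.
form_fields = {
    "sgUf": "AC",          # state (placeholder value)
    "sgUe": "01120",       # municipality (placeholder value)
    "candidatura": "11",   # 11 = prefeito, 13 = vereador
    "parcial": "2",
    "acao": "Pesquisar",   # the button the page's JavaScript normally "clicks"
}

body = urlencode(form_fields).encode("ascii")
req = Request("http://example.invalid/resultado", data=body, method="POST")
req.add_header("Content-Type", "application/x-www-form-urlencoded")

# req now carries the payload a browser would send; passing it to
# urllib.request.urlopen(req) would perform the actual submission.
print(req.get_method(), req.data)
```

Comparing this hand-built payload against what the browser sends (e.g. via its network inspector) often reveals a missing hidden field or button name.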
Thanks for any help in advance!
-Thomas
|
https://www.daniweb.com/programming/software-development/threads/280270/submitting-a-web-form-with-python-using-mechanize-or-clientform
|
CC-MAIN-2016-50
|
refinedweb
| 288
| 55.2
|
Browser abstraction for Touch and Mouse.
Warning: This is not an official Google or Dart project.
import 'dart:html';
import 'package:touch/touch.dart' as touch;

void main() {
  var div = document.querySelector('#element');
  new touch.Listener()
    ..onStart.listen((_) { })
    ..onMove.listen((_) { })
    ..onEnd.listen((_) { });
}
Add this to your package's pubspec.yaml file:
dependencies:
  touch: "^0.1.0"
You can install packages from the command line:
with pub:
$ pub get
Alternatively, your editor might support pub get. Check the docs for your editor to learn more.
Now in your Dart code, you can use:
import 'package:touch/touch.dart';
We analyzed this package on Jun 12, 2018, and provided a score, details, and suggestions below. Analysis was completed with status completed using:
Detected platforms: web
Primary library: package:touch/touch.dart with components: html, js.
Fix analysis and formatting issues.
Analysis or formatting checks reported 3 errors.
Strong-mode analysis of lib/src/listener.dart failed with the following error:
line: 81 col: 67
The method tear-off '_point' has type '(PointerEvent) → Point<num>' that isn't of expected type '(Event) → Point<num>'. This means its parameter or return type does not match what is expected.
Package is getting outdated.
The package was released 53
touch.dart.
|
https://pub.dartlang.org/packages/touch
|
CC-MAIN-2018-26
|
refinedweb
| 206
| 54.18
|
I recently restored FATE testing on SPARC/Solaris. One test is presently failing, where ffmpeg crashes when given the argument -pix_fmts. The crash occurs due to arg being NULL in the fprintf mentioned in the patch below. The patch fixes the specific symptom, and seems to be a good idea anyway as in the general case it appears that arg can be NULL.

However this doesn't fix the underlying problem that when processing -pix_fmts (and others like it), the error case is being executed to begin with. This is because several of the arg processing functions, the one for -pix_fmts included, are accessed via function pointer int (*fp)(const char *, const char *) but, in reality, are functions declared & implemented as void f(void). On sparc, executing these with the expectation of returning int does not yield a 0 return (as it apparently does on other platforms).

I was going to send patches to fix this, too, but that would require changing all the arg processing functions to match the pointer signature and thought it best to put the problem out there and collect any comments/input before working on such a large change. Note that most of the problematic functions would have to be changed to both take args (where they don't presently, and hence the args would be dummy) and to return a value (fixed 0) to fix this to be "proper". Input welcome.

-Jeff

---
 cmdutils.c | 2 +-
 1 files changed, 1 insertions(+), 1 deletions(-)

diff --git a/cmdutils.c b/cmdutils.c
index cd6d133..f538d98 100644
--- a/cmdutils.c
+++ b/cmdutils.c
@@ -281,7 +281,7 @@ unknown_opt:
         *po->u.float_arg = parse_number_or_die(opt, arg, OPT_FLOAT, -INFINITY, INFINITY);
     } else if (po->u.func_arg) {
         if (po->u.func_arg(opt, arg) < 0) {
-            fprintf(stderr, "%s: failed to set value '%s' for option '%s'\n", argv[0], arg, opt);
+            fprintf(stderr, "%s: failed to set value '%s' for option '%s'\n", argv[0], arg ? arg : "[null]", opt);
             exit(1);
         }
     }
--
1.7.3.4
http://ffmpeg.org/pipermail/ffmpeg-devel/2011-June/112879.html
Introducing XML canonical form
Making XML suitable for regression testing, digital signatures, and more
XML's heritage lies in the document world, and this is reflected in its syntax rules. Its syntax is looser than that of data formats concerned with database records. An XML parser converts an encoded form of an XML document (the encoding being specified in the XML declaration) to an abstract model representing the information in the XML document. The W3C formalized this abstract model as the XML Infoset (see Related topics), but a lot of XML processing has to focus on the encoded source form, which allows a lot of lexical variance: Attributes can come in any order; whitespace rules are flexible in places such as between an element name and its attributes; several means can be used for representing characters, and for escaping special characters, and so on. Namespaces introduce even more lexical flexibility (such as a choice of prefixes). The result is that you can have numerous documents that are exactly equivalent in XML 1.0 rules, while being very different under byte-by-byte comparison of the encoded source.
This lexical flexibility causes problems in areas such as regression testing and digital signatures. Suppose you create a test suite that includes a case that expects the document in Listing 1 as a correct output.
Listing 1. Sample XML document
<doc>
   <a a1="1" a2="2">123</a>
</doc>
If you do proper XML testing you will want to recognize the document in Listing 2 as a correct output.
Listing 2. Equivalent form of XML document in Listing 1
<?xml version="1.0" encoding="UTF-8"?>
<doc>
   <a a2="2" a1="1" >123</a>
</doc>
The in-tag white-space is different, attributes are in different order, and character entities have been replaced with the equivalent literal characters -- but the Infosets are nevertheless the same. It would be hard to establish this sameness through byte-by-byte comparison. In the case of digital signatures, you might want to be sure that when you send a document through a messaging system it has not been corrupted or tampered with in the process. To do so, you would want to have a cryptographic hash or full-blown digital signature of the document. However, if you send Listing 1 through the messaging system, it could through normal processing emerge looking like Listing 2. If so, a simple hash or digital signature won't match, even though the document hasn't materially changed.
The W3C's solution to this was developed as part of digital signature specifications for XML. The W3C defines canonical XML (see Related topics), which is a normalized lexical form for XML in which all of the allowed variations have been removed, and strict rules are imposed to allow consistent byte-by-byte comparison. The process of converting to canonical form is known as canonicalization (popularly abbreviated "c14n"). In this article you will learn the rules of XML canonical form.
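To make the failure mode concrete, here is a small sketch (mine, not from the article; it uses Python 3.8's xml.etree.ElementTree.canonicalize rather than the PyXML module discussed later) showing that two equivalent forms hash differently until they are canonicalized:

```python
import hashlib
from xml.etree.ElementTree import canonicalize  # Python 3.8+

doc1 = '<doc>\n  <a a1="1" a2="2">123</a>\n</doc>'
doc2 = ('<?xml version="1.0" encoding="UTF-8"?>\n'
        '<doc>\n  <a  a2="2" a1="1"   >123</a>\n</doc>')

# The raw bytes differ, so a naive hash comparison fails...
print(hashlib.sha256(doc1.encode()).hexdigest() ==
      hashlib.sha256(doc2.encode()).hexdigest())   # -> False

# ...but the canonical forms are byte-for-byte identical.
print(canonicalize(doc1) == canonicalize(doc2))    # -> True
```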
The rules of canonical form
The best overview of the c14n process is the following list (which I've edited), provided in the specification:
- The document is encoded in UTF-8.
- Line breaks are normalized to "#xA" on input, before parsing.
- Attribute values are normalized, as if by a validating processor.
- Default attributes are added to each element, as if by a validating processor.
- CDATA sections are replaced with their literal character content.
- Character and parsed entity references are replaced with the literal characters (excepting special characters).
- Special characters in attribute values and character content are replaced by character references (as usual for well-formed XML).
- The XML declaration and DTD are removed. (Note: I always recommend using an XML declaration in general, but I appreciate the reasoning behind omitting it in canonical XML form.)
- Empty elements are converted to start-end tag pairs.
- Whitespace outside the document element and within start and end tags is normalized.
- Attribute value delimiters are set to quotation marks (double quotes).
- Superfluous namespace declarations are removed from each element.
- Lexicographic order is imposed on the namespace declarations and attributes of each element.
Don't worry if some of these rules seem a bit unclear at this point. I'll provide longer explanations and examples of the more common rules in action. In this article, I don't cover any of the c14n steps that involve DTD validation. I have mentioned the XML Infoset several times, but interestingly enough the W3C chose to define c14n not in terms of the Infoset, but rather in terms of the XPath data model which is a simpler (and some argue cleaner) data model than the Infoset. This is probably a minor detail that will not affect much of your understanding of canonical form, but it's worth keeping in mind if you also have to work with Infoset-based technologies.
Canonicalizing tags
Tags are canonicalized by applying specific white space rules within the tag, as well as a specific order of namespace declarations and regular attributes. The following is my own informal sequence of the format of a canonicalized start tag:
- The open angle bracket (<), followed by the element QName (prefix plus colon plus local name).
- The default namespace declaration, if any, then all other namespace declarations, in alphabetical order of the prefixes they define. Omit all redundant namespace declarations (those that have already been declared in an ancestor element, and have not been overridden). Use a single space before each namespace declaration, no space on either side of the equals sign, and double quotes around the namespace URI.
- All attributes in alphabetical order, preceded by a single space, with no space on either side of the equals sign, and double quotes around the attribute value.
- Finally, a close angle bracket (>).
A canonical form end tag is a much simpler matter: An open angle bracket and slash (</) are followed by the element QName, and then the close angle bracket (>). Listing 3 is a sample of XML that is not in canonical form.
Listing 3. Sample of XML that is not in canonical form
<?xml version="1.0" encoding="UTF-8"?>
<doc xmlns:x="http://example.com/x" xmlns="http://example.com/default">
   <a  a2="2"   a1="1"   >123</a>
   <b y:a2='2' y:a1='1' a3='"3"' xmlns:y='http://example.com/y' xmlns="http://example.com/default"></b>
</doc>
Listing 4 is the same document in canonical form.
Listing 4. Listing 3 as canonical XML
<doc xmlns="http://example.com/default" xmlns:x="http://example.com/x">
   <a a1="1" a2="2">123</a>
   <b xmlns:y="http://example.com/y" a3="&quot;3&quot;" y:a1="1" y:a2="2"></b>
</doc>
The following changes are required to canonicalize Listing 3:
- Remove the XML declaration (the document is already in UTF-8, so no conversion is necessary).
- Place the default namespace declaration on doc before the declaration of any other namespaces (the one for prefix x in this case).
- Reduce the whitespace within the a start tag so that there is a single space before each attribute.
- Remove the redundant default namespace declaration on the b start tag.
- Make sure the remaining namespace declaration (for the y prefix) comes before all other attributes.
- Place the remaining attributes in alphabetical order of their QNames (for example, "a3" then "y:a1" then "y:a2").
- Change the quote delimiter on the xmlns:y namespace declaration and the y:a1, y:a2, and a3 attributes from a single quote (') to a double quote ("), which in the case of a3 also requires that embedded double quote (") characters be escaped to &quot;.
I tested the canonical form conversion using the c14n module for Python, which comes with PyXML (see Related topics). Listing 5 is the code I used to canonicalize Listing 3 to Listing 4.
Listing 5. Python code to canonicalize XML
from xml.dom import minidom
from xml.dom.ext import c14n

doc = minidom.parse('listing3.xml')
canonical_xml = c14n.Canonicalize(doc)
print canonical_xml
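PyXML is long unmaintained; if you just need canonicalization today, note that since Python 3.8 the standard library ships a comparable entry point (it implements the newer C14N 2.0 rules rather than Canonical XML 1.0, so treat this as an approximate stand-in for the snippet above):

```python
from xml.etree.ElementTree import canonicalize  # Python 3.8+

# Attributes come back in sorted order, and the empty element is
# expanded to a start-end tag pair, as canonical form requires.
print(canonicalize('<doc><a a2="2" a1="1"/></doc>'))
# -> <doc><a a1="1" a2="2"></a></doc>
```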
Canonicalizing character data
Character data in canonical form is basically as literal as possible: Character entities are resolved to the raw Unicode (which is then serialized as UTF-8); CDATA sections are replaced with their raw content; and more changes along these lines. This is true for character data in attribute values as well as content. Attributes are also normalized according to rules for their DTD type, but this mostly affects documents that use a DTD, which I do not cover in this article. Listing 6 is a sample document that's based in part on an example in the c14n spec.
Listing 6. Sample XML for demonstrating canonicalization of character data
<?xml version="1.0" encoding="ISO-8859-1"?>
<doc>
   <text>First line
Second line</text>
   <value>&#x32;</value>
   <compute><![CDATA[value>"0" && value<"10" ?"valid":"error"]]></compute>
   <compute expr='value>"0" &amp;&amp; value<"10" ?"valid":"error"'>valid</compute>
</doc>
Listing 7 is the same document in canonical form.

Listing 7. Listing 6 as canonical XML

<doc>
   <text>First line
Second line</text>
   <value>2</value>
   <compute>value&gt;"0" &amp;&amp; value&lt;"10" ?"valid":"error"</compute>
   <compute expr="value>&quot;0&quot; &amp;&amp; value&lt;&quot;10&quot; ?&quot;valid&quot;:&quot;error&quot;">valid</compute>
</doc>
The following changes are required to canonicalize Listing 6:
- Remove the XML declaration and convert to UTF-8.
- Change the character reference for the digit 2 to the actual numeral 2.
- Replace the CDATA section with its contents, and escape the close angle brackets (>) with &gt;, ampersands (&) with &amp;, and the open angle brackets (<) with &lt;.
- Replace the single quotes used for the expr attribute with double quotes, and then escape the embedded double quote (") characters to &quot;.
One important step I didn't cover in Listings 6 and 7 is the conversion to UTF-8, which is not easy to illustrate in an article listing. Imagine that the source document has the character reference &#169; (which represents the copyright sign) in content. The canonical form would replace this with a UTF-8 sequence comprising hex byte C2 followed by hex byte A9.
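That byte sequence is easy to verify (my one-liner, not part of the original article):

```python
# The copyright sign (U+00A9) encodes to the two bytes C2 A9 in UTF-8.
print("\u00a9".encode("utf-8"))  # -> b'\xc2\xa9'
```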
Don't forget the exclusive option
Sometimes you actually want to sign or compare a subtree of an XML document, rather than the whole thing. Perhaps you want to sign only the body of a SOAP message and ignore the envelope. The W3C provides for this in the exclusive canonical form specification, which is almost entirely concerned with sorting out namespace declarations within and outside the target subtree.
I mentioned the potential variance caused by the choice of prefixes. XML Namespaces stipulates that prefixes are inconsequential, and so two files that vary only in choice of namespace prefixes should be treated as the same. Unfortunately, c14n does not cover this case. Some perfectly valid XML processing operations may modify prefixes, so beware of this potential issue.
Canonical XML is an important tool to keep at hand. You may not be immediately involved in XML-related security or software testing, but you'll be surprised at how often the need for c14n pops up once you are familiar with it. It's one of those things that helps cut a lot of corners that you may have never thought of avoiding in the first place.
Related topics
- Skim the Canonical XML specifications, which are produced by the W3C XML Signature working group. Canonical XML Version 1.0 is a W3C Recommendation (March 2001), as is Exclusive XML Canonicalization Version 1.0 (July 2002). Both can be rather hard to read, because they use very formal terms to define Canonicalization rules.
- Check out the XML Information Set (Infoset) (W3C Recommendation, February 2004), an abstract, low-level model of the information content of an XML document.
- Find more XML resources on the developerWorks XML zone, including Uche Ogbuji's Thinking XML column.
- Find out how you can become an IBM Certified Developer.
https://www.ibm.com/developerworks/xml/library/x-c14n/
Yes, I’m still on the Windows 8.1 preview even though Windows 8.1 has gone out of the door and is generally available as per posts below;
Windows 8.1 now available!
Now ready for you- Windows 8.1 and the Windows Store
Visual Studio 2013 released to web!
I just need to get around to installing it
Meanwhile, I wanted to experiment a little more with what I started on in my previous post Windows 8.1 Preview- Multiple Windows and specifically I wanted to see what the experience was like for an application that had multiple windows on a machine where a user may have multiple monitors with different display settings and, further, might drag windows from one monitor to another. I’d been kind of curious as to how that would work and especially in the case where those windows contained content that had loaded resources using the standard mechanisms – how dynamic are those mechanisms with respect to display settings changing while the app is running?
Experimenting with DisplayInformation
I knocked together a sort of skeleton which would allow me to create 2 windows. It took about 5 minutes to add a button to the first screen;
and stuck a little bit of code behind that to create my secondary window;
public sealed partial class MainPage : Page
{
    public MainPage()
    {
        this.InitializeComponent();
    }
    async void OnSecondWindow(object sender, RoutedEventArgs e)
    {
        if (!this.secondWindowShown)
        {
            var view = CoreApplication.CreateNewView();
            var viewId = 0;

            await view.Dispatcher.RunAsync(
                CoreDispatcherPriority.Normal,
                () =>
                {
                    viewId = ApplicationView.GetForCurrentView().Id;
                    Window w = Window.Current;
                    Frame f = new Frame();
                    w.Content = f;
                    f.Navigate(typeof(SecondWindow));
                });

            await ApplicationViewSwitcher.TryShowAsStandaloneAsync(
                viewId,
                ViewSizePreference.UseHalf,
                ApplicationView.GetForCurrentView().Id,
                ViewSizePreference.UseHalf);

            this.secondWindowShown = true;
        }
    }
    bool secondWindowShown;
}
The SecondWindow page itself is just a bit of XAML to set up a view model and display some information from it;
<Page x: <Page.DataContext> <local:WindowDetailsViewModel /> </Page.DataContext> <Grid Background="Gray"> "/> </Grid.RowDefinitions> <TextBlock Text="Resolution Scale" /> <TextBlock Grid. <TextBlock Grid. <TextBlock Grid. <TextBlock Grid. <TextBlock Grid. <TextBlock Grid. <TextBlock Grid. <TextBlock Grid. <TextBlock Grid. <TextBlock Grid. <TextBlock Grid. </Grid> </Page>
and then there’s just a simple view model class which that second window instantiates declaratively (i.e. default constructor time) via its XAML as you can see up there in the Page.DataContext setter on line 8 above. That view model class is largely just surfacing up the DisplayInformation class from WinRT;
class WindowDetailsViewModel : ViewModelBase
{
    public WindowDetailsViewModel()
    {
        DisplayInformation di = DisplayInformation.GetForCurrentView();
        di.DpiChanged += OnDpiChanged;
        di.OrientationChanged += OnOrientationChanged;
    }
    public DisplayInformation DisplayInformation
    {
        get
        {
            return (DisplayInformation.GetForCurrentView());
        }
    }
    void OnOrientationChanged(DisplayInformation sender, object args)
    {
        this.ChangeDisplayInformation();
    }
    void OnDpiChanged(DisplayInformation sender, object args)
    {
        this.ChangeDisplayInformation();
    }
    void ChangeDisplayInformation()
    {
        base.OnPropertyChanged("DisplayInformation");
    }
}
and that ViewModelBase class just implements INotifyPropertyChanged so I won't include that here.
I thought I’d try out that code across the system that I have set up here. I have a Dell XPS 12 which has a 12” monitor (I think – the name would suggest so but I haven’t measured it) running at 1920×1080 and, over DisplayPort, I have a Dell U2412M monitor which is 24” and is running at 1920×1200.
You’d have to assume that there were differences in pixel density across those 2 screens.
So, I ran up the app and pressed the button to create the second window side-by-side with the first window on my 12” laptop;
So, that looks like I have a 175ish DPI screen and it’s in Landscape (it is) and it would usually be landscape (it would, it’s a laptop) and Windows would apply 140% scaling to this UI. Fair enough.
I dragged this window to my second monitor, crossed my fingers and waited to see if things updated. They did;
and so on this monitor, the DPI is closer to the regular 96 and Windows would apply 100% scaling to this UI.
Clearly, at an API level then Windows “knows” that my window has moved monitors and it fires an event such that my app code can pick up that event and do “something”. What I wondered next was whether Windows could do anything to automatically resolve resources based on this knowledge.
Experimenting with Scaling
I’ll admit that I find I have to think hard about scaling. I find it easier to think in terms of logical pixels being 1/96th of an inch and then thinking about designing in terms of logical pixels.
For instance, if I temporarily change my secondary page to be a Grid which is defined as 96×96;
<Grid Width="96" Height="96" Background="Red">
Then when I run that on my 12” screen with high DPI, it presents a rectangle that’s around 1 inch square and if I drag the window to my 24” screen with lower DPI it presents a rectangle that’s around 1 inch square and that, I guess, is the point
Similarly, if I use a piece of text;
<TextBlock FontSize="96" Text="Hello World" Foreground="White"/>
then that looks pretty much the same size on my 2 monitors as I drag the window backwards and forwards. The problem is that some resources (particularly images) just aren’t going to play ball nicely and that’s why Windows has its scaling system to provide alternate images based on scale factors.
I went out and found an image of the Surface 2 on the web that was 485px by 323px (not a great size – it should really be a multiple of 5 pixels) and dropped it into my UI with a simple image tag;
<Image Width="485" Height="323" Stretch="Fill" Source="Assets/surface2.jpg" />
but I also found an image that was twice the size in terms of pixels and I attributed the 2 images so that I could tell them apart;
and added them to my project using the regular naming convention;
Now, I may have gone a little over the top in terms of the size of my 140% image but I ran up the app and, sure enough, on my 24” monitor the second window displayed;
and on my 12” monitor the second window displayed;
and so the resource-resolution system is clearly listening to the right events and is refreshing (in this case) my image for me as it realises that the surrounding graphics capabilities have changed as the window moved from one monitor to another – very neat.
Experimenting with Data-Binding
That made me wonder. What if the picture to display was actually hidden away inside of a value that was data-bound. As a simple example, what if I had this “ViewModel”;
class ViewModel
{
    public string ImageUrl
    {
        get
        {
            return ("Assets/surface2.png");
        }
    }
}
and I change my UI such that it sets up one of those objects as its DataContext and then binds to it;
<Page x: <Page.DataContext> <local:ViewModel /> </Page.DataContext> <Grid> <Image Width="485" Height="323" Stretch="Fill" Source="{Binding ImageUrl}" />
then does that still work? Does my image still resolve to a different image as I move the window from my 12” screen to my 24” screen?
Not surprisingly, yes, it does
Local/Web Images
Naturally, if you’re loading up images that your app has stored in local storage (i.e. local/roaming/temp storage hanging off ApplicationData.Current) then the system isn’t magically going to swap those images around for you as and when the Window transitions from one monitor to another and the DPI changes.
As an aside, one of the things that I’m not 100% sure on is whether there is a way to use the resource system to resolve images in data folders using the same naming system that it uses when loading up packaged resources. For example, if I have a path like ms-appdata:///local/surface2.jpg then can I get the system to resolve that for me to ms-appdata:///local/surface2.scale-100.jpg and follow the same naming convention as for resources? I’m not sure whether I can but I’ll return to that if I plug that gap in my knowledge.
Equally, if you’ve loaded images from the web then the system is not usually going to be able to magically pick up display changes and go back to the web for “the right image”.
I think in both of those cases, you’re going to have to do some work yourself and make use of the DisplayInformation.ResolutionScale property to figure out what images to load or, if you’re in the JavaScript world, you can use something like the CSS media-resolution feature in order to specify different background-image values for various media queries and hit the problem that way.
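In the CSS case, that might look something like the following sketch (the file names and the 134dpi breakpoint are illustrative choices of mine, not official Windows thresholds):

```css
/* Illustrative sketch: swap in a denser bitmap when the effective
   resolution crosses a breakpoint, mirroring the .scale-100/.scale-140
   packaged-resource naming convention. */
.product-shot {
    background-image: url("images/surface2.scale-100.jpg");
}

@media (min-resolution: 134dpi) {
    .product-shot {
        background-image: url("images/surface2.scale-140.jpg");
    }
}
```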
App Resource Packages and Bundles
This is one I haven’t tried yet. I wrote a little blog post about Windows 8.1–Resource Packages and App Bundles where I took a basic look at how for Windows 8.1 it’s possible to build resource packages – e.g. it’s possible to build a package of resources that only apply to the 140% scale so that all those 140% scale images are in that package and that package would not need downloading onto a system running at 100% scale.
So…what happens in the circumstance where a user has a system with a DPI that would resolve to 100% scale but then plugs in a monitor that resolves to 140% or 180% scale and then drags a window from their 100% scale monitor to their 140% scale monitor?
Does the system go back to the Store and magically get the additional resources required and, if so, how does that affect the running app package?
As far as I know, the answer is “yes, the system does do that” but given that the Windows 8.1 Store only opened yesterday I’ve not tried it yet but I’ll update the post as/when I have any more detail.
Update: I checked on what happens in the above situation where the app has downloaded its “100%” resources and then one of its windows is dragged to a monitor which would require “140%” resources. In this case, this will prompt the system to realise that it needs to get hold of those “140%” resources the next time it services that app. That would mean that the user might get a slightly different experience when they first run the app and it is running with “100%” resources versus a later run of the app where it has been serviced and has downloaded those “140%” resources and so can now use them.
https://mtaulty.com/2013/10/18/m_14980/
|
strspn man page
Prolog
This manual page is part of the POSIX Programmer's Manual. The Linux implementation of this interface may differ (consult the corresponding Linux manual page for details of Linux behavior), or the interface may not be implemented on Linux.
strspn — get length of a substring
Synopsis
#include <string.h>

size_t strspn(const char *s1, const char *s2);

Description

The strspn() function shall compute the length (in bytes) of the maximum initial segment of the string pointed to by s1 which consists entirely of bytes from the string pointed to by s2.
Return Value
The strspn() function shall return the computed length; no return value is reserved to indicate an error.
Errors
No errors are defined.
The following sections are informative.
Examples
None.
Application Usage
None.
Rationale
None.
Future Directions
None.
See Also
strceconv(3p), strcspn(3p), string.h(0p).
https://www.mankier.com/3p/strspn
|
A Safer STL Container Class
Posted by Alex Kravchenko on November 1st, 2000
The magic-number test at the heart of the class looks like this:

return (*((unsigned int*)((int)pBlock + 4)) == 0xfeedface);

I also added some functions to make it MFC-like: GetFirst(), GetNext(), RemoveLastRead(), etc. In the sample code I added 10 elements into the map. Then I deleted elements 5 and 7 outside the map. Then I looped through displaying the content. Notice that after going through all the elements CTidyMap reports 8 to be the actual size.
Example Usage
CTidyMap<CElement, int> mymap;
CElement* pElement[10];

for(int i = 0; i < 10; i++)
{
    pElement[i] = new CElement(i);
    mymap.Add(pElement[i], i);
}

cout << "Number of elements in the map: " << mymap.GetCount() << endl;

delete pElement[5];
cout << "Deleted Element number 5" << endl;
delete pElement[7];
cout << "Deleted Element number 7" << endl;

cout << "Still does not know about elements being deleted, size: " << mymap.GetCount() << endl;

CElement* pExElem = mymap.GetFirst();
while(pExElem)
{
    cout << "Found Element number: " << pExElem->GetIndex() << endl;
    pExElem = mymap.GetNext();
}

cout << "Now the number of elements in the map: " << mymap.GetCount() << endl;
Output
Number of elements in the map: 10
Deleted Element number 5
Deleted Element number 7
Still does not know about elements being deleted, size: 10
Found Element number: 0
Found Element number: 1
Found Element number: 2
Found Element number: 3
Found Element number: 4
Found Element number: 6
Found Element number: 8
Found Element number: 9
Now the number of elements in the map: 8
Bonus

I also made the feedface check optional, so if you initialize it like this:

CTidyMap<CElement, int> mymap = false;

you can watch the program crash.
Use auto_ptr
Posted by Legacy on 11/26/2002 12:00am
Originally posted by: Gandalf Hudlow
map <int, auto_ptr<cMyClass>> m_mapData;
m_mapData.insert(1, auto_ptr<cMyClass>(new cMyClass));
auto pointers are great!
So, you think that your new bad code will prevent you from coding - bad!?
Posted by Legacy on 08/17/2002 12:00am
Originally posted by: JK
This is a hack - and a hack is NEVER a good solution! Let's say I use a list of pointers - should I wrap stl's list class also? What if I use a plain array of pointers - should I create an array class just for being SURE that it's OK for me to delete the object and forget about the pointer stored in that array?
It would be a lot cleaner if you would just take care of deleting the object and removing the pointer from map. And I'm sure you can find a much better way of removing the pointer from the map automatically when deleting the object.
Anyway, you still have useless allocated memory for the map nodes that hold invalid pointers. What if I have 100000 invalid pointers STORED in a map - isn't that a waste of memory - and searching performance?
Btw. What if I have 10 threads and each one of them wants to iterate through the map at some point (just theory)?
Just code your programs bug free. Don't try to hack out your own bugs - DE-bug them.
An addition
Posted by Legacy on 11/15/2001 12:00am
Originally posted by: Yasuhiko Yoshimura
Today I implemented this idea in my ATL test project and found something that troubled me.

It concerns the 'virtual function table' pointer of the class object. I made the destructor of my class (the one corresponding to 'class CElement') virtual. That put the vftable pointer at the front of my class object, so the magic number 0xfeedface was stored one pointer further along, which caused 'IsBlockValid(...)' to always return false, as in the following dump image.

02874D28: 3C 6C 6D 02 CE FA ED FE
          vftable     magic_number
Thanks.
What does 'feedface' mean?
Posted by Legacy on 11/12/2001 12:00am
Originally posted by: Yasuhiko Yoshimura
Hi Alex,

Thanks for your good work and explanation.

I would like to know what 'feedface' suggests. Such as 'a feeder's face' or 'a face fed and satisfied'?

I think it is related to VC++'s debug fill values in memory. The following is written in '\VC98\crt\src\dbheap.c':
------------------------------------------------------------
/* fill no-man's land with this */
static unsigned char _bNoMansLandFill = 0xFD;
/* fill free objects with this */
static unsigned char _bDeadLandFill = 0xDD;
/* fill new objects with this */
static unsigned char _bCleanLandFill = 0xCD;
------------------------------------------------------------
Thanks.
http://www.codeguru.com/cpp/cpp/cpp_mfc/templates/article.php/c787/A-Safer-STL-Container-Class.htm
|
13 June 2013 22:35 [Source: ICIS news]
HOUSTON (ICIS)--An announcement of progress on a long-planned Huntsman-BASF joint venture methylene diphenyl diisocyanate (MDI) expansion project near
In 2011, US-based Huntsman and Germany-based BASF announced plans to build a 240,000 tonne/year expansion to Huntsman’s current 240,000 tonne/year MDI plant in Caojing, but the project has been awaiting approval from Chinese authorities.
But progress could be forthcoming, said J Kimo Esplin, Huntsman’s chief financial officer and executive vice president. He made his comments at Deutsche Bank’s Global Industrials and Basic Materials Conference in
“I don't know that it's in the next couple of months, but I think, certainly, by the end of this year, you will see an announcement that we're moving forward on that, I would hope,” Esplin said.
Huntsman has said that the project would take about three years to complete from the time of approval.
MDI is a core component used in the production of polyurethane (PU) products.
PU is used extensively for cooling as well as heat insulation applications. PU can be used for keeping food and medications cold during production, distribution and
http://www.icis.com/Articles/2013/06/13/9678518/chinas-approval-on-huntsman-basf-mdi-project-may-come-by-years-end.html
|
Re: Registry Keys to set Explorer settings
From: GiJO (newsgroups_at_THEOBVIOUSmutestyle.com)
Date: 07/22/04
- Next message: Christoph Graber: "network drive and printer icon"
- Previous message: Gany_md: "Re: Customize "Files stored on this computer""
- In reply to: David Candy: "Re: Registry Keys to set Explorer settings"
- Messages sorted by: [ date ] [ thread ]
Date: Thu, 22 Jul 2004 07:11:51 GMT
That's OK... I'll just use "%systemdrive%\documents and settings\default user".

So all the reg tweaks which are for HKEY_CURRENT_USER I should put in the file to be imported into the default user's ntuser.dat, and all the HKEY_LOCAL_MACHINE ones can stay the same?

This is what I have: a file called hkutwks.reg which currently holds all the HKEY_CURRENT_USER keys. I'll go through that and replace HKEY_CURRENT_USER with HKEY_USERS\NTUSER. I have another file called hklmtwks.reg which has all the current HKEY_LOCAL_MACHINE keys in it, which I will leave as they are.
I've got a batch file which has this in it.
reg load hku\ntuser "%systemdrive%\documents and settings\default user\ntuser.dat"
reg import %systemdrive%\install\hkutwks.reg
reg unload hku\ntuser
REGEDIT /S %systemdrive%\install\hklmtwks.reg
This will load the default user's ntuser.dat, import the HKEY_USERS tweaks
into it, unload the default user's ntuser.dat and then load the
HKEY_LOCAL_MACHINE tweaks into the normal registry.
I tried having the HKLM and the HKU tweaks all in the one file and doing the reg import command, but it doesn't like importing the HKLM keys; that's why I had to have a separate file for them and use the regedit /s command to import them.
Thanks for your help... now to test it all properly!
--
Thanks
Andrew

David Candy wrote:
> There is no variable available from the command line for D&S or
> D&S\Default User.
>
> "GiJO" <newsgroups@THEOBVIOUSmutestyle.com> wrote in message
> news:hIELc.10710$K53.656@news-server.bigpond.net.au...
>> ok cool... but in the case of setting it to the default user's hive,
>> how do you do that automatically? is there such a thing as a
>> %DefaultUserProfile% that will allow me to reference the Default
>> User Profile in a batch file like you can with %AllUsersProfile% so
>> i can use the REG command like this?
>>
>> REG LOAD %DefaultUserProfile%\NTUser.dat
>> REG ADD whatever key
>> REG ADD another key
>>
>> Or can i have it load the NTUser.dat file and then import a registry
>> file that i have already to merge like this?
>> REG LOAD %DefaultUserProfile%\NTUser.dat
>> REG IMPORT %systemdrive%\regtweaks.reg
>>
>> --
>> Thanks again
>> Andrew
>>
>> David Candy wrote:
>>> .default is the settings for when noone is logged in. You have to
>>> load the user's hive to make changes to another user (in this case
>>> Default User who is NOT .default). Doc & Settings\Default
>>> User\NTUSER.dat (see load hive in Regedit's help).
>>>
>>> Saved folder settings are stored in BagMRU. Defaults and
>>> network/removable drives are stored in Streams key (as everything
>>> was in earlier versions)
>>> HKEY_CURRENT_USER\Software\Microsoft\Windows\CurrentVersion\Explorer\Streams
>>> setting=
>>> is the global default
>>> HKEY_CURRENT_USER\Software\Microsoft\Windows\CurrentVersion\Explorer\Streams\Defaults
>>> LongNumber=
>>> Is the default for that object type.
>>>
>>> {F3364BA0-65B9-11CE-A9BA-00AA004AE837} is ordinary folders, and
>>> other numbers are what ever they are (My Comp, Control Panel, etc -
>>> note My Docs is an ordinary folder)
>>>
>>> The global and type defaults only exist if you do an apply to all.
>>> If you do an apply to all in My Comp then a default for My Comp
>>> objects (and the global default) is written.
>>>
>>> "GiJO" <newsgroups@THEOBVIOUSmutestyle.com> wrote in message
>>> news:s_vLc.10266$K53.1433@news-server.bigpond.net.au...
>>>> I'm trying to setup a registry file that i can put a whole lot of
>>>> tweaks in and run it to set a whole lot settings automatically and
>>>> also apply those settings to any future user accounts that are
>>>> created.
>>>>
>>>> I created them all in the HKEY_USERS\.DEFAULT hive, but they aren't
>>>> applying the the user that's logged in.
>>>>
>>>> Another thing is when i apply them to the HKEY_CURRENT_USER hive
>>>> most of them apply. The only things i can't get to happen are
>>>> displaying the Status Bar in Windows Explorer (IE and Notepad
>>>> work), and i also want to set the view to Details in windows
>>>> explorer for everyone but it's not happening.
>>>>
>>>> Any idea's what i'm doing wrong? I thought for the folder views i
>>>> had to import the Software\Microsoft\Windows\ShellNoRoam\BagMRU and
>>>> the Software\Microsoft\Windows\ShellNoRoam\Bags key from a computer
>>>> that has the views i want (deleted all the stuff in those keys then
>>>> reset the folder views to the way i want then exported those keys).
>>>> Doesn't work though.
>>>>
>>>> For the Status Bar i have this "Show_StatusBar"="yes" and
>>>> "Show_ToolBar"="yes" set under the \Software\Microsoft\Internet
>>>> Explorer\Main but that's Internet Explorer... are these settings
>>>> available for just Explorer or should this filter across to windows
>>>> explorer? It doesn't...
>>>>
>>>> Any help would be appreciated.
>>>>
>>>> --
>>>> Thanks
>>>> Andrew
http://www.tech-archive.net/Archive/WinXP/microsoft.public.windowsxp.customize/2004-07/1500.html
11-07-2017 08:37 AM
I currently have one namenode in a 'stopped' state due to a node failure. I am unable to access any data or services on the cluster, as this was the main namenode.
However, there is a second namenode that I am hoping can be used to recover. I have been working on the issue in this thread and currently I have all HDFS instances started except for the bad namenode. This seems to have improved the situation as far as node health status, but I still can't access any data.
Here is the relevant command and error:
ubuntu@ip-10-0-0-154:~/backup/data1$ hdfs dfs -ls hdfs://10.0.0.154:8020/
ls: Operation category READ is not supported in state standby
In the previous thread, I also pointed out that there was the option to enable automatic failover in CM. I am wondering if that is the best course of action right now. Any help is greatly appreciated.
11-07-2017 08:01 PM - edited 11-07-2017 08:03 PM
The issue might be related to the JIRA below, which was opened a long time back and is still in open status.
As an alternate way to connect to HDFS, go to hdfs-site.xml, get the value of dfs.nameservices, and try to connect to HDFS using the nameservice as follows; it may help you:
hdfs://<ClusterName>-ns/<hdfs_path>
Note: I didn't get a chance to explore this... also not sure how it will respond in old cdh version
11-07-2017 08:24 PM - edited 11-07-2017 08:28 PM
Thank you for your response.
I followed your advice but I am getting the error below. This is the same error as when I try a plain 'hdfs dfs -ls' command.
root@ip-10-0-0-154:/home/ubuntu/backup/data1# grep -B 1 -A 2 nameservices /var/run/cloudera-scm-agent/process/9908-hdfs-NAMENODE/hdfs-site.xml
<property>
  <name>dfs.nameservices</name>
  <value>nameservice1</value>
</property>

ubuntu@ip-10-0-0-154:~/backup/data1$ hdfs dfs -ls hdfs://nameservice1/
17/11/08 04:29:50 WARN retry.RetryInvocationHandler: Exception while invoking getFileInfo of class ClientNamenodeProtocolTranslatorPB after 1 fail over attempts. Trying to fail over after sleeping for 796ms.
Also, I should mention that when I go to CM, it shows that my one good namenode is in 'standby'. Would it help to try a command like this?
./hdfs haadmin -transitionToActive <nodename>
A second thing is that CM shows Automatic Failover is not enabled but there is a link to 'Enable' (see screenshot). Maybe this is another option to help the standby node get promoted to active?
11-07-2017 08:56 PM
I do not know how to check if the "Failover Controller daemon running on the remaining NameNode".
Can you please tell me how to check?
11-08-2017 08:51 AM
It appears I do not have any nodes with the Failover Controller role. The screenshot below shows the hdfs instances filtered by that role.
11-13-2017 11:27 AM
As noted in the previous reply, I did not have any nodes with the Failover Controller role. Importantly, I also had not enabled Automatic Failover despite running in an HA configuration.
I went ahead and added the Failover Controller role to both namenodes - the good one and the bad one.
After that, I attempted to enable Automatic Failover using the link shown in the screenshot from this post. To do that, however, I needed to first start ZooKeeper.
At that point, if I recall correctly, the other namenode was still not active, but I then restarted the entire cluster and the automatic failover kicked in, making the other namenode the active one and leaving the bad namenode in a stopped state.
https://community.cloudera.com/t5/Storage-Random-Access-HDFS/ls-Operation-category-READ-is-not-supported-in-state-standby/m-p/61775
systemd-nspawn
chroot
Before getting into the main subject, there are some points to understand.
In Linux, a process has a **root directory** as one of its attributes. It is used by the process to interpret absolute paths starting with `/`, and is usually set to the system root directory (`/`). The `chroot()` system call changes the root directory to a specified path. This makes it possible to limit the files that can be accessed.
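The root-directory attribute can be observed directly through /proc (a quick sketch; this is Linux-specific):

```shell
# Each process's root directory is exposed as a symlink in /proc.
# For an ordinary, non-chrooted process it points at the system root:
readlink /proc/self/root
# -> /

# After a chroot() call, the same link points at the new root instead,
# so absolute paths are resolved only below that directory.
```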
systemd-nspawn
systemd-nspawn can be interpreted as an enhanced version of `chroot`. The difference from `chroot` is that `systemd-nspawn` completely virtualizes and isolates the file system hierarchy, process tree, various IPCs, host names, domains, and so on. Because it isolates namespaces, it can be treated as a lightweight container.

Also, one of the motivations for using `systemd-nspawn` is that on Linux distributions that use `systemd` it can be used without further setup. You can do anything you could do in the directory tree of the recreated distribution.

Docker builds a container per application, while `systemd-nspawn` sets up one full Linux system inside the container. You can build an environment close to VM-style virtualization that lets you start multiple applications in a container in the same way as in a normal Linux environment.
I use Ubuntu, my usual environment, as the host, and create a container environment there using `systemd-nspawn`. The host kernel:
Linux karkador 5.0.0-37-generic #40~18.04.1-Ubuntu SMP Thu Nov 14 12:06:39 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux
Use `debootstrap` to prepare the necessary files for the container. `debootstrap` is a tool for constructing a minimal directory tree for Debian, the distribution on which Ubuntu is based. This time I will prepare one for Ubuntu.

Prepare a directory for the container named `/var/lib/machines/ubuntu` and install the files there:
# debootstrap --arch=$(dpkg --print-architecture) $(lsb_release -cs) /var/lib/machines/ubuntu
Now the files needed for the container are created. Enter it with:
# systemd-nspawn -D /path/to/container
This puts you in a root shell of the prepared environment. For the time being, just set the password with `passwd` and exit. To get out of the container, press Ctrl+] three times in quick succession.
# systemd-nspawn -b -D /var/lib/machines/ubuntu
Boot the container; that is, start `init` inside it. After the familiar Ubuntu boot screen, a login shell is launched. If you get this far, you can do most of the usual things. The way to exit is the same.
# systemctl start systemd-nspawn@ubuntu.service
This starts the container as a service. Operations such as automatic startup are performed with `systemctl enable|disable` as usual.
The `systemd-nspawn` command has a number of options. For example, the `-D` option used above makes the specified directory the root of the container's file system. In addition to `systemd-nspawn` itself, there are commands that are convenient for operating containers.
--syscall-filter
You can change the rules for which system calls may be issued inside the container. For example, to disallow `read`:

--syscall-filter="~read"

(Note that the container cannot execute much of anything unless you allow `read`.) You can check the details of the system call groups you can specify with `systemd-analyze syscall-filter` (specification in group units, etc.).
machinectl
Containers started with

# systemctl start systemd-nspawn@<hostname>.service

can be operated with the `machinectl` command. Show the list of running containers with `machinectl list`, and use `machinectl list-images` to display a list of the container images located in `/var/lib/machines`. You can also use `machinectl enable|disable` for automatic startup. `machinectl` is a command that combines the container-related `systemctl` operations with `systemd-nspawn`, and most operations can be done with it.
The default is to share the network with the host. Setting up `systemd-networkd` in the container or on the host enables a wide range of network configurations such as virtual Ethernet links, bridges, and port forwarding.
There are many differences from Docker, and I think it's interesting as a completely different container technology. I would like to operate it a little more and add information such as networks.
https://memotut.com/systemd-nspawn-beginning-3146b/
#include <glib.h>

GError;

GError*  g_error_new                (GQuark domain, gint code, const gchar *format, ...);
GError*  g_error_new_literal        (GQuark domain, gint code, const gchar *message);
void     g_error_free               (GError *error);
GError*  g_error_copy               (const GError *error);
gboolean g_error_matches            (const GError *error, GQuark domain, gint code);
void     g_set_error                (GError **err, GQuark domain, gint code, const gchar *format, ...);
void     g_set_error_literal        (GError **err, GQuark domain, gint code, const gchar *message);
void     g_propagate_error          (GError **dest, GError *src);
void     g_clear_error              (GError **err);
void     g_prefix_error             (GError **err, const gchar *format, ...);
void     g_propagate_prefixed_error (GError **dest, GError *src, const gchar *format, ...);
GLib provides a standard method of reporting errors from a called function to the calling code. (This is the same problem solved by exceptions in other languages.) It's important to understand that this method is both a data type (the GError object) and a set of rules. If you use GError incorrectly, then your code will not properly interoperate with other code that uses GError, and users of your API will probably get confused.
First and foremost: GError should only be used to report recoverable runtime errors, never to report programming errors. If the programmer has screwed up, then you should use g_warning(), g_return_if_fail(), g_assert(), g_error(), or some similar facility. (Incidentally, remember that the g_error() function should only be used for programming errors; it should not be used to print any error reportable via GError.)
Examples of recoverable runtime errors are "file not found" or "failed to parse input". Examples of programming errors are "NULL passed to strcmp()" or "attempted to free the same pointer twice". These two kinds of errors are fundamentally different: runtime errors should be handled or reported to the user, programming errors should be eliminated by fixing the bug in the program. This is why most functions in GLib and GTK+ do not use the GError facility.
Functions that can fail take a return location for a GError as their last argument. For example:
gboolean g_file_get_contents (const gchar *filename, gchar **contents, gsize *length, GError **error);
If you pass a non-NULL value for the error argument, it should point to a location where an error can be placed. For example:
gchar *contents; GError *err = NULL; g_file_get_contents ("foo.txt", &contents, NULL, &err); g_assert ((contents == NULL && err != NULL) || (contents != NULL && err == NULL)); if (err != NULL) { /* Report error to user, and free error */ g_assert (contents == NULL); fprintf (stderr, "Unable to read file: %s\n", err->message); g_error_free (err); } else { /* Use file contents */ g_assert (contents != NULL); }
Note that err != NULL in this example is a reliable indicator of whether g_file_get_contents() failed. Additionally, g_file_get_contents() returns a boolean which indicates whether it was successful.
Because g_file_get_contents() returns FALSE on failure, if you are only interested in whether it failed and don't need to display an error message, you can pass NULL for the error argument:
if (g_file_get_contents ("foo.txt", &contents, NULL, NULL)) /* ignore errors */
  /* no error occurred */ ;
else
  /* error */ ;
The GError object contains three fields: domain indicates the module the error-reporting function is located in, code indicates the specific error that occurred, and message is a user-readable error message with as many details as possible. Several functions are provided to deal with an error received from a called function: g_error_matches() returns TRUE if the error matches a given domain and code, g_propagate_error() copies an error into an error location (so the calling function will receive it), and g_clear_error() clears an error location by freeing the error and resetting the location to NULL. To display an error to the user, simply display error->message, perhaps along with additional context known only to the calling function (the file being opened, or whatever -- though in the g_file_get_contents() case, error->message already contains a filename).
When implementing a function that can report errors, the basic tool is g_set_error(). Typically, if a fatal error occurs you want to g_set_error(), then return immediately. g_set_error() does nothing if the error location passed to it is NULL. Here's an example:
gint
foo_open_file (GError **error)
{
  gint fd;

  fd = open ("file.txt", O_RDONLY);
  if (fd < 0)
    {
      g_set_error (error,
                   FOO_ERROR,                 /* error domain */
                   FOO_ERROR_BLAH,            /* error code */
                   "Failed to open file: %s", /* error message format string */
                   g_strerror (errno));
      return -1;
    }
  else
    return fd;
}
Things are somewhat more complicated if you yourself call another function that can report a GError. If the sub-function indicates fatal errors in some way other than reporting a GError, such as by returning TRUE on success, you can simply do the following:
gboolean
my_function_that_can_fail (GError **err)
{
  g_return_val_if_fail (err == NULL || *err == NULL, FALSE);

  if (!sub_function_that_can_fail (err))
    {
      /* assert that error was set by the sub-function */
      g_assert (err == NULL || *err != NULL);
      return FALSE;
    }

  /* otherwise continue, no error occurred */
  g_assert (err == NULL || *err == NULL);
}
If the sub-function does not indicate errors other than by reporting a GError, you need to create a temporary GError since the passed-in one may be NULL. g_propagate_error() is intended for use in this case.
gboolean
my_function_that_can_fail (GError **err)
{
  GError *tmp_error;

  g_return_val_if_fail (err == NULL || *err == NULL, FALSE);

  tmp_error = NULL;
  sub_function_that_can_fail (&tmp_error);

  if (tmp_error != NULL)
    {
      /* store tmp_error in err, if err != NULL,
       * otherwise call g_error_free() on tmp_error */
      g_propagate_error (err, tmp_error);
      return FALSE;
    }

  /* otherwise continue, no error occurred */
}
Error pileups are always a bug. For example, this code is incorrect:
gboolean
my_function_that_can_fail (GError **err)
{
  GError *tmp_error;

  g_return_val_if_fail (err == NULL || *err == NULL, FALSE);

  tmp_error = NULL;
  sub_function_that_can_fail (&tmp_error);
  other_function_that_can_fail (&tmp_error);

  if (tmp_error != NULL)
    {
      g_propagate_error (err, tmp_error);
      return FALSE;
    }
}
tmp_error should be checked immediately after sub_function_that_can_fail(), and either cleared or propagated upward. The rule is: after each error, you must either handle the error, or return it to the calling function. Note that passing NULL for the error location is the equivalent of handling an error by always doing nothing about it. So the following code is fine, assuming errors in sub_function_that_can_fail() are not fatal to my_function_that_can_fail():
gboolean
my_function_that_can_fail (GError **err)
{
  GError *tmp_error;

  g_return_val_if_fail (err == NULL || *err == NULL, FALSE);

  sub_function_that_can_fail (NULL); /* ignore errors */

  tmp_error = NULL;
  other_function_that_can_fail (&tmp_error);

  if (tmp_error != NULL)
    {
      g_propagate_error (err, tmp_error);
      return FALSE;
    }
}
Note that passing NULL for the error location ignores errors; it's equivalent to try { sub_function_that_can_fail(); } catch (...) {} in C++. It does not mean to leave errors unhandled; it means to handle them by doing nothing.
Error domains and codes are conventionally named as follows. The error domain is called <NAMESPACE>_<MODULE>_ERROR, for example G_SPAWN_ERROR or G_THREAD_ERROR:
#define G_SPAWN_ERROR g_spawn_error_quark ()

GQuark
g_spawn_error_quark (void)
{
  return g_quark_from_static_string ("g-spawn-error-quark");
}
The error codes are in an enumeration called <Namespace><Module>Error; for example, GThreadError or GSpawnError. Members of the error code enumeration are called <NAMESPACE>_<MODULE>_ERROR_<CODE>, for example G_SPAWN_ERROR_FORK or G_THREAD_ERROR_AGAIN. If there's a "generic" or "unknown" error code for unrecoverable errors it doesn't make sense to distinguish with specific codes, it should be called <NAMESPACE>_<MODULE>_ERROR_FAILED, for example G_SPAWN_ERROR_FAILED or G_THREAD_ERROR_FAILED.
Summary of rules for use of GError
Do not report programming errors via GError.

The last argument of a function that returns an error should be a location where a GError can be placed (i.e. "GError** error"). If GError is used with varargs, the GError** should be the last argument before the "...".

The caller may pass NULL for the GError** if they are not interested in details of the exact error that occurred.

If NULL is passed for the GError** argument, then errors should not be returned to the caller, but your function should still abort and return if an error occurs. That is, control flow should not be affected by whether the caller wants to get a GError.

If a GError is reported, then your function by definition had a fatal failure and did not complete whatever it was supposed to do. If the failure was not fatal, then you handled it and you should not report it. If it was fatal, then you must report it and discontinue whatever you were doing immediately.

A GError* must be initialized to NULL before passing its address to a function that can report errors.

"Piling up" errors is always a bug. That is, if you assign a new GError to a GError* that is non-NULL, thus overwriting the previous error, it indicates that you should have aborted the operation instead of continuing. If you were able to continue, you should have cleared the previous error with g_clear_error(). g_set_error() will complain if you pile up errors.

By convention, if you return a boolean value indicating success then TRUE means success and FALSE means failure. If FALSE is returned, the error must be set to a non-NULL value.

A NULL return value is also frequently used to mean that an error occurred. You should make clear in your documentation whether NULL is a valid return value in non-error cases; if NULL is a valid value, then users must check whether an error was returned to see if the function succeeded.

When implementing a function that can report errors, you may want to add a check at the top of your function that the error return location is either NULL or contains a NULL error (e.g. g_return_if_fail (error == NULL || *error == NULL);).
typedef struct { GQuark domain; gint code; gchar *message; } GError;
The GError structure contains information about an error that has occurred.
GError* g_error_new (GQuark domain, gint code, const gchar *format, ...);
Creates a new GError with the given domain and code, and a message formatted with format.
GError* g_error_new_literal (GQuark domain, gint code, const gchar *message);
Creates a new GError; unlike g_error_new(), message is not a printf()-style format string. Use this function if message contains text you don't have control over, that could include printf() escape sequences.
gboolean g_error_matches (const GError *error, GQuark domain, gint code);
Returns TRUE if error matches domain and code, FALSE otherwise.
void g_set_error (GError **err, GQuark domain, gint code, const gchar *format, ...);
Does nothing if err is NULL; if err is non-NULL, then *err must be NULL. A new GError is created and assigned to *err.
void g_set_error_literal (GError **err, GQuark domain, gint code, const gchar *message);
Does nothing if err is NULL; if err is non-NULL, then *err must be NULL. A new GError is created and assigned to *err. Unlike g_set_error(), message is not a printf()-style format string. Use this function if message contains text you don't have control over, that could include printf() escape sequences.

Since 2.18
void g_propagate_error (GError **dest, GError *src);
If dest is NULL, frees src; otherwise, moves src into *dest. The error variable dest points to must be NULL.
void g_clear_error (GError **err);
If err is NULL, does nothing. If err is non-NULL, calls g_error_free() on *err and sets *err to NULL.
void g_prefix_error (GError **err, const gchar *format, ...);
Formats a string according to format and prefixes it to an existing error message. If err is NULL (i.e. no error variable) then do nothing. If *err is NULL (i.e. an error variable is present but there is no error condition) then also do nothing. Whether or not it makes sense to take advantage of this feature is up to you.

Since 2.16
http://maemo.org/api_refs/5.0/5.0-final/glib/glib-Error-Reporting.html
tidal-vis installation tutorial
There’s a hidden gem in the TidalCycles git repository called
tidal-vis. It
allows you to use TidalCycles pattern syntax (and Haskell functions) to create
visual color patterns:
I have no idea what code I used to generate the image above, but I assure you that it was really easy, given my existing knowledge of TidalCycles and Haskell.
BUT, installing tidal-vis takes a bit of effort. So here's how you do it.
DISCLAIMER: I have only gotten this install to work on Linux Mint 18/Cinnamon. I imagine the install is similar on a Mac. I tried it on Windows first, but my threshold for pain only goes so far. Good luck, Windows users.
1. Install Dependencies
Open a terminal and run each of these commands (assuming apt-get on debian Linux):
sudo apt-get install git haskell-platform libghc-gtk-dev
cabal update
cabal install cabal-install
cabal install gtk2hs-buildtools
2. Install TidalCycles
You can skip this step if you already have TidalCycles installed.
cabal install tidal
3. Clone the TidalCycles repository.
There currently is some ambiguity around whether tidal-vis is correctly published to Hackage to be installed from cabal. My approach is to just get the source code and install tidal-vis from the source code later.
git clone
NOTE: Remember where you cloned (e.g. ~/tidal).
4. Install tidal-vis
Did you remember where you cloned? I'll assume ~/tidal:

cd ~/tidal/vis
cabal install
Bam. But you’re not ready yet…
5. Install Atom and the TidalCycles package
If you’re already using Atom and TidalCycles, then you’ve already done this.
- go to
- download and install Atom
- In Atom, install the TidalCycles package
- restart Atom
The instructions are also given at tidalcycles.org/getting_started.html
6. Evil Laugh
If you got this far without any problems, congratulations. Cackle deeply.
Let’s Make Patterns
Open Atom. Create and save a file with a .tidal extension. If you ignore this step, TidalCycles will not work.
Enter the following code in the file:
import Sound.Tidal.Vis
import qualified Graphics.Rendering.Cairo as C
import Data.Colour
import Data.Colour.Names
import Data.Colour.SRGB

let draw pat = vLines (C.withSVGSurface) "test.svg" (600,200) pat 3 1
Evaluate each of those lines one by one (Shift+Enter in Atom).
Then type or paste the following code into the file:
draw $ superimpose (iter 8) $ every 2 (slow 3) $ every 3 (density 5) $ "[grey white black, lightgrey darkgrey]"
Eval the code. It will generate a file named “test.svg” somewhere on your system. In my case, it was put in my home directory. It will look like this:
Hell yes.
What’s next?
Play! Modify the draw code above and see what else you come up with.
tidal-vis isn’t documented very well yet. If you’d like to really dig in
to some of the color- and drawing-specific functions, you’ll have to kind of
find that information on your own. My best recommendation is to look at
the
example.hs file located in the TidalCycles repository:
github.com/tidalcycles/Tidal/blob/master/vis/examples/example.hs.
Of course if you’d like to help contribute some documentation, that would be amazing and the TidalCycles community would love you.
http://blog.kindohm.com/2016/09/02/tidal-vis
27 April 2012 08:25 [Source: ICIS news]
SINGAPORE (ICIS)--Yara International’s first-quarter 2012 net profit rose by 4.5% year on year to Norwegian kroner (NKr) 3.02bn ($527m, €401m) as sales volumes “returned to normal levels” and margins remained healthy, the fertilizer major said on Friday.
The firm’s revenue rose by 5.4% year on year to NKr20.9bn in the first quarter of this year, the company said.
Its earnings before interest, tax, depreciation and amortisation (EBITDA) – excluding special items – slipped by 7.6% to NKr3.94bn in the first three months of 2012 due to lower nitrate premiums and higher energy and raw material cost.
“Yara reports strong first-quarter results, as margins remained healthy and European deliveries picked up,” said Jorgen Ole Haslestad, president and CEO of Yara.
“As expected, northern hemisphere fertilizer demand is strengthening following a slow first half of the buying season. This recovery is needed to avoid a further decline in global grain stocks,” Ole Haslestad added.
In its outlook, Yara said that its ammonia and urea plant at Belle Plaine in
($1 = NKr5
http://www.icis.com/Articles/2012/04/27/9554250/norways-yara-q1-net-profit-rises-4.5-on-healthy-margins.html
[open] Help is needed on the inline script for ISI
edited October 2015 in OpenSesame
Hi, I am using the following script to present picture stimuli with different ISIs, but it does not work. Can you please help with this issue? I am supposed to have ISIs = 1000, 1500, 2000, and 2500.
The script is as follows:
import random minimum = 1000 maximum = 2500
ISI_options = [1000,1500,2000,2500]
ISI = random.choice(ISI_options)
exp.set("ISI", ISI)
Thanks in advance,
Masoud
Hi Masoud,
This script should not give any problems. Do you get an error window? What does it say?
And what are you trying to do? Where do you use ISI?
Cheers,
Josh
I have this inline script, then a sketchpad with the stimuli, and a logger. This runs well when I add a # before minimum = 1000 maximum = 2500, but there is no difference in the ISIs as far as I can see in the presentation of the trials.
I need to know what to add to the Duration section in the Sketchpad. Shall I add a second sketchpad as a blank screen?
When I have the # removed before minimum = 1000 maximum = 2500, the error window says:
Failed to compile inline script, code import random minimum=1000 maximum=2500
Masoud
Hi Masoud, make sure that all your commands are on separate lines when you use code. That means:
instead of:
Also, I don't think you're using the variables minimum and maximum, so why not delete them entirely?
Another reason your problem arose is that when you place a '#' before that line, the whole line is skipped. Hence you haven't imported random, and therefore it can't pick a random choice from ISI_options. This is what you want:
In order to have ISI function as the duration of your sketchpad, insert [ISI] in the duration box (so with '[' and ']'!). Make sure that the sketchpad is placed after the inline_script (because of course you have to create the variable before you use it). Let me know if there are still problems!
Cheers,
Josh
Hi Masoud, good to hear that it works well. Everything that is created by the keyboard_response item should automatically be logged by the logger. Did you place the logger in the sequence (for example, right below the keyboard_response item)?
Cheers,
Josh

... not equal to the distractors (i.e., yellow circle), then it is hard to use the inline_script. Instead I added a column called presentation_time in the loop along with all the possible trials with the necessary cycles (repetitions). Now the problem is that the logger cannot save the RTs, maybe because the Timeout is set to 0. The order in the Loop is Stim (fixed 100 ms), a sketchpad called ISI with Duration=[presentation_time], a keyboard response, and a logger. Can you please help to answer this request too?
Masoud
Hi Masoud,
This is actually quite a clever solution, you came up with.
However, the problem with the logger is quite weird. Based on your description, it should work. How does the output look? Are there no values at all, or missing data points? Are other variables saved properly? Could you maybe share the file with us?
Thanks,
Eduard
Thanks Eduard
Could you kindly see the file in the following link?
Masoud
Alright, I am not entirely sure whether this will solve everything, but what you could do is remove the ISI sketchpad and instead set the duration of the keyboard_response to the value of [presentation_time]. Also, you might have to capitalize the 'none's and decapitalize the 'space's in your loop table. Try it out and let us know whether this does the trick. Btw., on the OpenSesame documentation website there are multiple example experiments. A few of them are quite similar to yours. If you browse through them, you might find something that will help you with your experiment.
Edu

... with this issue.
Masoud
What do you mean by that? The duration of the stimuli should not be changed. They will still be shown for 100 ms. Removing the ISI sketchpad and setting the duration of the keyboard_response to the desired ISI should also not change the appearance of the sequence, because in both cases the display stays empty for the same duration. Could you clarify what exactly the problem is now?
Thanks,
Edu

... the mentioned ISI in the loop for the keyboard_response, i.e., 1000, 1500, 2000 or 2500.
Is there any method to have the stimulus presented for exactly 100 ms and then a blank screen with a duration varying among 1000, 1500, 2000 or 2500 ms? Of course, adding a sketchpad with the mentioned ISI works well, but the RTs are not logged after two consecutive sketchpads.
Ah, I see. Sorry for the misunderstanding. Well, in this case, you can leave the blank sketchpad but set its duration to 100 ms or some arbitrary number smaller than 1000 ms. Then in the duration of the keyboard_response you can set the time to the respective ISI minus 100. Alternatively, you can set those new values right away in the loop_table. So, instead of [1000,1500,2000,2500], you'll have [900,1400,1900,2400]. Does this make sense?
Best,
Eduard
Hello,
I am trying to create an experiment where I have to vary the duration of multiple sketchpads which are being presented in a sequence under a loop. I used the the solution above and it worked. However, I do not know how to prevent a duration value used in one sketchpad from being repeated in the next sketchpad. Please help! :(
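For the last question, one way to keep a duration from repeating across the sketchpads of a trial is to draw without replacement instead of calling random.choice repeatedly. A minimal sketch in plain Python (in OpenSesame you would assign the drawn values to experiment variables; the function name here is hypothetical):

```python
import random

ISI_options = [1000, 1500, 2000, 2500]

def draw_trial_durations(n, options=ISI_options):
    """Pick n durations for one trial with no repeats.

    random.sample draws without replacement, so two sketchpads in the
    same trial can never get the same duration (requires n <= len(options)).
    """
    return random.sample(options, n)

# Example: three sketchpads in this trial, all with distinct durations.
durations = draw_trial_durations(3)
assert len(set(durations)) == 3
assert all(d in ISI_options for d in durations)
print(durations)
```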
Source: https://forum.cogsci.nl/discussion/comment/6030
Nicola Ken Barozzi wrote:
> This seems interesting, and brings up what XML namespaces can be used for.
XML namespaces are intended to disambiguate short local element
and attribute names. Any semantics associated with XML namespaces
beyond this have to be weighed carefully.
Let's take an example. There are two projects, Foo and Bar,
each providing a task; let's call them <foo> and <bar>
respectively. Both tasks take a <part> child, by coincidence.
Of course, because the projects act uncoordinated, the <part>
child element has a different semantic. In order to make this
clearer, let's say the Foo <part> takes an optional <mumble>
child while the Bar <part> takes three mandatory <xonx>
children.
Someone finds both the <foo> and the <bar> task exciting and
wants to use both in an Ant build file. No problem so far:
because of the way Ant elements get their child elements and
create associated Java objects, this should work.
Now said someone got a super-duper schema directed XML editor
and wants to use it for editing the build.xml file. He asks
all projects for a schema (DTD, XSD, RNG, whatever) for this
purpose and merges them in order to get a schema for his build
file. At this point the two <part> elements are likely to clash
(at least for DTDs, where element names are global). While
it is possible to merge the content models so that <part> now
takes either an optional <mumble> or three <xonx> children, this
would allow the user to put <xonx> children into the <part> of
the <foo> task. This is only a minor inconvenience for most
people, but an unthinkable horror for true purists.
Introduce namespaces: the Foo projects names its namespace
"" while the Bar project uses
"URI:bar" or whatever. For the XML parser it is only really
important that two different strings are used. You see, the
longer the strings, the less the chance they will clash, and
they probably won't clash if they start with the URLs of the
project's homepages (the intent behind the recommendation to
use URLs, because it's the closest thing to a global registry
you can get short of actually creating a global registry).
Anyway, because the expanded names of the <part> elements are
now "{}part" and "{URI:bar}part"
respectively they obviously no longer clash.
BTW you can write this as
<target name="foo">
<foo xmlns="">
<part>
<mumble/>
</part>
</foo>
<bar xmlns="URI:bar">
<part><xonx/><xonx/><xonx/></part>
</bar>
</target>
or as
<target name="foo"
xmlns:foo=""
xmlns:bar="URI:bar">
<foo:foo>
<foo:part>
<foo:mumble/>
</foo:part>
</foo:foo>
<bar:bar>
<bar:part><bar:xonx/><bar:xonx/><bar:xonx/></bar:part>
</bar:bar>
</target>
take your pick (if you think the "foo" and "bar" prefixes are too
long, use "a" and "b" instead, it doesn't matter).
So far, the namespace names should only be different for different
projects, so why is it dangerous to associate some semantic with it,
like letting them point to a jar file? The problem is again that
general purpose XML tools, like the above mentioned super-duper XML
editor may associate their own semantics with the namespace, like
how to auto-format certain elements. This information will be stored
in some config files, and it requires that the namespace name stays
the same until the semantics of the elements in it have changed
enough to warrant assigning a new namespace name.
Summary:
1. XML namespaces are there to facilitate aggregation of XML adhering
to schemas (content models) of different, uncoordinated origin.
2. XML Namespaces should be used in a way that no end user action
can result in two namespace names becoming unintentionally the
same.
3. XML Namespace names should preferably be assigned by the people
or project which specifies the semantics of the XML elements and
attributes therein.
4. XML Namespace names should be kept unchanged until a change of
the semantic of the elements warrants a change.
5. Good tools should not monopolize XML namespace syntax for their
own semantics.
The schema-directed editor mentioned above shows how tools
can take advantage of XML namespaces: use them as a key into a
DB/config to get their own associated semantics.
In particular for Ant/Antlib I can imagine that each library
provides a factory object associated to the XML namespace for
the library.
The FOP parser uses such a two stage lookup: first the namespace
is used to get a factory object from a hash table, then the factory
is used with the local XML element name to create a Java object
which is inserted into the FO tree. The hash table with the factories
is initialized at startup; the associations between namespace name
and factory class name are read from a Services file. Want to add
a FOP extension? Get the default Services file, add a line with
your namespace-to-factory-classname mapping, put it into the jar with
all the classes, and drop the jar as first into the classpath. If the
user wants to use multiple extensions, well, edit the main Services
file instead; dead easy.
HTH
J.Pietschmann
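The two-stage lookup described for the FOP parser above can be sketched in Python. The class names and the Foo namespace URI here are illustrative, not FOP's actual API; only "URI:bar" is taken from the post itself.

```python
# Stage 1: namespace URI -> factory object (initialized at startup,
# e.g. from a Services file). Stage 2: local element name -> object.
class FooFactory:
    def make_object(self, local_name):
        return ("foo", local_name)

class BarFactory:
    def make_object(self, local_name):
        return ("bar", local_name)

factories = {
    "http://foo.example.org/ns": FooFactory(),  # illustrative URI
    "URI:bar": BarFactory(),
}

def create(namespace_uri, local_name):
    factory = factories[namespace_uri]      # first lookup: by namespace
    return factory.make_object(local_name)  # second lookup: by local name

print(create("URI:bar", "part"))  # ('bar', 'part')
```

Adding a new "extension" then only means registering one more entry in the factory table, which mirrors the Services-file mechanism described above.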
Source: http://mail-archives.apache.org/mod_mbox/ant-dev/200305.mbox/%3C3EB3DF74.1080409@yahoo.de%3E
Graphical user interfaces (GUIs) changed the way a program communicates with the user. They made applications interactive. Today, interactivity is the norm, and mobiles are no exception. PyS60 makes creating GUI-based applications easy. It can be thought of as one of the RAD environments for the Symbian OS.
Types of Controls
PyS60 provides controls or widgets (including dialogs) in two forms: functions and Python types. Controls or widgets in the former category are mostly methods, whereas those in the latter category are Python objects implemented in C. Here are the details.
Functions: Many of the dialogs are implemented as functions. In PyS60, dialogs take precedence over the other controls, such as text boxes, list-boxes, etc. This means if a control and a dialog both need to be shown, then the dialog will be shown on top of the control, thus hiding the control. The examples of dialogs implemented as functions are note, query, multi_query etc.
Python types: The controls, such as text boxes, are implemented in C and directly accessed in PyS60. Their precedence is lower than the UI implemented as Functions. Text, Listbox, and Canvas are examples of Python types. These controls are displayed the instant they are set as part of the application’s body. In other words, these controls are not displayed until they have been registered as part of the application.
One of the controls is a dialog that has been implemented as a Python type: the Form control. That completes our discussion of the types of UI controls. Let's move on to the next section.
{mospagebreak title=UI Controls: Query and Notes}
Query and Notes controls are the most commonly used of all the controls. They are used to take input and display messages. Here are the details:
Query is a dialog type. It is used to gather input from the user. It presents a single line text box with a label to the user. Since query is a dialog, it is also implemented as a function. There are three parameters to the function. They are label, type, and initial value. The first two are mandatory parameters, whereas the third is optional. Here are the details:
The label is the question or prompt that is shown to the user when the dialog box is displayed. The value for this argument is a Unicode string. For example, to show “Enter name” as the prompt, one will use the following as the value of the argument:
u"Enter name"
Type can be compared with the standard input dialog box that is common on the PC. Since it is a dialog, it can be of different dialog types. The type is decided by the type parameter of the query function. The different types are:
- Text – is a simple text. The text is of Unicode type.
- Code – like text, but the input is masked as it is typed (useful for passwords).
- Number – in this case, the input box will accept only numbers and not decimals.
- Date – the input is just a date.
- Time – to take time as input, the type needs to be set to time.
- Float – to accept only decimals, this type can be used.
All of these are string values. For example, to set the type to number, the value passed will be:
‘number’
The initial value is an optional parameter. It sets the initial value shown to the user. However, for the type float, setting this value does not have any effect. For text fields (i.e. the type having the value of either ‘text’ or ‘code’), the value for this argument is Unicode. If the type is ‘number,’ the value that can be passed is a numeric value only. If the type is date, then the initial value will be seconds since the epoch, rounded to the nearest local midnight. For example, to set the initial value to 3, the value passed will be:
3
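For the ‘date’ type, a seconds-since-the-epoch value for the most recent local midnight can be computed with the standard time module. This is a sketch, not part of the appuifw API itself:

```python
import time

# Seconds since the epoch at today's local midnight; a plausible
# initial value for a 'date' query as described above.
now = time.time()
lt = time.localtime(now)
midnight = time.mktime((lt.tm_year, lt.tm_mon, lt.tm_mday,
                        0, 0, 0, lt.tm_wday, lt.tm_yday, lt.tm_isdst))
print(midnight)
```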
The return value of the query is the value provided by the user and the type of the input box. Therefore, to display a query with “Enter any number between 1 and 9” as a message, with number as the only acceptable data type and 0 as the initial value, the statement will be:
guess=query(u"Enter any number between 1 and 9", 'number', 0)
This is the end of the second section. In the next section, we will discuss the note control.
{mospagebreak title=The Note Control}
Note is another control implemented as a dialog. It is used to display messages to the user. This is, again, one of the most common controls in the toolkit of PyS60. It can be used to display different messages. These messages can be anything from simple messages, such as information about the size of the current file or critical messages, such as low battery. The note function takes three arguments. They are text, type, and global. Of these, only text is a mandatory argument. The other two are optional. Here are the details.
Text is the message to be shown. It is a Unicode string. For example, to show a message stating that the memory is full, the text argument will be in the following format:
u"Memory is full"
The u at the start indicates that it is a Unicode text.
Now we’ll discuss type. As stated earlier, the note function can be used to display different kinds of messages. The kind of message is determined by the value passed for ‘type’ parameter. This parameter can accept the following string values:
- Info – is the default value, if no value is passed. When the type is set to info, the message shown will be simple. Hence, the icon will have ‘i’ as a symbol.
- Error – to show error messages, one can use this string as the value. If the type is set to ‘error,’ the icon will have ‘e’ as the symbol.
- Conf – set the type to ‘conf’ when the message to be shown is related to configuration. An example of configuration is selecting and setting the fonts. Once a font has been set, a message whose type is conf can be shown to the user informing him or her of the name of the new font.
The global argument decides whether the note displayed is global or not. A global note is a note that will be displayed even if the application calling the note function is not in the foreground. The global argument takes an integer value. Any value other than zero is used to make a note or the displayed message global. The default value for this argument is zero. For example, to show a non-global message to the user, the value passed to the global argument will be:
0
For example, to display a global error message stating that the memory is full, the statement will be:
note(u"Memory is full", 'error', 1)
That brings us to the end of the third section. The next section will be about building an application using what we’ve discussed in this article.
{mospagebreak title=PyS60 in Real World}
Let’s see how these controls can be used within an application. The application that’s going to be developed is a simple guessing game. It will perform the following tasks:
- Choose a random number.
- Ask the user to enter his or her guess.
- Compare and show the result.
- Ask if he or she will like to continue.
So let’s start. First the imports:
from appuifw import *
As you already know, the above statement imports all the classes within the appuifw module. Next, the application has to choose a random number. However, since this will be a repeating process until the user chooses to stop, choosing the random number will be within a loop. The loop will terminate only when the user says he or she would like to stop. The code is as follows:
from appuifw import *
from random import randint  # random() returns a float; randint fits a 1-9 guess

continue_guess=True
while continue_guess:
    guess_no=randint(1, 9)
The loop will continue until continue_guess becomes false. Next, the application needs to accept the user’s guess. To accept the value, the query function will be used. The message will be “Enter your guess,” the type will be number (since decimals will not be allowed), and no initial value will be displayed. The value will be saved in user_guess variable. Here is the code:
from appuifw import *
from random import randint

continue_guess=True
while continue_guess:
    guess_no=randint(1, 9)
    user_guess=query(u"Enter your guess", 'number')
Next comes the comparison and the display of the result. First, user_guess will be checked for validity (i.e. if the user clicked cancel, it would be None). If it is not None, it will be compared with guess_no, and the appropriate message will be displayed. If the user pressed cancel, “You opted out” will be shown and continue_guess will be set to False. Here is the code (the original listing was lost, so this is reconstructed from the description above):

    if user_guess is not None:
        if user_guess==guess_no:
            note(u"You guessed it!")
        else:
            note(u"Wrong guess")
    else:
        note(u"You opted out")
        continue_guess=False
Next comes the code that asks the user whether or not to continue. A query box is shown to the user with “Enter Y to continue or N to quit.” If Y is entered, the loop continues; otherwise the application exits.
    user_choice=query(u"Enter Y to continue or N to quit", 'text')
    if user_choice is None or user_choice==u'N':
        continue_guess=False
    else:
        continue_guess=True
That completes the application. In this discussion, the focus was on simple UI controls. The next part will discuss complex UI controls, such as forms. Till then…
Source: http://www.devshed.com/c/a/python/mobile-programming-in-python-using-pys60-ui-controls/1/
Executable jar won't execute (843804, Jul 31, 2008 7:19 PM)
So I wrote a board game program as an application, and I wanted to turn it into a standalone executable. To do this, I used the "Create Jar" function of my IDE (Jcreator), since I'm not completely sure how to add java to the classpath (a little help on that would be appreciated too. I'm running Windows Vista), but it seems to work fine. The first time I ran it (by double clicking), the jar executed, but it didn't run properly because my code referenced the location of the images directly. However, it still opened up the window of the game. Then I went back to my code to change all the image references to include "getResource". Then when I tried running it again, it still didn't run, but opened up the window. Then I figured out that this was because my code referenced some text files too, and I fixed this. However, this time, when I try making a jar and executing it, nothing happens! No window opens up at all. I went back and changed the code back to before I modified it for the text files, but the resulting jar still won't execute. I also tested the jar function on a different program of mine, and it works perfectly. Does anyone know what could be the problem?
This content has been marked as final. Show 7 replies
1. Re: Executable jar won't execute (843804, Aug 1, 2008 8:59 AM, in response to 843804)
Run your application through the console using 'java -jar yourApp.jar'.
Reading the output, it will be easier to detect the problem. If necessary, post it here later.
2. Re: Executable jar won't execute (843804, Aug 1, 2008 5:21 PM, in response to 843804)
Thanks so much! Now I have an error:
Exception in thread "main" java.lang.IllegalArgumentException: URI is not hierarchical
I used this: File file = new File((getClass().getResource("blahblah.txt")).toURI()); in order to grab the text file from with the jar. How can I fix this?
3. Re: Executable jar won't execute (843804, Aug 1, 2008 7:54 PM, in response to 843804)
You have to use getResourceAsStream() to read the txt file inside your .jar application.
Here follows an example:
If you have a test.txt file inside your .jar application, it will read the text and show it in the console
public class Test {
    public static void trying() {
        InputStream is = null;
        BufferedReader br = null;
        String line;
        try {
            is = Test.class.getResourceAsStream("/test.txt");
            br = new BufferedReader(new InputStreamReader(is));
            line = br.readLine();
            System.out.println(line);
        } catch (IOException e) {}
    }

    public static void main(String[] args) {
        trying();
    }
}
4. Re: Executable jar won't execute (843804, Aug 2, 2008 10:47 AM, in response to 843804)
That worked, thanks! Unfortunately, that doesn't seem to be the end of my problems... Apparently, I didn't fix the image references properly. I used:
Image image = (new ImageIcon(getClass().getResource("images/image.gif"))).getImage();
However, now I get a NullPointerException. The program runs fine when it's compiled and run normally though. How can I correct this?
5. Re: Executable jar won't execute (843804, Aug 2, 2008 11:04 AM, in response to 843804)
Wait a second... I used this same approach on one of my other programs, and it works fine. The only difference is that in the previous program, I put the gifs in the same directory as the class files, rather than in an "image" directory.
6. Re: Executable jar won't execute (843804, Aug 2, 2008 11:23 AM, in response to 843804)
Okay, nvm, I figured it out. Apparently, when you want to make a jar, everything becomes case sensitive. Thank you so much, ThomYork!!!
7. Re: Executable jar won't execute (843804, Jul 25, 2010 10:27 AM, in response to 843804)
I had the same problem. I renamed my Icons folder to icons halfway through, so some had the reference /Icons/example.jpg and others /icons/example2.jpg. When I ran it from Netbeans it worked correctly, but the .jar did not work. After changing all the paths to /icons/ it worked perfectly.
Thank you so much for this tip.
Source: https://community.oracle.com/thread/1313898?tstart=44
[...] today’s Programming Praxis, our task is to calculate the total distance traveled by Santa based on data [...]
My Haskell solution (see for a version with comments):
Hey guys,
I’m new to Ruby so I did my best to throw this together in about 30 min…
My solution
Any help and comments are welcome!
With the file data.js saved in the same directory, having deleted the ‘var locations = ‘ bit at the beginning:
that’s 320,627 km to the metric-inclined.
A few days late, but here's my Python
version. I went with the arctangent formula given by Wikipedia, which it calls
"a more complicated formula that is accurate for all distances." I also followed
Jebb's lead of saving "data.js" with the `var locations =` portion
removed. This can be run (at least on my system) with
`./santa.py data.js`.
Actually I screwed up my first solution, this is correct I believe:
Another Python version.
Download the data.js and save it to santaflightpath.py. Edit the first line to remove "var ", so it starts with "locations =".
Remove the ";" from the last line. This creates a Python-compatible statement defining 'locations' to be a list of dictionaries. Importing 'santaflightpath' executes the code in the file, so there are clearly security implications.
pairwise is from the recipes in the itertools documentation.
Of interest, the flight plan is 28 hours long; stays within 600 miles of the north pole for the first 4 hours; doesn’t appear to visit Antarctica; and doesn’t return to the starting point.
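The pairwise helper referred to above follows the well-known itertools recipe; a minimal version (in Python 3 spelling, whereas the listing below is Python 2):

```python
from itertools import tee

def pairwise(iterable):
    # s -> (s0, s1), (s1, s2), (s2, s3), ...
    a, b = tee(iterable)
    next(b, None)
    return zip(a, b)

print(list(pairwise([1, 2, 3, 4])))  # [(1, 2), (2, 3), (3, 4)]
```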
from santaflightpath import locations
from math import asin, sin, cos, radians, sqrt
from itertools import starmap
from utils import pairwise
earth_radius = 3958.76  # average, in miles

def distance(p1, p2):
    lat1, lng1 = map(radians, p1)
    lat2, lng2 = map(radians, p2)
    dlat = lat1 - lat2
    dlng = lng1 - lng2
    ang = 2*asin(sqrt(sin(dlat/2)**2 + cos(lat1)*cos(lat2)*sin(dlng/2)**2))
    return ang * earth_radius

locs = ((float(p['lat']), float(p['lng'])) for p in locations)
print sum(starmap(distance, pairwise(locs)))
With a little delay, my commented Python version (looks like the one above)
Source: http://programmingpraxis.com/2010/12/24/tracking-santa/
What To Do After You Fire a Bad Sysadmin Or Developer."
Here be Dragons (Score:5, Informative)
The answer has been widely discussed here:
Re:Here be Dragons (Score:5, Insightful)
The actions necessary depend on what you mean by "underperforming". If that person didn't do much more than sitting in a corner playing games, I would say that there's not much to do, but if it was a person taking shortcuts, you need to figure out all traces of that person and remove them one by one. And you can't be sure if that was a skilled person.
This also highlights the need to segment the network: one segment for sales, another for HR, a third for management, and then one or more for operations, so that if one segment is compromised you don't run the risk of having everything exposed. Of course, this goes against the process of using virtualized servers, since you can't do physical segmentation on a virtual machine.
Re: (Score:2)
Re:Here be Dragons (Score:4, Funny)
LOL. That would defeat the purpose of having a scapegoat in the first place!
Re:Here be Dragons (Score:5, Interesting)
Wow, it's like this t-shirt [thinkgeek.com] in real life. I have also replaced somebody with a very small shell-script, I felt like I should have gotten an award or something.
Re: (Score:3)
Ah, but you can. Modern hypervisors (and this includes lightweight Linux paravirtualization containers such as OpenVZ) are able to provide a virtual network for the nodes running under it. Often they have fairly limited capabilities, but anything worthy of the name will support basic VLANs. That's to meet exactly your segmentation requirement.
Re: (Score:2)
Which means that you run it on one single physical server, and if you have an admin who's going bad and has access to that server, you are really in the crapper.
Same thing if the hosting server itself gets compromised.
Re:Here be Dragons (Score:5, Insightful)
You forgot about hypervisor exploits.
If you must use hardware separation, you ***MUST*** ***USE*** ***HARDWARE*** ***SEPARATION***.
Re:Here be Dragons (Score:4, Insightful)
Mod up.
"If it can be accessed, it is vulnerable." -Geezer's First Law of System Security.
Re:Here be Dragons (Score:4, Informative)
Just look at this report: Cross-VM Side Channels and Their Use to Extract Private Keys [unc.edu]
Pretty clear that virtualized servers aren't as safe as physically separated servers.
Re: (Score:2)
Re:Here be Dragons (Score:4, Insightful)
To be fair, he was complaining about physically segmenting the virtual machines that exist on a single physical machine. Of course, that's fundamentally impossible, since these virtual machines share the same computing resource. His complaint may be a ridiculous complaint, but nevertheless.
Not so ridiculous, I think. There was an article here on Slashdot a couple of days ago about the possibility of spying from one virtual machine on another running on the same virtual host by observing the cache-line eviction pattern. All VMs share the same cache, and by observing which cache lines get thrown out (presumably due to usage by the other VMs), it is possible to infer what goes on in those other VMs.
Re: (Score:2, Insightful)
Ah, once again HR proves itself incapable of hiring a good system administrator / employee and instead either went with the cheapest person available or one with lots of certifications and little experience. I'd fire the HR department as well after showing the bad employee to the door.
Re: (Score:2)
Run an audit.
That is what I do. I run an audit on all access methods and devices and change the Pwd while I am at it.
Re:Here be Dragons (Score:4, Insightful)
I would also advise informing your legal team of the decision. You could also hire a security firm (one with a good reputation) to scan your network for security flaws. If you take enough measures to protect your customers' data, then even if he does have a backdoor, it won't come back to haunt you. Additionally, instead of a single admin, consider having an admin team that watches each other's actions; that way you are less likely to have a single admin ruin everything for you.
Re:Here be Dragons (Score:5, Insightful)
"Why are you requesting three roles here? I thought you just needed a computer guy".
"Having a team adds flexibility and redundancy, for example, if one gets hit by a bus or goes on vacation, the others can cover."
"How likely is he/she to be hit by a bus? And we'll just not let them go on vacation if that's what it takes."
"I doubt we'll be able to hire someone qualified if we don't allow them vacation time."
"Oh, we'll give them vacation time, we just won't let them take it. Or, if we have to, we'll make them carry their laptop while they're away."
"Then that's not vacation, is it?"
"Quit being such a whiner. Oh, and the salary you asked for? Find someone for 60% of that. Revenues are down."
"Didn't the CEO just get a huge bonus?"
"What does that have to do with anything?"
TL;DR: Companies don't make hiring decisions based on what makes sense, they make them based on how little they can spend.
Re: (Score:3)
More true to life
:(
Re:Here be Dragons (Score:5, Insightful)
Easiest answer: Run an audit. That is what I do. I run an audit on all access methods and devices and change the Pwd while I am at it.
The easiest answer, pray.
A bad (as in lazy, surly, abusive) sysadmin who left traps will leave them in places not detectable by an audit.
I have yet to go to a business as a sysadmin where they didn't use default passwords (P@ss1234 - now, how many businesses use that gem?) on just about every device or local admin account. The smartest businesses had a different default password for each type of device/account, but you end up with password reuse across a pattern of devices and accounts. The thing is, almost no business will go around and change this on every single device/server when someone who knows the password leaves.
I left my last position on less than amicable terms (basically they were setting me up to get sacked by giving me impossible tasks, so I chose to leave). The CEO had no clue, but my boss understood I knew the public IP addresses, domain admin/root passwords and router passwords of our 5 biggest clients off by heart. I could see the fear in his eyes when I left (it was senior management's decision to sack me; they wanted to downsize without having to pay anyone out). Of course I'd never actually do anything harmful to that business (they were doing that well enough on their own), but anyone who employs a sysadmin knows that you need to hire trustworthy people and treat them well or it will turn around to bite you in the arse.
Hiring good people and not pissing them off is pretty much the only defence.
Re:Here be Dragons (Score:4, Informative)
"A bad (as in lazy, surly, abusive) sysadmin who left traps will leave them in places not detectable by an audit."
The point of an audit is not to uncover and clean all the traps but to gain legal security.
Re: (Score:2)
Calling someone a "hater" only means you can not rationally rebut their argument.
Commenting on your signature: Calling someone a "hater" means that you detest their attitude, you believe that it is a sign of irrational hatred, and that their arguments are not worth rebutting.
Re: (Score:3)
At the last job I quit (an MSP), I gave my boss a bound book with all passwords. It was a fire list: these are the things you need to change when I leave to adhere to best security practices. I had him sign for receipt. I also gave the same book to the client who was endeared to me and was not fond of how I'd been treated by my boss.
:D Rumor has it they spent roughly two months doing basic things like trying to figure out how to get access to systems to change the root and/or service passwords...
Here ARE Dragons (Score:3, Insightful)
Backdoors from the current IT person aren't important?
No easy answers (Score:2, Informative)
Slowly (Score:3)
Even more slowly (Score:5, Funny)
In fact, your entire corporate structure is at risk. How do you know he didn't engineer a brain virus that allows him to use the company's board members as flesh puppets?
He might have even used telepathy to cause major investment banks to sell him all of their shares of the company for pennies on the dollar. He might already own the company. It's best to double check.
In fact, he might be standing behind you right now, brainwashing you with lasers.
Re:Even more slowly (Score:5, Funny)
In fact, he might be standing behind you right now, brainwashing you with lasers.
Impossible. My hat is made of the finest tin and aluminum foil on earth, and is wrapped so tightly that my very hair was crushed. No one could brain wash me (with terrestrial technology at least).
Re:Even more slowly (Score:5, Funny)
Re:Even more slowly (Score:5, Funny)
Have you checked for electrodes in the inside recently? The new tinfoil manufactured in Taiwan comes with built in RFID and WIFI.
Re:Even more slowly (Score:4, Insightful)
Re: (Score:2)
Or think of a small hardware device attached somewhere to the network (can be hidden anywhere where you can get LAN and power) which only listens (so it cannot be detected by the stuff it sends or by taking up an IP) and sends interesting things over the mobile phone network. Probably the network will have lots of interesting unencrypted information (after all, it's internal and cable, so why have encryption overhead, right?)
can also put a box near a printer (Score:3)
You can also put a box near a printer and make it look like part of the printer, or even a fake network-to-USB printer box (that can be a mini PC); just say that on this printer there is some stuff that can only be done over USB.
Re: (Score:2)
Don't forget back doors, layoff scripts, and manual tasks that were never documented.
Not every security hole is as obvious as a modem sitting on a rack. But some are. I found one at the last place I worked. Literally a modem sitting on top of a server, in a corner of the server room. No one knew the purpose behind it. I notified the necessary parties (dep't heads), and then unplugged it mid-day Monday. I expected complaints fairly soon after. There were none. Som
Re:Slowly (Score:5, Insightful)
Re: (Score:3)
Modem lines are so yesterday; an access point put away somewhere, configured not to advertise its name, would be a great hole.
Don't forget that some printers can communicate over wireless connection too and they can be a great attack vector. Add to it that it's easy to set up a VPN tunnel. And if it's a tunnel over HTTPS it's not easy to detect - especially if the traffic is low.
So it will be a pain in the butt if you want to stay safe. Lock each client to receive IP address over DHCP depending on MAC addre
Blame them! (Score:5, Insightful)
Re:Blame them! (Score:5, Interesting)
It's been my experience that people are generally pretty good, some better than others, but I rarely run into an evil person.
companies, otoh,
...
Re:Blame them! (Score:4, Insightful)
Evil companies (Score:5, Insightful)
Companies are large organizations. Each person in the organization may conscientiously do their job with good intent but without seeing the bigger picture (not their job) and therefore without knowing the consequences of their actions. The people at the top who, in principle, see the bigger picture are often so far removed from the details of what is happening that they too do not know what the company is doing, except with respect to the shareholders and overall financial performance. So, the company runs on policy and no one knows what it is doing. The company can be uber-evil when everyone in it is as nice as can be.
The company is more/other than the sum of its parts.
Re: (Score:3)
Re: (Score:2)
There is no lack of research on how large groups of normally decent people can behave in a highly immoral fashion. Peer pressure and dominance hierarchies are powerful forces for coercion, not to mention more mundane explanations like greed.
Re: (Score:3)
Re: (Score:3)
"companies are made up of people, how can they be evil, if the people in them are not?"
A company is a complex system with emergent properties.
I'd say, anyway, that for true evil to arise there must be evil people somewhere in the organization. In order to be just underperforming or mildly evilesque, you just need your typical corporate organization and it will arise by itself out of goals misalignment and partial information.
Re: (Score:2)
Re: (Score:2)
More often than not, the evil a company perpetrates against employees and customers is directly proportional to the number of business school graduates who hold the reins of power. If they're Californians by schooling, that's an exponential curve.
Re: (Score:3)
idiot? (Score:5, Insightful)
Real mature there, guy... With an attitude like that, you'd better have a lot of backup plans in place. It sounds like yours is a shit place to work for.
Do us ALL a favor. Name your company. So we can avoid it.
Re:idiot? (Score:5, Insightful)
That was my immediate impression as well. When I hear/see the phrase "fire the idiot", my first thought is "was this guy the problem, or is it the workplace?"
Re: (Score:3)
I'm still trying to figure out how an "idiot" and "turkey" was retained for long enough to have any significant impact. Usually an "idiot" becomes pretty obvious as soon as he tries to do anything complicated enough to justify asking this question.
Re: (Score:2)
" I'm currently fixing problems caused by an idiot who for 2 years was the company's only developer"
And the developer was the idiot, you say?
The idiot is obviously the manager that, not having development experience, hires a single developer while lacking the experience to discern a good hire from a bad one. And the boss on top of that manager that allowed an inexperienced manager to deal with the IT thingie.
"Then they hired a new manager who'd previously worked in IT and he fired the guy within a few weeks"
See
Re: (Score:2)
Sounds like a pretty incompetent manager to me. He should have hired a second dev, let them both work together for some time until the second dev is familiar enough with what's there, and then get rid of the first dev.
First things first (Score:2)
Scan for intentional backdoors and accidental gaping, well-known flaws with a fine-tooth comb. They may not have seemed too bright on the job but even an underperformer has enough insight into operations to find a way to mess up your day.
Perhaps pose it as a question to your better admins. "Knowing what you know, if you had to crack our system/application, how would you go about it?" Whatever their answer is, find a solution and implemen
Re:First thing's first (Score:4, Informative)
Nope. When the bad guys have got root on your PC the only way to restore confidence in it is to rebuild it from a trusted image. Likewise if your network admin has gone untrusted on your infrastructure you burn it down and build it new again. Nuke it from orbit. It's the only way to be sure.
Frankly that's not near enough to stop a real determined jerk with skills, but thankfully we are rare. Don't hire us in the first place if you can avoid it.
After?! (Score:3)
Re: (Score:2)
It's never that simple. Backdoors are so easy to install, and I've personally seen automated scripts hidden in standard features that created a backdoor several weeks post-firing. That way the changed password was worthless, and even the search for backdoors in the days following the firing was futile. So changing passwords and a thorough search for backdoors was a waste of time.
Bottom line: You can't be sure when it comes to admins. Either part on amicable terms or reinstall everything - or chance it...
Well, with a boss like you... (Score:3, Insightful)
...it's hard to imagine the relationship went sour,
"...after you fire the idiot, such as changing passwords, but that's just one part of the To Do list. More important, in the long run, is the cleanup job that needs to be done after you fire the turkey,.. ").
Re: (Score:3)
Good points. May I add, had they not spent all that money on personality tests and whatnot, but instead compensated the employees for, I dunno, more than they could probably earn elsewhere and just generally showed them respect, maybe things would start working out in the company's favor?
Slavery is so over. We all need to work and pay our bills due. We're all like little companies trying to do the capitalist-thang, by selling our own time, effort, and skills. This is how we compete, to earn a living.
Re: (Score:2)
Google, is that you?
It may be too late (Score:5, Insightful)
If your department was properly managed... (Score:4, Insightful)
under-performing or metrics may them seem to be (Score:5, Insightful)
Under-performing, or do the metrics just make them seem to be under-performing??
Made to do the work of 2-3 people??
Pulling 80 hour weeks that lead to errors and under-performing over time.
Blog with tips (Score:5, Funny)
My first reaction (before RTFA) was that the problem might not have been the employee, but the person doing the name calling. However, the link is to a blog that lists a generic list of precautions to take. Whoever wrote that blog still has some growing up to do, but I'll give him/her the benefit of doubt and assume they were going for humor.
In any case, I notice that HP paid for the content. Now we know why they are in such trouble.
Stop calling them turkeys for starters (Score:5, Insightful)
Check your wallet!!! (Score:4, Interesting)
Re: (Score:2)
Considering some IT budgets that guy is a genius!
Fire the Abusive PHB (Score:5, Insightful)
The submitter comes off as an angry, abusive tool. Maybe he should fire himself for having a hand in hiring an "idiotic turkey" to begin with.
It's likely that the developer wasn't all that bad, but stopped giving a shit after being berated by an abusive asshole for umpteenth time.
Re: (Score:2)
Re:Fire the Abusive PHB (Score:5, Insightful)
You are being a tad too gentle on management in this case. Anyone who uses that sort of language on a public website is showing a lack of professionalism that goes beyond incompetence. Professionalism in the workplace exists for a bunch of reasons, one is to maintain cordial relations between people who work together so that you don't end up with a tit-for-tat culture in the workplace.
Maybe I'm a bit biased, but .... (Score:5, Insightful)
I tend to side with the critics here, asking if maybe management (including possibly the person posting the original question) are really the ones to blame?
I've worked in I.T. for something like 25 years now, for companies big and small, though the only times I've held a title of "manager", I was really only tasked with managing outside consultants or developers. I've always preferred being relatively "hands on" with the problem solving and system/network administration tasks at-hand, vs. spending my day in meetings and typing up Excel spreadsheets trying to explain what the "team" was doing.
Bottom line? Sure, there are a LOT of people out there trying to get hired in I.T. as support people or sysadmins who REALLY don't know what they're doing. If more companies would let the people actually DOING those jobs interview these people, they'd be able to weed out far more of the bad seeds before they even started. What I see, time and time again, is some I.T. manager who thinks he's simply "too busy" to interview some potentially really good people who apply for positions, and then he gets in a panic when it comes down to the wire and he absolutely can't go without employing another person any longer. He winds up asking H.R. to find him someone good, and of course they don't know squat about I.T. so they pick through the resume submissions based on "standard issue" criteria like the college degree they claim to have, or the number of certifications they list. If he does "second interviews" with these pre-selected people, he may just be trying to pick the best of a bad bunch at that point.
But another problem is with how the I.T. workers are managed. You can have some really top-notch people working for you, yet they're made out to be clueless, inefficient screw-ups because they're actually trying to use their brains to decide which tasks on their plates are REALLY most important to the company. Meanwhile, some upper management character is throwing fits about relatively inconsequential items his ego demands be put "front and center". If you're busy working a difficult problem affecting a whole division of the company and by doing so, you didn't get some new computer issued to somebody first thing in the morning
... guess what usually happens? It's that idiot in I.T. who caused the employee not to have that shiny new PC on their desk on time. Nobody's even aware of the work the I.T. guy was actually in the middle of doing.
And here's the kicker.... You can say all you like about this simply being a "lack of communications" issue. "If management was simply kept informed about what I.T. was doing, everyone would be better off." But so many computer problems are of a "need to fix this yesterday!" level of importance, your good I.T. rank and file employees are going to concentrate on getting that done -- not on getting sidetracked with emailing status updates to key people. Management needs to realize that a certain level of TRUST is required here. You have to say, "I don't really know what Joe Q. has been doing the last few days, but that's ok. I trust Joe Q. because when I make an effort to find out if anyone feels Joe helped them with their issues, I get loads of positive feedback that he did." Micro-managing I.T. is almost never wise....
Re:Maybe I'm a bit biased, but .... (Score:5, Insightful)
I enjoyed that rant. We tried to solve the problem of IT setting priorities by forcing all of the department heads to prioritize their top 3 items each week. As an example of what we were dealing with, our CFO took a month to put together his list and came back with 5 items on his "top 3" list of projects. After we started to work on his priorities he came back with a new top priority to add to the list. So we put it ahead of #1 on the list and "Project Zero" was born.
He wasn't alone: the president of the company had a meeting with us about a huge initiative he wanted to undertake immediately. Starting the next week he put other items that were more pressing (but not important) at the top of his list. He did this every week. Every week we warned him that we were not going to work on his other project because he was prioritizing these other things this week. Every week he said he understood and signed off on our statement of work. A year later he got pressure from the board of directors and threw us right under the bus. Called me into a huge meeting to yell at us for not getting his project done "in over a year". I calmly produced 60 pages of signed off work orders from him, proving that at every turn he decided to have us work on something else and he bore the full and sole responsibility for the project's delay. You know what? Nobody cared.... I believe the direct quote was "I'm tired of excuses. I expect results, not excuses."
Lesson learned. Don't work for crazy people.
Re:Maybe I'm a bit biased, but .... (Score:4, Insightful)
Ain't it the truth? On the other hand, there is a lot of knowledge sharing to be gained from respectful listening. If you have weekly operations or status meetings, make sure that someone from IT is at the table. Everywhere I've been where that was the practice has been a pleasant and effective workplace. When systems are running well, they're essentially invisible, and this is a highly desirable state of affairs. It's quite the opposite of neglect, but if there isn't active communication about what's going on, how do you ever expect to tell them apart? (Until it's too late, of course, and the chronically-underfunded, under-appreciated infrastructure finally falls down hard.)
Re:Maybe I'm a bit biased, but .... (Score:4, Insightful)
(posted as AC cause I moderated)
I've worked on all sides of this coin, as developer, sr. dev, architect, manager and even latent founder and lots of other short temporary roles. I've worked at everything from a 1 man shop, to fortune 100. I've worked in government, restaurant, warehouse, sales, wholesale, entertainment, and basically everything but medical (I have a rule against killing people with code, even if it's not mine).
And after years of experience, I must say one of my first bosses nailed it with his funny anecdotes towards employees...
"There's two kinds of people in this world, lug nuts and ball bearings. Both are good employees, but they have to be managed completely differently."
Lugnuts need to have project plans, statuses and meetings. They need organization, management and regular motivation.
Ballbearings just glide along. You give them a task and they work it, and keep working. Some will go off in wrong directions, but you can be sure they are chugging away at the task. They don't deal with interruption much, they don't like meetings, and they usually prefer to finish things to perfection.
Each type has their advantages and disadvantages. Lugnuts are typically seen as dependable because they are constantly managed. Ballbearings are seen as solvers and self-motivated. But both need to be reset every now and then onto the correct path.
So yeah, complete generalization here - but it does help to understand motivations and managing. And you see a lot of ballbearings in IT. Enjoy...
lolwut (Score:2)
Oh, and "under-performing" instead of "incompetent"? (Which is the word the article used.)
Trying to figure out if submitter is PMSing or just bad at paraphrasing.
An abusive employer? (Score:5, Insightful)
By using terms such as "culprit", "idiot", and "turkey" you indicate that you are a big part of the problem.
Only gross mismanagement would let you get into such a mess in the first place.
It sounds like he is well rid of you.
Re:An abusive employer? (Score:4, Insightful)
Maybe a case of projecting my experience onto the submitter, but it came off to me like he's the poor bastard who has to clean up the mess, rather than the boss. Having been in that boat myself (and still, to this day, occasionally find slushy little coiled piles of things like "converting" AM/PM to 24h format using 13 chained "if/then/else" statements) I'm willing to give a lot of leeway for "frustration venting."
Re:An abusive employer? (Score:4, Insightful)
By using terms such as "culprit", "idiot", and "turkey" you indicate that you are a big part of the problem.
Only gross mismanagement would let you get into such a mess in the first place.
It sounds like he is well rid of you.
Parent post should be modded up +5 insightful.
I agree this poster does sound like a very poor manager or the company he works for has management issues.
What training programs do you have in place ?
Was this person doing a poor job because of company work practices ?
Was he faking that he knew what he was doing because no one showed him how to do it properly ?
If these above questions could be answered then I think you would find that you would not need to be asking what to do after your Sysadmin / developer went off and found greener pastures..
Simple enough (Score:2)
I fired a sysadmin (Score:2, Insightful)
Prepare, and execute quickly.
After too many actual shouting conflicts with others, and numerous lies ("even I will have trouble upgrading X11") he had to go. First I arranged for our previous guy, who had gone off to be a consultant while finishing his PhD, to return (at his new rate+housing) for continuity. Then I spent 3 hours with the firee, discussing in detail why he had screwed up in so many ways. I gave him the option of quitting or being fired, he chose the latter for unemployment benefits.
We wen
You can bet on it (Score:3)
you'll still be cleaning up the problems six months later.
The real issue is not the low productivity techie. It's that there's no manager with enough knowledge and skills to
... manage techies.
Techies are seen somehow as "lone wolves" or "wizards" that "just do the (right) things".
My solution?
Hire a manager with the real knowledge (an former techie) and let him both manage and work with the younger techies.
Re: (Score:2)
I want to check the boxes that say "This guy can handle the freaky social pariahs in IT Tech, because he was one, but he can also put great covers on TPS reports first time around."
The first rule (Score:5, Insightful)
I have been in IT for nearly 25 years now and have learned a few things along the way. The first rule is that most employees referring to others as idiots, turkeys, incompetent etc need to look first in their own seat.
It is generally a reaction I expect from a dev or sysadmin covering his own faults by passing blame to others. I find most people just want to do what they were hired to do and do it well, and given the proper chance and assistance will do just that.
In the last 5 - 10 years though it is generally a result of understaffing and insane deadlines causing less than desired results.
Re:The first rule (Score:4, Insightful)
I agree. There's nothing an incompetent manager likes more than a scapegoat.
Re: (Score:2)
There is truth in what you say; see also The Unspoken Truth About Managing Geeks [computerworld.com] happene
Re: (Score:2)
Re: (Score:2)
I have been in IT for nearly 25 years now and have learned a few things along the way. The first rule is that most employees referring to others as idiots, turkeys, incompetent etc need to look first in their own seat.
Damn bloody right. This article is describing a dysfunctional company to me, as opposed to merely a dysfunctional employee.
Turkey farm (Score:3)
I'd start by sacking the turkey that hired the turkey in the first place, and/or the turkey whose piss poor management skills allowed the situation to get so far out of control that someone needed to be sacked.
Re: (Score:2)
Unfortunately, your measure would leave the company stalled - it's impossible to replace all the management at once...
This is why you fail (Score:4, Insightful)
when the culprit is shown the door.
But the person who hired him still works at the firm... that's the real "culprit". (Score:2)
If you actually had a process with holes large enough for that to get through, I have no sympathy. You should be automatically building the production artefacts from the source control. There should be no intermediate process where someone can throw something in.
Ego should be isolated from production.
Our developers can only check out/check in and we keep deployment to two well trusted specialist guys who have been with us for 10 years. We also audit every damn line of code that goes in the repo.
Even thou
Incompetent or evil? (Score:2)
If they were evil: did some bad things, sabotaged the operations, stole money/data/reputation etc. then your security people should be able to detect the weaknesses ('cos if they were good, yet evil, they'd still be working; undetected). If not, then it
Article is Utter Crap (Score:3)
When sys admins put back doors in for themselves it is usually to get around ridiculous amounts of bureaucracy that stop them from getting anything done. A competent sys admin also does not 'add patches as they become available' willy nilly, because those patches need to be tested; you need to understand what is in them and you need to make a decision as to whether you are affected by it and whether the disruption is warranted. It also seems to be about security companies selling their wares and installing 'data loss prevention systems', whatever the hells those are. Would I trust an outside set of consultants to come in and do that? No I wouldn't.
Basically, if you're at a point where you are doing what this article says then your own company is incompetent and shooting blanks in the dark.
Re:maybe he isn't such an idiot? (Score:5, Funny)
Quickly leave the island before the dinosaurs escape.
Re: (Score:2)
That seems a little far-fetched. First, there's the delay. Most people aren't cold-blooded and thick-skinned enough to wait that long for their revenge, they'll go for it while the incident's fresh and they're good and mad. Second, above the doorframe? I can see thumb drives going undetected stashed in the sub-floor, but above a doorframe? You mean in 3 years nobody on the cleaning staff wiped off the top of the doorframe and knocked the drive loose? Nobody looked up on their way out and noticed it? 3 years
Source: http://it.slashdot.org/story/12/11/09/0149252/what-to-do-after-you-fire-a-bad-sysadmin-or-developer
The following program does the job of converting long to String.
public class LongToString {
    public static void main(String args[]) {
        long l1 = 10;
        String str = String.valueOf(l1);
        System.out.println("long l1 in string form is " + str);
    }
}
Output screenshot of long to String Conversion
The long l1 is passed as a parameter to the valueOf() method, which converts l1 to its string form. This conversion can be used anywhere in Java that a long value needs to be represented as a String.
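Besides String.valueOf(), the JDK offers a couple of other standard ways to do the same conversion; a small sketch (all method names here are standard java.lang API, the class name is just for illustration):

```java
public class LongToStringAlternatives {
    public static void main(String[] args) {
        long l1 = 10;

        // Long.toString() is equivalent to String.valueOf(long)
        String a = Long.toString(l1);

        // Concatenation with an empty string also converts, though less explicitly
        String b = "" + l1;

        // String.format() converts while allowing padding/formatting
        String c = String.format("%d", l1);

        System.out.println(a + " " + b + " " + c);
    }
}
```

All three produce the same result as String.valueOf(l1); String.valueOf() and Long.toString() are generally preferred for clarity.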
Source: https://way2java.com/string-and-string-buffer/long-to-string-conversion-example-java/
On Thu, Feb 08, 2018 at 11:08:18AM +0000, Suzuki K Poulose wrote:
> On 08/02/18 11:00, Christoffer Dall wrote:
> > On Tue, Jan 09, 2018 at 07:04:00PM +0000, Suzuki K Poulose wrote:
> > > Add a helper to convert ID_AA64MMFR0_EL1:PARange to they physical
> >
> > *the*
> >
> > > size shift. Limit the size to the maximum supported by the kernel.
> >
> > Is this just a cleanup or are we actually going to need this feature in
> > the subsequent patches? That would be nice to motivate in the commit
> > letter.
>
> It is a cleanup, plus we are going to move the user of the code around from
> one place to the other. So this makes it a bit easier and cleaner.
>

On its own I'm not sure it really is a cleanup, so it's good to mention
that this is to make some operation easier later on in the commit
letter.

> > > Cc: Mark Rutland <mark.rutland@arm.com>
> > > Cc: Catalin Marinas <catalin.marinas@arm.com>
> > > Cc: Will Deacon <will.deacon@arm.com>
> > > Cc: Marc Zyngier <marc.zyngier@arm.com>
> > > Signed-off-by: Suzuki K Poulose <suzuki.poulose@arm.com>
> > > ---
> > >  arch/arm64/include/asm/cpufeature.h | 16 ++++++++++++++++
> > >  arch/arm64/kvm/hyp/s2-setup.c       | 28 +++++-----------------------
> > >  2 files changed, 21 insertions(+), 23 deletions(-)
> > >
> > > diff --git a/arch/arm64/include/asm/cpufeature.h b/arch/arm64/include/asm/cpufeature.h
> > > index ac67cfc2585a..0564e14616eb 100644
> > > --- a/arch/arm64/include/asm/cpufeature.h
> > > +++ b/arch/arm64/include/asm/cpufeature.h
> > > @@ -304,6 +304,22 @@ static inline u64 read_zcr_features(void)
> > >  	return zcr;
> > >  }
> > > +static inline u32 id_aa64mmfr0_parange_to_phys_shift(int parange)
> > > +{
> > > +	switch (parange) {
> > > +	case 0: return 32;
> > > +	case 1: return 36;
> > > +	case 2: return 40;
> > > +	case 3: return 42;
> > > +	case 4: return 44;
> > > +
> > > +	default:
> >
> > What is the case we want to cater for with making parange == 5 the
> > default for unrecognized values?
> >
> > (I have a feeling that default label comes from making the compiler
> > happy about potentially uninitialized values once upon a time before a
> > lot of refactoring happened here.)
>
> That is there to make sure we return 48 iff 52bit support (for that matter,
> if there is a new limit in the future) is not enabled.
>

duh, yeah, it's obvious when I look at it again now.

> > > +	case 5: return 48;
> > > +#ifdef CONFIG_ARM64_PA_BITS_52
> > > +	case 6: return 52;
> > > +#endif
> > > +	}
> > > +}
> > >  #endif /* __ASSEMBLY__ */

Thanks,
-Christoffer
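For readers outside the kernel tree, the mapping under review (the PARange field of ID_AA64MMFR0_EL1 to a physical address size in bits) can be illustrated standalone. The following is a hypothetical Java translation of the C helper quoted above, not kernel code; the real code gates the 52-bit case behind CONFIG_ARM64_PA_BITS_52 at compile time, which is modeled here as a boolean parameter:

```java
public class ParangeToPhysShift {
    // Mirrors id_aa64mmfr0_parange_to_phys_shift() from the patch:
    // ID_AA64MMFR0_EL1.PARange encodes the supported physical address range.
    static int parangeToPhysShift(int parange, boolean pa52Supported) {
        switch (parange) {
            case 0: return 32;
            case 1: return 36;
            case 2: return 40;
            case 3: return 42;
            case 4: return 44;
            case 6:
                if (pa52Supported) return 52;
                // fall through: cap at 48 when 52-bit PA support is not enabled
            default:
                return 48;  // also covers case 5 and any future/unknown encodings
        }
    }

    public static void main(String[] args) {
        System.out.println(parangeToPhysShift(0, false)); // 32
        System.out.println(parangeToPhysShift(6, false)); // capped at 48
        System.out.println(parangeToPhysShift(6, true));  // 52
    }
}
```

This also makes the point discussed in the mail concrete: the default label is what guarantees the result is capped at 48 bits whenever 52-bit support is not compiled in.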
Source: https://lkml.org/lkml/2018/2/8/165
- [SOLVED] two grids into window (of desktop sample)
- Adding existing panel to a new container like tabpanel
- bottom aligned tabs in region : 'north' of viewport...possible?
- disabling everything
- Dynamic GUI?
- passing params to asp.net
- First active page in paging toolbar is not right
- very simple form always fires failure callback
- JsonStore + fields compound
- ComboBox expand method
- Authentification Form
- TabPanel Multiple Rows
- Tools to work with Ext2.0
- datefield editor
- Filter in the grid is not working
- Authentification Form
- how to remove a button from toolbar?
- IE Grid width error
- decode submitted Date field
- how to use E:nodeValue(foo)?
- Ext.ajax how to send xml to a servlet?
- FireFox: Error on assigning listener to element
- Help with GridPanel and JsonReader
- how to make the init-value of NumberField be zero
- strange behavior of my application
- Setting value to textfield from JSON......
- Ext is not defined
- Generate MenuBar from XML file
- beforeDestroy() is fired too late?
- IE6 vs (IE7 & FF)
- Pagination problem
- Submit params as array?
- Center panel oevrlaps wet panel in nested layout
- Can't get form to submit from Panel
- How to USE to Ext JS
- Problems with TabPanel rendering...
- Detect HTML Element ID
- how to update tabpanel's title
- Ext 2 Slider as Editor in Grid
- Update Ext.Panel
- Execute a php script from a button
- Radio button and 'check' event
- How to display success / failure message on form submission
- [2.0][CLOSED DUP] Possible destruction bug in Component#beforeDestroy
- Table layout setting
- Enable/disable grid items button
- RowExpander Question: Left Justify expanded row
- Help with TabPanelItem
- Adding panels to a viewport on th fly
- Transfrom existent table to GridPanel
- Portal Sample problems...
- Paging question
- Drag and Drop from Tree to Grid
- listener as default value
- Too slow request
- Help with responses
- getChanges in editorgrid
- empty Grid
- Form + Grid
- removing class from a row
- ExtJS design with accessability in mind
- movenode,nodemove,beforemovenode,beforenodemove... none is firing!!!
- [SOLVED]serialize object
- [SOLVED] substring in a grid field.
- why do my tabs have a background color?
- Checkbox in a Form
- SubMenu disappears
- Is it possible to mix Grouping-grid and checkboxs?
- Lazy render CardLayout items
- viewport : when sidebar is collapsed, tables in center region don't expand
- An unusual way to use CardLayout - DataView doesn't seem to work right
- ComboBox: adapt width to data size?
- FireFox and IE display problems
- Help with Ext.Panel
- ComboBox - loadingText
- Capture Id On GridPanel
- Help with setting event for IE
- Change the arrow of ComboBox (via css)
- Date Picker Help
- How to get the fields or id of grid row when click on it?
- Using QuickTips on DIV element?
- Beginner question - Ext.query
- Textfield Validation Question
- Changing ext2 from ltr to rtl? (HELP)
- tabbed navigation best practices?
- Accordion change event ?
- can i force a grid to expand (vertically) to fill a region (center) ?
- Solved : Javascript Help
- [RESOLVED] Simple Radio Button
- Adding dynamic content to panels
- How to Submit a form and post back the content in a different reigon
- Ext 2.0 grid not displaying in IE
- Breadcrumb in a header?
- editor grid editor always visible
- example of a grid in a tab
- [Solved] TabPanel autoheight
- resetting to width: auto
- Insuring HTMLEDITOR is fully loaded before using it
- Icon doesn't appear
- help, load new data into gridpanel.
- Why i can not see the content on panel2
- How to install extension ?
- install Ext JS every folder for use ?
- Accordion Autoheight
- Combo box display issue -- IE6/IE7
- onmouseover on a textfield
- Tree problem
- ExtJS - Effect.Morph is not a constructor
- scripttagproxy & those extra parameters
- column layout add ext.get to items?
- source of Ext.lib.Ajax
- Grid width to 100%
- [SOLVED] A simple question about handlers
- [CLOSED] grid: event "select" instead of "rowclick"
- Using my image in confirm dialog
- [SOLVED]Help, look for a function, tree that can auto load child when click
- [SOLVED] Where to put selectFirstRow
- [SOLVED] Menu doesn't open because of delay.
- Array-Grid example,problem with IE6
- java Json Date
- plz help me in splitting a panel
- simple layout?
- Cell Event in Editor Grid?
- Aptana and Outline
- this.proxy has no properties (Grid) (JsonStore)
- tab not rendered correctly
- TreePanle-MousedDown,MouseUp and BodyScroll events
- How to submit the JSON data from the toolbar button.....
- Don't want to decode text/javascript response
- how to display field value in alert?
- How do i insert an image to a fieldset
- Ext.Ajax.request() just show modal in IE
- Insert in GroupingGrid and sorting
- Extending Ext.tree.TreeNode and populating a tree
- [SOLVED] this.proxy has no properties
- Basic Login form redirect problem
- Cookie
- Can't find element
- Paging Problem: no grid entries are displayed
- defaultUrl not populated with dynamic added TabPanel items?
- Tree ordering
- need help on treeloader
- Clear combo box value
- how to create two grids side by side
- Why Can't I Get My Window to "Show"?
- Table layout Scrolling
- Combo box is auto dropping down?
- Solved by writing a post: GridPanel not displaying JSON data
- Panel autoload problem - help needed.
- Tabpanel: Doesn't opens a Tab a second time
- Dropdown populated field
- Handler - visibility
- links or buttons in grid cells
- Sending store with a Request
- Events - what am I doing wrong?
- Scroll bars in a layout with grids
- Form with columns issue
- Ext 2.0 "Build your own JS" Optimizer
- ComboxBox with animation
- recofigure form during action handling
- Hiding validation errors until submission
- Reuse/Inherit Grid config?
- Preloader
- no scrollbars on my grid
- plz help me in displaying buttons on toolbar
- use Ext.KeyNav for a div in a panel
- is there EXT event and event handler at the page close
- Load combo synchronously?
- [SOLVED]firefox and gridpanel scrolling
- ExtJs in a portlet
- [SOLVED] Image not rendered in IE but FF Yes
- html: don't rendered in IE but in FF Yes
- Access object from outside Ext.onReady{...}
- Form submit under IE7
- Extending GridPanel
- Modify Summary Totals
- Grid Resizing/Layout problem
- Desktop [2.0] getModule
- Editable combobox doesn't fire 'change' on text edit
- [SOLVED] Creating a Panel with hidden toolbars
- Clearing the invalid status upon disable
- Problem populating a tabPanel
- how to check if as a panel is collapsed or expanded
- How to display data in to grid
- reload the tab
- treeloader doesnt load properly- help needed
- Combo editor in only one record of a grid
- adding pagination bar broke application
- panel stretched out
- Changing a button's position?
- Dynamic tab on grid HELP !
- Workaround for invalid JSON
- store has no properties
- Retain Selected Row
- load value for a single component
- Json Grid Empty
- How to selected first item with combo Object after ds.load()
- For LOOP
- Grid summary is not working row wise.......
- setting focus on textbox of window
- fitToFrame of 1.1 to 2.0
- HtmlEditor Scrolling to bottom
- Mysteriously downloading s.gif from Ext JS web site
- changing css if collapse btn
- applying css to window header
- TabPanel:add DOM to tab using autoTabs; then how to change back to normal DOM?
- Cannot run Ext 2.0 documentation
- Problem in downloading XLS file using Ext.Ajax.Request....
- DataView JsonStore and complex data
- Enter
- Subclassing FormPanel, automatically adding events
- [SOLVED] Problem with CardLayout and FormPanel
- Is it possible to embed grid into rowexpander?
- i want Enter key submit my form
- Hiding page content while Ext renders
- How to clear contents from Treeview control
- [SOLVED] How to do small buttons in taskbar??
- ShowToggle() Missing Feature?
- SearchField (TwinTriggerField) incorrect rendering
- how to add multiple children to a tree node?
- button with multiline text, or specific height
- column resizing issue in grid
- two very small problems of text field and check box
- How to submit several FormPanels at once
- how to change scope inline
- ScriptTagProxy & Caching
- grid data from php file
- HELP about grid events such as rowclick
- form validation(urgent)
- help with a viewport
- Don't allow grouping in one column
- record create from JSON
- iframe returns error
- Reusing panel definition
- Question on setting up desktop applications using Ext JS
- Jump (Anchor) Links in Tab Panels
- XTemplates conditional rendering
- Window dynamic contentEl .NET repeater style error in IE
- css for scrollbar,
- Problem getting a record out of JsonStore
- Accordian question: More than one section open at a time?
- checkbox at menu - prevent close menu
- collapsible tabPanel
- A problem with fieldset and column
- Rendering Grid into a Huge page (portal)
- How to use namespaces?
https://www.sencha.com/forum/archive/index.php/f-9-p-26.html?s=2f85bc10229f1eaf3d0b2bf434addaaf
>> Please remind me why we don't use SYNC_INPUT by default
>
> ... it supposedly may delay).

How about this idea... The problem is that the X signal handler (XTread_socket) does stuff that can clobber the heap. Richard's objection to SYNC_INPUT is that C-g may not be read in some loops. But Stefan pointed out that under X, C-g is not processed until we read a QUIT.

My proposal is to make signal handling work like SYNC_INPUT when we are running in X (i.e., when read_socket_hook is NULL). The C-g behavior is no worse than before, and the terminal behavior remains the same as before. We lose nothing, while resolving the memory clobberage bugs.

This change would look something like this:

*** emacs/src/keyboard.c.~1.847.~  2006-01-20 10:05:43.000000000 -0500
--- emacs/src/keyboard.c  2006-01-24 22:12:26.000000000 -0500
***************
*** 6897,6903 ****
    EMACS_SET_SECS_USECS (*input_available_clear_time, 0, 0);

  #ifndef SYNC_INPUT
!   handle_async_input ();
  #endif

  #ifdef BSD4_1
--- 6897,6904 ----
    EMACS_SET_SECS_USECS (*input_available_clear_time, 0, 0);

  #ifndef SYNC_INPUT
!   if (!read_socket_hook)
!     handle_async_input ();
  #endif

  #ifdef BSD4_1

*** emacs/src/lisp.h.~1.547.~  2005-12-11 15:37:36.000000000 -0500
--- emacs/src/lisp.h  2006-01-24 22:13:41.000000000 -0500
***************
*** 1830,1835 ****
--- 1830,1837 ----
            Fthrow (Vthrow_on_input, Qt);      \
            Fsignal (Qquit, Qnil);             \
          }                                    \
+       else if (!read_socket_hook)            \
+         handle_async_input ();               \
      } while (0)
  #endif /* not SYNC_INPUT */
http://lists.gnu.org/archive/html/emacs-devel/2006-01/msg00909.html
Matplotlib recognizes the following formats to specify a color:
- an RGB or RGBA tuple of float values in [0, 1] (e.g., (0.1, 0.2, 0.5) or (0.1, 0.2, 0.5, 0.3));
- a hex RGB or RGBA string (e.g., '#0f0f0f' or '#0f0f0f80'; case-insensitive);
- a string representation of a float value in [0, 1] inclusive for gray level (e.g., '0.5');
- one of {'b', 'g', 'r', 'c', 'm', 'y', 'k', 'w'};
- a name from the xkcd color survey, prefixed with 'xkcd:' (e.g., 'xkcd:sky blue'; case-insensitive);
- one of {'tab:blue', 'tab:orange', 'tab:green', 'tab:red', 'tab:purple', 'tab:brown', 'tab:pink', 'tab:gray', 'tab:olive', 'tab:cyan'}, the Tableau Colors from the 'T10' categorical palette (case-insensitive);
- a "CN" color spec, i.e. 'C' followed by a number, which is an index into the default property cycle (matplotlib.rcParams['axes.prop_cycle']); the indexing is intended to occur at rendering time, and defaults to black if the cycle does not include color.
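All of these specifications normalize to an RGBA tuple; assuming Matplotlib is installed, `matplotlib.colors.to_rgba` shows the conversion directly:

```python
import matplotlib.colors as mcolors

# Each format above resolves to an (r, g, b, a) tuple of floats in [0, 1].
print(mcolors.to_rgba((0.1, 0.2, 0.5)))   # RGB tuple; alpha defaults to 1.0
print(mcolors.to_rgba('#0f0f0f80'))       # hex RGBA string
print(mcolors.to_rgba('0.5'))             # gray-level string
print(mcolors.to_rgba('r'))               # single-letter shorthand
print(mcolors.to_rgba('xkcd:sky blue'))   # xkcd color survey name
print(mcolors.to_rgba('tab:blue'))        # Tableau palette name
print(mcolors.to_rgba('C0'))              # index into the property cycle
```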
"Red", "Green", and "Blue" are the intensities of those colors, the combination of which span the colorspace.
How "Alpha" behaves depends on the
zorder of the Artist. Higher
zorder Artists are drawn on top of lower Artists, and "Alpha" determines
whether the lower artist is covered by the higher.
If the old RGB of a pixel is RGBold and the RGB of the pixel of the Artist being added is RGBnew with Alpha alpha, then the RGB of the pixel is updated to: RGB = RGBold * (1 - Alpha) + RGBnew * Alpha. An Alpha of 1 means the old color is completely covered by the new Artist; an Alpha of 0 means that pixel of the Artist is transparent.
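The compositing rule above is simple enough to check channel-by-channel in plain Python (the function name here is just for illustration):

```python
def composite(rgb_old, rgb_new, alpha):
    """Blend a new color over an old one: RGB = RGBold*(1-Alpha) + RGBnew*Alpha."""
    return tuple(o * (1 - alpha) + n * alpha for o, n in zip(rgb_old, rgb_new))

# Blending blue over white at different Alpha values:
print(composite((1.0, 1.0, 1.0), (0.0, 0.0, 1.0), 1.0))  # (0.0, 0.0, 1.0): fully covered
print(composite((1.0, 1.0, 1.0), (0.0, 0.0, 1.0), 0.0))  # (1.0, 1.0, 1.0): transparent
print(composite((1.0, 1.0, 1.0), (0.0, 0.0, 1.0), 0.5))  # (0.5, 0.5, 1.0): half blend
```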
For more information on colors in Matplotlib, see the matplotlib.colors API. "CN" colors are converted to RGBA as soon as the artist is created.
import matplotlib.pyplot as plt
import matplotlib._color_data as mcd
import matplotlib.patches as mpatch

overlap = {name for name in mcd.CSS4_COLORS
           if "xkcd:" + name in mcd.XKCD_COLORS}

fig = plt.figure(figsize=[4.8, 16])
ax = fig.add_axes([0, 0, 1, 1])

for j, n in enumerate(sorted(overlap, reverse=True)):
    weight = None
    cn = mcd.CSS4_COLORS[n]
    xkcd = mcd.XKCD_COLORS["xkcd:" + n].upper()
    if cn == xkcd:
        weight = 'bold'

    r1 = mpatch.Rectangle((0, j), 1, 1, color=cn)
    r2 = mpatch.Rectangle((1, j), 1, 1, color=xkcd)
    txt = ax.text(2, j + .5, '  ' + n, va='center', fontsize=10, weight=weight)
    ax.add_patch(r1)
    ax.add_patch(r2)
    ax.axhline(j, color='k')

ax.text(.5, j + 1.5, 'X11', ha='center', va='center')
ax.text(1.5, j + 1.5, 'xkcd', ha='center', va='center')
ax.set_xlim(0, 3)
ax.set_ylim(0, j + 2)
ax.axis('off')
Out:
(0.0, 3.0, 0.0, 50.0)
https://matplotlib.org/3.1.3/tutorials/colors/colors.html
A Closer Look at Methods and Classes in C#.net
The preceding examples have been catching exceptions generated automatically by the runtime system. However, it is possible to throw an exception manually by using the throw statement. Its general form is shown here: throw exceptOb; The exceptOb must be an object of an exception class derived from Exception. Here is an example that illustrates the throw statement by manually throwing a DivideByZeroException:
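The example listing itself is missing from this extract; the following is a reconstruction sketch of what such a demonstration looks like (the class name and messages are illustrative, not necessarily the book's originals):

```csharp
using System;

class ThrowDemo {
    static void Main() {
        try {
            Console.WriteLine("Before throw.");
            // Manually throw the exception with the throw statement.
            throw new DivideByZeroException();
        }
        catch (DivideByZeroException) {
            Console.WriteLine("Exception caught.");
        }
        Console.WriteLine("After try/catch block.");
    }
}
```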
Up to this point, we have been using simple class hierarchies consisting of only a base class and a derived class. However, you can build hierarchies that contain as many layers of inheritance as you like. As mentioned, it is perfectly acceptable to use a derived class as a base class of another. For example, given three classes called A, B, and C, C can be derived from B, which can be derived from A. When this type of situation occurs, each derived class inherits all of the traits found in all of its base classes. In this case, C inherits all aspects of B and A.
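A short sketch of such a three-level hierarchy (the class names A, B, and C follow the text; the method names are illustrative):

```csharp
using System;

class A { public void FromA() { Console.WriteLine("Inside A."); } }
class B : A { public void FromB() { Console.WriteLine("Inside B."); } }
class C : B { public void FromC() { Console.WriteLine("Inside C."); } }

class MultiLevelDemo {
    static void Main() {
        C c = new C();
        c.FromA(); // inherited through B from A
        c.FromB(); // inherited from B
        c.FromC(); // defined by C itself
    }
}
```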
Related Functions
char *_strdate(char *buf) char *_strtime(char *buf)
Designing an Enterprise Citrix Solution
S( x ) = A j R j , k ( x ) . . .
Why does the occiput posterior (OP) position frequently cause labor dystocia Because the fetal head must be more flexed and rotate more extensively (135 degrees instead of 90 degrees)
This program displays the following output:
De ning the integral term as the moment of inertia about the xed point, Jo = this can be rewritten as: KE = 1 J w 2. 2 o (11.22)
Figure 15-11. SpeedScreen Multimedia Acceleration settings
8. D. A D in the routing table indicates an EIGRP route. A is incorrect because an I indicates an IGRP route. B is incorrect because an R indicates a RIP route. C is incorrect because an O is an OSPF route. 9. show ip route. Successor routes are populated in the router s IP routing table. 10. show ip eigrp topology.
Cascading Style Sheets 2.0 Programmer's Reference not defined for shorthand properties Percentages n/a Inherited no Applies to all elements Media Groups visual
Copyright 2008 by The McGraw-Hill Companies, Inc. Click here for terms of use.
STS -48
192.168.5.32/29, 192.168.5.40/29
Transporting Voice by Using IP
PART I
In order for Desktop Intelligence to generate the SQL statement correctly as an HTML link, you must set the Object Format to Read As HTML. Select the object and right-click to bring up the following pop-up menu:
and the business case for moving your organization to the cloud later in this chapter..
http://www.businessrefinery.com/yc3/436/89/
Introduction to JSON in Java
JavaScript Object Notation, also called JSON, is a lightweight, text-based, language-independent format for data exchange that is easily read and written by both humans and machines. It represents two types of structures, objects and arrays: an object is an unordered collection of zero or more name/value pairs, and an array is an ordered sequence of zero or more values. The possible values are numbers, strings, Booleans, null, objects and arrays.
Working of JSON in Java
- Consider the below example for representing an object in JSON to describe a person.
- First name and last name take string values; age takes a number value; address is a nested object whose fields use strings and numbers to represent the person's address; the phone number field takes an array value.
Code:
{
"fName": "Shobha",
"lName": "Shivakumar",
"age1": 28,
"address1":
{
"streetAdd": "4, Ibbani street",
"city1": "Bangalore",
"state1": "Karnataka",
"pinCode": 560064
},
"phNumbers":
[
{
"type1": "home1",
"no": "9738128018"
},
{
"type2": "fax1",
"no1": "6366182095"
}
] }
- In order to work with the JSON.simple library, json-simple-1.1.jar must be downloaded and put on the CLASSPATH before we compile and run the JSON programming examples.
- The JSON object and array structures use the object models provided by the JSON.simple application programming interface.
- These JSON structures are represented as object models using the types JSONObject and JSONArray. JSONObject provides a map view for accessing the unordered collection of zero or more name/value pairs from the model, while JSONArray provides a list view for accessing the ordered sequence of zero or more values.
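As a rough plain-Java sketch of those two views (using only java.util collections rather than the json-simple types themselves, so the class and variable names here are illustrative):

```java
import java.util.ArrayList;
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

public class JsonModelSketch {
    public static void main(String[] args) {
        // A JSONObject behaves like a Map: an unordered collection
        // of zero or more name/value pairs.
        Map<String, Object> person = new LinkedHashMap<>();
        person.put("fName", "Shobha");
        person.put("age1", 28);

        // A JSONArray behaves like a List: an ordered sequence
        // of zero or more values.
        List<Object> phoneNos = new ArrayList<>();
        phoneNos.add("9738128018");
        phoneNos.add("6366182095");
        person.put("phNumbers", phoneNos);

        // Values are accessed through the map view / list view.
        System.out.println(person.get("fName"));
        System.out.println(((List<?>) person.get("phNumbers")).get(0));
    }
}
```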
Examples of JSON in Java
Given below are the examples of JSON in Java:
Example #1
Java program to demonstrate encoding of JavaScript Object Notation (JSON) in Java.
Code:
//Importing JSON simple library
import org.json.simple.JSONObject;
//Creating a public class
public class JsonEncode {
//Calling the main method
public static void main(String[] args) {
//Creating an object of JSON class
JSONObject obje = new JSONObject();
//Entering the values using the created object
obje.put("bal", new Double(100.22));
obje.put("number", new Integer(200));
obje.put("check_vvip", new Boolean(true));
obje.put("name1", "sean");
//Printing the values through the created object
System.out.print(obje);
}
}
In the above example, a JSON object obje is created, and values of type double, integer, Boolean and string are added to it and printed as output.
Output:
Example #2
Java program to demonstrate the use of JSON object and JSON array.
Code:
//importing JSON simple libraries
import org.json.simple.JSONObject;
import org.json.simple.JSONArray;
import org.json.simple.parser.ParseException;
import org.json.simple.parser.JSONParser;
//creating a public class
public class JsonDecode{
//calling the main method
public static void main(String[] args) {
//creating an object of JSONparser
JSONParser par = new JSONParser();
//defining and assigning value to a string
String str = "[2,{\"3\":{\"4\":{\"5\":{\"6\":[7,{\"8\":9}]}}}}]";
try{
Object objc = par.parse(str);
//creating a JSON array
JSONArray array = (JSONArray)objc;
System.out.println("The array's second element is");
System.out.println(array.get(1));
System.out.println();
//creating a JSON object
JSONObject objc2 = (JSONObject)array.get(1);
System.out.println("Field \"3\"");
System.out.println(objc2.get("3"));
str = "{}";
objc = par.parse(str);
System.out.println(objc);
str = "[7,]";
objc = par.parse(str);
System.out.println(objc);
str = "[7,,2]";
objc = par.parse(str);
System.out.println(objc);
}catch(ParseException pr) {
System.out.println("The elements position is: " + pr.getPosition());
System.out.println(pr);
}
}
}
In the above example, a JSONParser object par is created, and a string value is defined and assigned. The string is parsed, and a JSON array is created from the result to access the specified elements of the nested structure.
Output:
Example #3
Java program to write JavaScript Object Notation data into a file with name JSON.json using JavaScript Object Notation object and JavaScript Object Notation array.
Code:
//importing java simple libraries and JSON libraries
import java.io.FileNotFoundException;
import java.io.PrintWriter;
import java.util.LinkedHashMap;
import java.util.Map;
import org.json.simple.JSONArray;
import org.json.simple.JSONObject;
public class JSONWrite
{
public static void main(String[] args) throws FileNotFoundException
{
// Json object is created
JSONObject job = new JSONObject();
// Adding data using the created object
job.put("fName", "Shobha");
job.put("lName", "Shivakumar");
job.put("age1", 28);
// LinkedHashMap is created for address data
Map m1 = new LinkedHashMap(4);
m1.put("streetAdd", "4, Ibbani Street");
m1.put("city1", "Bangalore");
m1.put("state1", "Karnataka");
m1.put("pinCode", 560064);
// adding address to the created JSON object
job.put("address1", m1);
// JSONArray is created to add the phone numbers
JSONArray jab = new JSONArray();
m1 = new LinkedHashMap(2);
m1.put("type1", "home1");
m1.put("no", "9738128018");
// adding map to list
jab.add(m1);
m1 = new LinkedHashMap(2);
m1.put("type2", "fax1");
m1.put("no1", "6366182095");
// map is added to the list
jab.add(m1);
// adding phone numbers to the created JSON object
job.put("phoneNos", jab);
// the JSON data is written into the file
PrintWriter pwt = new PrintWriter("JSON.json");
pwt.write(job.toJSONString());
pwt.flush();
pwt.close();
}
}
In the above example, a JSON object job is created. First name, last name and age is written to the JSON.json file using the job object. Linked hash map is created to add the address details which are then written to the file using JSON job object. JSON array object is created to add the phone numbers and linked hash map is used to create different types of phone numbers and finally JSON job object is used to write these phone numbers to the file. Ultimately using print writer the contents are written to the file.
Output:
Output of the above program when the file JSON.json is accessed to see the contents of the file.
Example #4
Java program to read the contents of the file JSON.json, demonstrating the use of the JSON parser, JSON object and JSON array.
Code:
//importing JSON simple libraries
import java.io.FileReader;
import java.util.Iterator;
import java.util.Map;
import org.json.simple.JSONArray;
import org.json.simple.JSONObject;
import org.json.simple.parser.*;
public class JSONRead
{
public static void main(String[] args) throws Exception
{
// The file JSON.json is parsed
Object objc = new JSONParser().parse(new FileReader("JSON.json"));
// objc is convereted to JSON object
JSONObject job = (JSONObject) objc;
// obtaining the fname and lname
String fName = (String) job.get("fName");
String lName = (String) job.get("lName");
System.out.println(fName);
System.out.println(lName);
// age is obtained
long age1 = (long) job.get("age1");
System.out.println(age1);
// address is obtained
Map address1 = ((Map)job.get("address1"));
// iterating through the address
Iterator<Map.Entry> itr = address1.entrySet().iterator();
while (itr.hasNext()) {
Map.Entry pair1 = itr.next();
System.out.println(pair1.getKey() + " : " + pair1.getValue());
}
// phone numbers are obtained
JSONArray jab = (JSONArray) job.get("phoneNos");
// iterating phoneNumbers
Iterator itr1 = jab.iterator();
while (itr1.hasNext())
{
itr = ((Map) itr1.next()).entrySet().iterator();
while (itr.hasNext()) {
Map.Entry pair1 = itr.next();
System.out.println(pair1.getKey() + " : " + pair1.getValue());
}
}
}
}
In the above example, the file JSON.json is parsed into an object objc, which is then cast to the JSONObject job. The first name, last name, age, address and phone numbers are read from the JSON.json file through iteration and printed as the output.
Output:
The output of the above program after reading the contents from the JSON.json file is shown in the snapshot below:
Recommended Articles
This is a guide to JSON in Java. Here we discuss the introduction, working of JSON in Java, and examples. You may also have a look at the following articles to learn more –
https://www.educba.com/json-in-java/?source=leftnav
I have a workbook (Client Fees) with several sheets. Each sheet represents a
different client's auction fees (I sell on consignment on several sites). I
import my invoice each month from the auction sites and convert it to a
spreadsheet format. I need an external formula that will link the matching
text in the new invoice to the same text in the Client Fees workbook. I need
to eliminate having to put the client's names next to each entry in the
invoice, sort it and then add it to the corresponding Client Fees worksheet.
Hope this makes sense! Thanks in advance! Jennifer
https://www.excelforum.com/excel-formulas-and-functions/495388-formula-linking-like-text.html
Basic tutorial (Scala.js 0.6.x)
This page applies only to Scala.js 0.6.x. Go the newer tutorial for Scala.js 1.x.
Scala.js 0.6.x has reached End of Life.

To set it up, add the Scala.js sbt plugin to project/plugins.sbt:

addSbtPlugin("org.scala-js" % "sbt-scalajs" % "0.6.33")
We also set up basic project settings and enable this plugin in the sbt build file (build.sbt, in the project root directory):
enablePlugins(ScalaJSPlugin)

name := "Scala.js Tutorial"
scalaVersion := "2.13.1" // or any other Scala version >= 2.10.2

val textNode = document.createTextNode(text)
parNode.appendChild(textNode)

Using jQuery
Larger web applications have a tendency to set up reactions to events in JavaScript rather than specifying attributes. We will transform our current mini-application to use this paradigm with the help of jQuery. Also we will replace all usages of the DOM API with jQuery.
Depending on jQuery
Just like for the DOM, there is a typed library for jQuery available in Scala.js: scalajs-jquery (there is a livelier fork which you may prefer if it already supports the version of Scala you are using).
Add the following line to your build.sbt:
libraryDependencies += "be.doeraene" %%% "scalajs-jquery" % "1.0.0"
Don’t forget to reload the sbt configuration now:
- Hit enter to abort the ~fastOptJS command
- Type reload
- Start ~fastOptJS again
Again, make sure to update your IDE project files if you are using a plugin.
Using jQuery
In
TutorialApp.scala, remove the imports for the DOM, and add the import for jQuery:
import org.scalajs.jquery._
This allows you to easily access the
jQuery main object of jQuery in your code (also known as
$).
We can now remove
appendPar and replace all calls to it by the simple:
jQuery("body").append("<p>[message]</p>")
Where
[message] is the string originally passed to
appendPar, for example:
jQuery("body").append("<p>Hello World</p>")
If you try to reload your webpage now, it will not work (typically a
TypeError would be reported in the console). The
problem is that we haven’t included the jQuery library itself, which is a plain JavaScript library.
Adding JavaScript libraries
An option is to include
jquery.js from an external source, such as jsDelivr.
<script type="text/javascript" src=""></script>
This can easily become very cumbersome, if you depend on multiple libraries. The Scala.js sbt plugin provides a mechanism for libraries to declare the plain JavaScript libraries they depend on and bundle them in a single file. All you have to do is activate this and then include the file.
In your
build.sbt, set:
skip in packageJSDependencies := false

jsDependencies += "org.webjars" % "jquery" % "2.2.1" / "jquery.js" minified "jquery.min.js"
After reloading and rerunning
fastOptJS, this will create
scala-js-tutorial-jsdeps.js containing all JavaScript
libraries next to the main JavaScript file. We can then simply include this file and don’t need to worry about
JavaScript libraries anymore:
<!-- Include JavaScript dependencies -->
<script type="text/javascript" src="./target/scala-2.13/scala-js-tutorial-jsdeps.js"></script>
Setup UI in Scala.js
We still want to get rid of the
onclick attribute of our
<button>. After removing the attribute, we add the
setupUI method, in which we use jQuery to add an event handler to the button. We also move the “Hello World” message
into this function.
def setupUI(): Unit = {
  jQuery("body").append("<p>Hello World</p>")
  jQuery("#click-me-button").click(() => addClickedMessage())
}
Since we do not call
addClickedMessage from plain JavaScript anymore, we can remove the
@JSExportTopLevel annotation (and the corresponding import).
Finally, we add a last call to
jQuery in the main method, in order to execute
setupUI, once the DOM is loaded:
def main(args: Array[String]): Unit = {
  jQuery(() => setupUI())
}
Again, since we are not calling
setupUI directly from plain JavaScript, we do not need to export it (even though
jQuery will call it through that callback).:
> run [info] Running tutorial.webapp.TutorialApp [error] TypeError: (0 , $m_Lorg_scalajs_jquery_package$(...).jQuery$1) is not a function [error] at $c_Ltutorial_webapp_TutorialApp$.main__AT__V (.../TutorialApp.scala:9:11) [error] ... [trace] Stack trace suppressed: run last compile:run for the full output. [error] (compile:run) org.scalajs.jsenv.ExternalJSEnv$NonZeroExitException: Node.js exited with code 1 [error] Total time: 1 s, completed Oct 13, 2016 3:06:00 PM
What basically happens here is that jQuery (which is automatically included because of
jsDependencies) cannot properly load, because there is no DOM available in Node.js.
To make the DOM available to the tests, use a DOM-capable JavaScript environment such as PhantomJS. Then create the test suite:

import utest._
import org.scalajs.jquery._

object TutorialTest extends TestSuite {

  // Initialize App
  TutorialApp.setupUI()

  def tests = Tests {
    test("HelloWorld") {
      assert(jQuery("p:contains('Hello World')").length == 1)
    }
  }
}
This test uses jQuery to verify that our page contains exactly one
<p> element which contains the text “Hello World”
after the UI has been set up. For this we face another small issue: the button doesn’t
exist when testing, since the tests start with an empty DOM tree. To solve this, we create the button in the
setupUI
method and remove it from the HTML:
jQuery("""<button type="button">Click me!</button>""") .click(() => addClickedMessage()) .appendTo(jQuery("body"))
This brings another unexpected advantage: We don’t need to give it an ID anymore but can directly use the jQuery object to install the on-click handler.
We now define the
ButtonClick test just below the
HelloWorld test:
test("ButtonClick") {
  def messageCount = jQuery("p:contains('You clicked the button!')").length

  val button = jQuery("button:contains('Click me!')")
  assert(button.length == 1)
  assert(messageCount == 0)

  button.click()
  assert(messageCount == 1)

  button.click()
  assert(messageCount == 2)
}

<!-- Include JavaScript dependencies -->
<script type="text/javascript" src="./target/scala-2.13/scala-js-tutorial-jsdeps.js"></script>
http://www.scala-js.org/doc/tutorial/basic/0.6.x.html
In this blog post I want to show you how you can set up a chat with Node.js, socket.io, Angular.js, a flashing title and a loading bar. We will take a look into the lightweight architecture Angular gives you and how to set up the services and controllers the right way. Additionally we will use the loading-bar module to give the user information about what his message is doing after sending it, and we will flash the page title if a new message arrives. The communication is done with socket.io and we use jQuery for the basic JavaScript things. Enjoy!
Check these links to get all these libs:
- Socket.io (please make sure to use the one from this example. This will work at 100%)
- jQuery
Here we go:
Folder-Structure
The folder structure in Angular.js is, in my opinion, very important because it can give you a nice overview of what you are trying to separate and is able to encapsulate files, such as services and controllers, in a well-regulated way.
So I always make an app folder which holds all my Angular logic, and a views folder which encapsulates my views (surprise! ;) ). Within my app folder I have folders for my services and controllers (which are important for the Angular stuff) and for CSS files and 3rd-party scripts, which is simply called "scripts" here. I am trying to do it like I would do namespaces in C#; perhaps you recognized this ;)
The View
Well, to build a view for a chat client you can do anything you can think of, but essentially you only need something which displays the sent messages, plus an input control and a button to send your messages with. That is all for the first shot.
In addition, you need to have all your scripts loaded. In the end it looks something like this:
So what we see here is the head information, which includes everything we need to get things going (don't worry, we will get through most of these files during this post), and the body. The body gives us a div where we specify the controller "DemoController" and bind the messages we have to an HTML list ("li") with the Angular statement "ng-repeat".
Note: You need this "track by $index" as a suffix because only with it can the message array contain the same message multiple times. Without it, the message itself would be the key, and a key cannot occur multiple times. See also here
The form below has a normal submit action to be called when it gets submitted, and we give the form two input boxes (one for the name and one for the text), binding them to the (not yet shown) viewmodel. It also contains, of course, a button to submit the form. And that is it: you are done with your view.
Let's dig deeper and see the underlying controller. But before we do this, we have to instantiate the whole app with app.js. Let's take a look at this file first:
App.js
    var app = angular.module('MessengerApp', [
        'ngRoute',
        'ngResource',
        'ui.bootstrap',
        'chieffancypants.loadingBar'
    ]);

    app.config(function ($routeProvider) {
        $routeProvider
            .when('', {
                controller: 'DemoController',
                templateUrl: './views/index.html'
            })
            .otherwise({ redirectTo: '/' });
    });
Here you can see that we define an app in a variable "app", making it an Angular module called "MessengerApp" (this is what you see in the html opening tag in the screenshot above). Into this we include all the 3rd-party libs I mentioned above (loading bar and so on). The route provider is not that important because we have only one route to show. I won't go into detail here because for this example it would be more theory than practice.
The Controller
As mentioned in the view we have a controller called “DemoController”. And because we instantiated a variable called “app” we can now use it and define a controller on this app:
    app.controller('DemoController',
        function ($scope, chatService, cfpLoadingBar, flashService) {
            var _messages = [];
            cfpLoadingBar.start();
            var socket = io.connect('MyIp:MyPort');

            var _sendMessage = function () {
                cfpLoadingBar.start();
                chatService.sendMessage(socket, $scope.name, $scope.messageText);
                $scope.messageText = '';
            };

            socket.on('chat', function (data) {
                $scope.messages.push(data.name + ': ' + data.text);
                $scope.$apply();
                flashService.flashWindow(data.name + ': ' + data.text, 10);
                $('body').scrollTop($('body')[0].scrollHeight);
                cfpLoadingBar.complete();
            });

            $scope.sendMessage = _sendMessage;
            $scope.messages = _messages;
            $scope.messageText = '';
            $scope.name = '';
        }
    );
Let's take a look at this in detail: first we define a controller which we can call in the view. Because of the dependency injection Angular gives us out of the box, we can get everything we want to use into our controller.
Then we create an array of messages and connect to our socket via socket.io. "_sendMessage" is a private function here which only calls the chatService. The controller also does UI work like starting the loading bar and resetting the message text to an empty string so that the user can enter a new message to send.
The "socket.on(...)" method is like an event handler from socket.io. It is called when a new message is received from the server. So everything we do here is:
- Get the object from the server
- Push this new message into the message array ("$scope.messages.push")
- Give it to the viewmodel and notify the viewmodel that there is something new ("$scope.$apply();")
- Flash the window through a flash service, which we will get to know later
- Scroll the body to the bottom so that the latest message is always shown in the browser
After we have created all our stuff, we are ready to fill the scope object which is given to the view (so it is our viewmodel):
$scope.sendMessage = _sendMessage; $scope.messages = _messages; $scope.messageText = ''; $scope.name = '';
This is the whole controller which is stored under the “Controllers”-folder and included in the view.
The Services
The services are like the base of our application because they are doing the real hard work. Lets take a closer look what these services we included are really doing:
    'use strict';

    app.factory('chatService', function (chatDataService) {
        var chatService = {};

        var _sendMessage = function (socket, name, stringToSend) {
            return chatDataService.sendMessage(socket, name, stringToSend);
        };

        // public interface
        chatService.sendMessage = _sendMessage;
        return chatService;
    });

    app.factory('chatDataService', function () {
        var chatDataService = {};

        var _sendMessage = function (socket, name, stringToSend) {
            socket.emit('chat', { name: name, text: stringToSend });
        };

        chatDataService.sendMessage = _sendMessage;
        return chatDataService;
    });
And here you can see the separation of concerns, which I am a big fan of. I divided the data service from the real service to have a better understanding and a better overview of who is doing what; the single-responsibility principle is used here. So we have the "ChatService" and a "ChatDataService". We want to look at the real work in the "ChatDataService", which actually sends the messages by calling:
socket.emit('chat', { name: name, text: stringToSend });
This line does all the magic, using socket.io to send messages to the server as described here. We generate a new object with the properties "name" and "text" and send what the user entered.
Since the FlashService is only a nice-to-have, I will not describe it in detail, but I want to mention it.
    'use strict';

    app.factory('flashService', function () {
        var flashService = {};
        var original = document.title;
        var timeout;

        var _cancelFlashWindow = function () {
            clearTimeout(timeout);
            document.title = original;
        };

        var _flashWindow = function (newMsg, howManyTimes) {
            function step() {
                document.title = document.title == original ? newMsg : original;
                if (--howManyTimes > 0) {
                    timeout = setTimeout(step, 1000);
                }
            }

            howManyTimes = parseInt(howManyTimes);
            if (isNaN(howManyTimes)) {
                howManyTimes = 5;
            }
            _cancelFlashWindow();
            step();
        };

        flashService.flashWindow = _flashWindow;
        flashService.cancelFlashWindow = _cancelFlashWindow;
        return flashService;
    });
This service offers us two methods:
- flashService.flashWindow = _flashWindow; // flashes the window title with a message, a given number of times
- flashService.cancelFlashWindow = _cancelFlashWindow; // cancels the flashing
To show you how this looks like in the file/folder-structure, see here:
So this was it. This is all you need to get a chat client going. If you include all the Angular files and give the client the correct IP, I am sure you will get the chat going in a second. (Don't forget to start the server.) Thanks for reading.
Fabian
|
SDK Setup
If you have not yet configured the SDK, please refer to SDK Setup.
Create
- Being in animator mode is recommended but not required. You can access animator mode in the top right corner of flash.
- An avatar may be any size under 600 x 450 pixels.
- Background colours will not appear in Glowbe.
- All scenes should depict the avatar facing left. Glowbe will automatically flip the avatar when it moves.
- All scenes run the duration of the frames.
- Glowbe avatars must be under thirty frames per second.
States
States are avatar behaviours that continue until you change them. If you walk or play an action, the avatar will promptly return to the selected state. Other people will also be able to see the state you have chosen.
- State names are case-sensitive and must be capitalised properly.
- States automatically loop.
- Scenes must be named according to the state you wish for it to be.
Some example states:
- state_Default
- state_Sit
- state_Dancing
Walking
Walking scenes are triggered when your avatar moves across the room. Walking scenes can also be unique to specific states. If you have a state without a walking scene, Glowbe will move your avatar across the room without walking.
Some example walks:
- state_Default_walking
- state_Sit_walking
- state_Dancing_walking
Idle
Idle scenes are triggered when you are inactive for more than a few minutes at a time. Idle scenes are associated with specific states and should be named StateName_sleeping (where StateName is the name of the associated state). If you do not add an idle state, Glowbe will simply show the avatar's current state. In both cases, animated ZZZs will be shown coming off the top of your avatar.
Some example idles:
- state_Default_sleeping
- state_Sit_sleeping
Actions
Actions are similar to states, but play once rather than looping, and promptly return to the current state.
Some example actions:
- action_Laugh
- action_Attack
- action_Gasp
Transitions
Transitions are scenes that create a smooth change from one state or action to another. They are played in-between the end and start of a scene or action. You do not need to add transitions to all or even any of your states or actions. Please note that naming is different for walking transitions than it is for state or action transitions.
Some example transitions:
Walk transitions
- state_Default_towalking
- state_Default_fromwalking
State/action transitions
- Default_to_Dancing
- Dancing_to_Default
- Attack_to_Default
- Default_to_Attack
Idle transitions
- state_Default_tosleeping
- state_Default_fromsleeping
Incidentals
Incidentals are a series of scenes which create a state that plays normally most of the time, with alternate animations playing randomly. For example, if you want an avatar to yawn every so often in an idle animation, or want a dance state to mix up three different moves, you may use incidentals. To explain, you split the state into multiple numbered versions and then add percent probabilities that total up to one hundred. The first number represents the scene number, and the second represents the probability.
Some example incidentals:
- state_Default_01:95
- state_Default_02:05
or
- state_Dance_01:33
- state_Dance_02:34
- state_Dance_03:33
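The weighted pick behind incidentals can be sketched outside of Flash. Here is an illustrative Python version (Glowbe itself implements this internally in ActionScript; the scene names and percentages below are just the dance example from the list above):

```python
import random

# Example incidental scenes from above: scene name -> percent probability.
# As in Glowbe's naming scheme, the percentages must total one hundred.
incidentals = {
    "state_Dance_01": 33,
    "state_Dance_02": 34,
    "state_Dance_03": 33,
}

def pick_scene(scenes):
    """Pick one scene name, weighted by its percent probability."""
    roll = random.uniform(0, 100)
    upto = 0
    for name, pct in scenes.items():
        upto += pct
        if roll <= upto:
            return name
    return name  # guard against floating-point edge cases at the 100 boundary

print(pick_scene(incidentals))
```

Most of the time this returns whichever scene carries the largest share, but over many plays the distribution matches the declared percentages.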
Setting Up Your Avatar Code
As a typical Whirled avatar is built in Adobe Flash CS3+, the code for the avatar is written in Flash's Actionscript. Moving beyond bases does not require complex code, and is usually as easy as copy and paste.
- If you have not yet done so, download and setup the Glowbe/Whirled SDK.
- Glowbe avatars require actionscript in order to communicate with the server. This code tells the avatar which way it is facing, and whether or not it is walking. You'll need a basic understanding of how to set it up yourself.
Basic Actionscript
Your avatar file should contain a main scene that is above your state scenes. The main scene contains the actionscript for handling avatar behaviour in Glowbe.
- Open the actions window (F9) in your main scene.
- Paste in this code:
    import flash.events.Event;
    import com.whirled.AvatarControl;

    if (_ctrl == null) {
        _ctrl = new AvatarControl(this);
        _ctrl.setMoveSpeed(n);
        _ctrl.setHotSpot(x, y);
        _body = new Body(_ctrl, this, W);

        addEventListener(Event.UNLOAD, handleUnload);
        function handleUnload (... ignored) :void {
            _body.shutdown();
        }
    }

    var _ctrl :AvatarControl;
    var _body :Body;
3. Replace W with the width of your scene.
4. Click info and hover your mouse on the 'floor' over your avatar's centre of gravity. Note the coordinates.
5. Replace x and y with the coordinates from your info tab.
6. The default move speed in Glowbe is 500; you can insert a number for n to change your move speed. n may not be less than 50.
State Specified Actions
An avatar can be set up to have specific actions available only in certain states. This would be useful if, for example, you wanted to create a 'blow bubbles' action to use in a 'scuba diving' state.
Using this example, let's assume the avatar has no other actions. To set this up correctly, your state_Default scene would contain:
_ctrl.registerActions();
This unregisters all actions whenever your Default scene is triggered, removing "blow bubbles" (and any other actions) from the user's selection. After this, you would add the following to your "scuba diving" scene:
_ctrl.registerActions("blow bubbles");
This would make the action available again.
Non-flipping Scenes
The default Body.as file automatically flips all scenes when an avatar faces right. If you'd like a specific scene to always face the same direction, the following code will reverse the flipping. That scene will always look exactly as drawn no matter how a player walks around.
    // Don't flip
    if (_ctrl.getOrientation() < 180) {
        this.x = 0;
        this.scaleX = 1;
    }
Automatically Flying
This is code that you can use to make your avatar float automatically in certain states. To use it, add the following to your Main scene:
    // Change the 0.2 below to change how high above the floor the avatar
    // will appear once it is moved. 0 is the floor and 1 is the ceiling.
    function fly():void {
        updateRoom();
        flyHeight = (roomSize[1] as int) * 0.2;
        _ctrl.setPreferredY(flyHeight);
    }

    function unFly():void {
        _ctrl.setPreferredY(0);
    }

    function updateRoom():void {
        roomSize = _ctrl.getRoomBounds();
    }

    var flyHeight:int = 0;
    var roomSize:Array = new Array(100, 100, 100);
On transitions to scenes where the avatar should fly, or scenes where the avatar should be flying, add the following code:
fly();
On transitions to scenes where the avatar should land, or scenes where the avatar should be on the ground, add the following code:
unFly();
Automatically Walking on the Ceiling
Similar to the code above, you can make your avatar hang from the ceiling. In this case, your avatar will be upside down and you will need to remember to set your HotSpot at the top of the scene.
    function fly():void {
        updateRoom();
        flyHeight = (roomSize[1] as int) * 1;
        _ctrl.setPreferredY(flyHeight);
    }

    function unFly():void {
        _ctrl.setPreferredY(0);
    }

    function updateRoom():void {
        roomSize = _ctrl.getRoomBounds();
    }

    var flyHeight:int = 0;
    var roomSize:Array = new Array(100, 100, 100);
On scenes where the avatar should hang from the ceiling, add the following:
fly();
On scenes where the avatar should be on the ground, add the following:
unFly();
If you want to have a mix of states that show the avatar on the floor and on the ceiling, you may want to make a transition between the two. Changing the preferred Y won't automatically move the avatar to a new location, so you will need to tell it to move. To do this, you'll want to add these variables on the main scene:
    var myLocation:Array = [0.0, 0.0, 0.0];
    var myDirection:Number = 180;
And you'll want to add this code to the transition to your hanging state (for the transition from the hanging state, replace the 1.0 with 0.0):
    myLocation = _ctrl.getLogicalLocation();
    myDirection = _ctrl.getOrientation();
    _ctrl.setLogicalLocation(myLocation[0], 1.0, myLocation[2], myDirection);
Adding an About Action
This code will allow the user to click an action and view a popup. This action is only visible in the inventory and shop listing screen, the idea being that it's a description built into the avatar file itself. It should not interfere with any version of Body.as or MovieClipBody.as that you may be using.
First, make a new movieClip that shows exactly what you want to see in the About Popup. Click on the Properties of it in the library, and export it for ActionScript. Make sure the class name is AboutPopup.
You'll need to add three chunks of code in your main scene:
Below your import... lines, add this code:
    import com.whirled.ControlEvent;
    import com.whirled.EntityControl;

    // Create a new instance of the AboutPopup movieClip and name it aboutPopup
    var aboutPopup = new AboutPopup();
Just under the setMoveSpeed(...) line, add the following code (You must change "Victory" to include the names of all your actions, for example "Attack","Die","Juggle"):
    // Check where the avatar is loaded. If it is in the shop or inventory
    // viewer, register your actions and the About action.
    _environment = _ctrl.getEnvironment();
    if (_environment == EntityControl.ENV_VIEWER ||
        _environment == EntityControl.ENV_SHOP) {
        _ctrl.registerActions("Victory", "About");
        _ctrl.addEventListener(ControlEvent.ACTION_TRIGGERED, triggerAbout);
    }
- At the bottom of the code (above the var... lines), add:
    function triggerAbout(event:ControlEvent):void {
        if (event.name == "About") {
            // Edit the numbers at the end of this line to be the width and
            // height of your AboutPopup movieClip
            _ctrl.showPopup("Info:", aboutPopup, 200, 75);
        }
    }

    var _environment :String;
Adding an Outline
To create an outline around your entire avatar, add the following code to your main scene:
filters = [new GlowFilter(0x000000, 1, 5, 5, 5)];
To remove the outline, simply add the following code when you want to toggle it off:
filters = [];
State Changed Code
To make certain code run when you change states, make a new layer in the main scene and use this code:
    import com.whirled.ControlEvent; /// Only import ControlEvent if you haven't already

    _ctrl.addEventListener(ControlEvent.STATE_CHANGED, stateChanged);

    function stateChanged(event:ControlEvent):void {
        switch (event.name) {
            case "Fly":
                /// What will happen if we're in the state Fly?
                /// Hotspot changes so the name is higher
                _ctrl.setHotspot(250, 350, 700);
                /// Change speed so it's faster when in this state
                _ctrl.setMoveSpeed(450);
                break;
            default:
                /// What happens if we're in any other state/default?
                /// Hotspot changes back if in any other state
                _ctrl.setHotspot(250, 350, 600);
                /// Speed changes to default
                _ctrl.setMoveSpeed(300);
                break;
        }
    }
Action Triggered Code
To make certain code run when you play actions, make a new layer in the main scene and use this code:
    import com.whirled.ControlEvent; /// Only import ControlEvent if you haven't already

    _ctrl.addEventListener(ControlEvent.ACTION_TRIGGERED, actionTriggered);

    function actionTriggered(event:ControlEvent):void {
        switch (event.name) {
            case "Speak":
                /// What happens when this action is played?
                /// The symbol with the instance name "Face" plays the frame
                /// with the label "speak"
                Face.gotoAndPlay("speak");
                break;
        }
    }
Avatar Spoke Code
To make an action or code play when your avatar has spoken, make a new layer in the main scene and add this code:
    import com.whirled.ControlEvent; /// Only import ControlEvent if you haven't already

    _ctrl.addEventListener(ControlEvent.AVATAR_SPOKE, startTalking);

    function startTalking(event:ControlEvent):void {
        /// This is what the avatar does when it talks
        /// The symbol with the instance name "Face" plays the frame
        /// with the label "Talk"
        Face.gotoAndPlay("Talk");
    }
|
Source code for google.appengine.api.files.records
"""Files API.

.. deprecated:: 1.8.1
   Use Google Cloud Storage Client library instead.

Lightweight record format.

This format implements log file format from leveldb.
Full specification of format follows in case leveldb decides to change it.

The log file contents are a sequence of 32KB blocks. The only exception is
that the tail of the file may contain a partial block.

Each block consists of a sequence of records:

   block := record* trailer?
   record :=
      checksum: uint32  // masked crc32c of type and data[]
      length: uint16
      type: uint8       // One of FULL, FIRST, MIDDLE, LAST
      data: uint8[length]

A record never starts within the last six bytes of a block (since it won't
fit). Any leftover bytes here form the trailer, which must consist entirely
of zero bytes and must be skipped by readers.

Aside: if exactly seven bytes are left in the current block, and a new
non-zero length record is added, the writer must emit a FIRST record (which
contains zero bytes of user data) to fill up the trailing seven bytes of the
block and then emit all of the user data in subsequent blocks.

More types may be added in the future. Some Readers may skip record types
they do not understand, others may report that some data was skipped.

   FULL == 1
   FIRST == 2
   MIDDLE == 3
   LAST == 4

The FULL record contains the contents of an entire user record.

FIRST, MIDDLE, LAST are types used for user records that have been split
into multiple fragments (typically because of block boundaries). FIRST is
the type of the first fragment of a user record, LAST is the type of the
last fragment of a user record, and MID is the type of all interior
fragments of a user record.

Example: consider a sequence of user records:

   A: length 1000
   B: length 97270
   C: length 8000

A will be stored as a FULL record in the first block.

B will be split into three fragments: first fragment occupies the rest of
the first block, second fragment occupies the entirety of the second block,
and the third fragment occupies a prefix of the third block. This will leave
six bytes free in the third block, which will be left empty as the trailer.

C will be stored as a FULL record in the fourth block.
"""

import logging
import struct

import google
from google.appengine.api.files import crc32c

BLOCK_SIZE = 32 * 1024

HEADER_FORMAT = '<IHB'
HEADER_LENGTH = struct.calcsize(HEADER_FORMAT)

RECORD_TYPE_NONE = 0
RECORD_TYPE_FULL = 1
RECORD_TYPE_FIRST = 2
RECORD_TYPE_MIDDLE = 3
RECORD_TYPE_LAST = 4


class FileWriter(object):
  """Interface specification for writers to be used with records module."""


class FileReader(object):
  """Interface specification for readers to be used with records module.

  FileReader defines a reader with position and efficient seek/position
  determining. All reads occur at current position.
  """

  def read(self, size):
    """Read data from file.

    Reads data from current position and advances position past the read
    data block.

    Args:
      size: number of bytes to read.

    Returns:
      iterable over bytes. If number of bytes read is less then 'size'
      argument, it is assumed that end of file was reached.
    """
    raise NotImplementedError()


_CRC_MASK_DELTA = 0xa282ead8


def _mask_crc(crc):
  """Mask crc.

  Args:
    crc: integer crc.

  Returns:
    masked integer crc.
  """
  return (((crc >> 15) | (crc << 17)) + _CRC_MASK_DELTA) & 0xFFFFFFFFL


def _unmask_crc(masked_crc):
  """Unmask crc.

  Args:
    masked_crc: masked integer crc.

  Returns:
    original crc.
  """
  rot = (masked_crc - _CRC_MASK_DELTA) & 0xFFFFFFFFL
  return ((rot >> 17) | (rot << 15)) & 0xFFFFFFFFL


class RecordsWriter(object):
  """A writer for records format.

  This writer should be used only inside with statement:

    with records.RecordsWriter(file) as writer:
      writer.write("record")

  RecordsWriter will pad last block with 0 when exiting with statement scope.
  """

  def __init__(self, writer, _pad_last_block=True):
    """Constructor.

    Args:
      writer: a writer to use. Should conform to FileWriter interface.
    """
    self.__writer = writer
    self.__position = 0
    self.__entered = False
    self.__pad_last_block = _pad_last_block

  def __write_record(self, record_type, data):
    """Write single physical record."""
    length = len(data)

    crc = crc32c.crc_update(crc32c.CRC_INIT, [record_type])
    crc = crc32c.crc_update(crc, data)
    crc = crc32c.crc_finalize(crc)

    self.__writer.write(
        struct.pack(HEADER_FORMAT, _mask_crc(crc), length, record_type))
    self.__writer.write(data)
    self.__position += HEADER_LENGTH + length

  def write(self, data):
    """Write single record.

    Args:
      data: record data to write as string, byte array or byte sequence.
    """
    if not self.__entered:
      raise Exception("RecordWriter should be used only with 'with' statement.")

    block_remaining = BLOCK_SIZE - self.__position % BLOCK_SIZE

    if block_remaining < HEADER_LENGTH:
      self.__writer.write('\x00' * block_remaining)
      self.__position += block_remaining
      block_remaining = BLOCK_SIZE

    if block_remaining < len(data) + HEADER_LENGTH:
      first_chunk = data[:block_remaining - HEADER_LENGTH]
      self.__write_record(RECORD_TYPE_FIRST, first_chunk)
      data = data[len(first_chunk):]

      while True:
        block_remaining = BLOCK_SIZE - self.__position % BLOCK_SIZE
        if block_remaining >= len(data) + HEADER_LENGTH:
          self.__write_record(RECORD_TYPE_LAST, data)
          break
        else:
          chunk = data[:block_remaining - HEADER_LENGTH]
          self.__write_record(RECORD_TYPE_MIDDLE, chunk)
          data = data[len(chunk):]
    else:
      self.__write_record(RECORD_TYPE_FULL, data)

  def __enter__(self):
    self.__entered = True
    return self

  def __exit__(self, atype, value, traceback):
    self.close()


class RecordsReader(object):
  """A reader for records format."""

  def __init__(self, reader):
    self.__reader = reader

  def __try_read_record(self):
    """Try reading a record.

    Returns:
      (data, record_type) tuple.

    Raises:
      EOFError: when end of file was reached.
      InvalidRecordError: when valid record could not be read.
    """
    block_remaining = BLOCK_SIZE - self.__reader.tell() % BLOCK_SIZE
    if block_remaining < HEADER_LENGTH:
      return ('', RECORD_TYPE_NONE)

    header = self.__reader.read(HEADER_LENGTH)
    if len(header) != HEADER_LENGTH:
      raise EOFError('Read %s bytes instead of %s' %
                     (len(header), HEADER_LENGTH))

    (masked_crc, length, record_type) = struct.unpack(HEADER_FORMAT, header)
    crc = _unmask_crc(masked_crc)

    if length + HEADER_LENGTH > block_remaining:
      raise InvalidRecordError('Length is too big')

    data = self.__reader.read(length)
    if len(data) != length:
      raise EOFError('Not enough data read. Expected: %s but got %s' %
                     (length, len(data)))

    if record_type == RECORD_TYPE_NONE:
      return ('', record_type)

    actual_crc = crc32c.crc_update(crc32c.CRC_INIT, [record_type])
    actual_crc = crc32c.crc_update(actual_crc, data)
    actual_crc = crc32c.crc_finalize(actual_crc)
    if actual_crc != crc:
      raise InvalidRecordError('Data crc does not match')
    return (data, record_type)

  def __sync(self):
    """Skip reader to the block boundary."""
    pad_length = BLOCK_SIZE - self.__reader.tell() % BLOCK_SIZE
    if pad_length and pad_length != BLOCK_SIZE:
      data = self.__reader.read(pad_length)
      if len(data) != pad_length:
        raise EOFError('Read %d bytes instead of %d' %
                       (len(data), pad_length))

  def read(self):
    """Reads record from current position in reader."""
    data = None
    while True:
      last_offset = self.tell()
      try:
        (chunk, record_type) = self.__try_read_record()
        if record_type == RECORD_TYPE_NONE:
          self.__sync()
        elif record_type == RECORD_TYPE_FULL:
          if data is not None:
            logging.warning(
                "Ordering corruption: Got FULL record while already "
                "in a chunk at offset %d", last_offset)
          return chunk
        elif record_type == RECORD_TYPE_FIRST:
          if data is not None:
            logging.warning(
                "Ordering corruption: Got FIRST record while already "
                "in a chunk at offset %d", last_offset)
          data = chunk
        elif record_type == RECORD_TYPE_MIDDLE:
          if data is None:
            logging.warning(
                "Ordering corruption: Got MIDDLE record before FIRST "
                "record at offset %d", last_offset)
          else:
            data += chunk
        elif record_type == RECORD_TYPE_LAST:
          if data is None:
            logging.warning(
                "Ordering corruption: Got LAST record but no chunk is in "
                "progress at offset %d", last_offset)
          else:
            result = data + chunk
            data = None
            return result
        else:
          raise InvalidRecordError("Unsupported record type: %s" % record_type)
      except InvalidRecordError, e:
        logging.warning("Invalid record encountered at %s (%s). Syncing to "
                        "the next block", last_offset, e)
        data = None
        self.__sync()

  def __iter__(self):
    try:
      while True:
        yield self.read()
    except EOFError:
      pass
|
A Functioning Stand Alone Python Program
Well, I think I can answer this question most successfully in mime. (mimes incomprehensibly).
To start, open up trivia.py in your text editor.
Looking at the last tute, we can see that we used the cPickle module, so that will need to be imported. We then opened the existing pickle file, and loaded the stored pickle.
So, first, edit the file to add import cPickle after import random:
import random import cPickle
Next, let’s add a ‘constant'[1] to store the filename of our pickle file. Constants are not really all that constant in Python, in that Python will let us change their value later if we choose. However, by convention, if you name a variable with all caps, then it is assigned a value only once (usually at the start of the program). Add this after the imports but before the first definition:
QUESTIONS_FILENAME = 'p4kTriviaQuestions.txt'
Now, we add some code to load the questions into an array. In theory, this code could go anywhere in the file after the variable QUESTIONS_FILENAME has been given a value. However, it’s better if we put it at the end of the file after the function definition. It should not be indented (remember that Python identifies code blocks by indentation):
    fileObject = open(QUESTIONS_FILENAME, 'r')  # note: 'r' for read
    questionsList = cPickle.load(fileObject)    # load the questions
    fileObject.close()                          # finished reading, so close the file
If you go back through the previous tutes (eg the previous tute) you’ll see that this is the same code we used in the interpreter. This shouldn’t be surprising because Python runs the code in this program as if it was being typed into the interpreter.[2] Now that the questions have been loaded, let’s iterate through each of them, asking them in turn:
    for question in questionsList:
        askQuestion(question)
        # note that because we're within the module we can just use the
        # function's name directly without it being qualified by the trivia
        # namespace. If you put trivia.askQuestion, the code would not work.
Save the file [see below for what the code should look like], then go to a console/command line – ie the thing from which you have run the Python interpreter in earlier tutes. Do not run the interpreter itself. Rather, we run the file by typing the following and pressing the ‘enter’ or ‘return’ key:
    > python trivia.py
    Who expects the Spanish Inquisition?
    0 . Brian
    1 . Eric the Hallibut
    2 . An unladen swallow
    3 . Nobody
    4 . Me!
    Enter the number of the correct answer: 3
    Correct! Hooray!
    What is the air-speed velocity of an unladen swallow?
    0 . 23.6 m/s
    1 . 10 m/s
    2 . 14.4 m/s
    3 . What do you mean? African or European swallow?
    Enter the number of the correct answer: 3
    Correct! Hooray!
    Is this the right room for an argument?
    0 . Down the hall, first on the left
    1 . No
    2 . I've told you once
    3 . Yes
    Enter the number of the correct answer: 2
    Correct! Hooray!
Note that this is from the command line – not the Python interpreter. You can tell because there is only one > (you might have something different as your > thingamy) where the interpreter has three >>>
As you can see the two lines:
    for question in questionsList:
        askQuestion(question)
cause the program to iterate through each element in the list questionsList and, for each element, to call the askQuestion() function with that element as a parameter. If you remember, each of these elements is itself a list.
Homework: think about how we would need to change this program to keep track of the player’s score
Bonus points: actually change the program so that it keeps track of the player’s score and prints out the score after all questions have been asked.
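As a hint for the bonus points, here is one possible sketch (in Python 3 syntax, and hypothetical: it assumes askQuestion() is rewritten to return True for a correct answer, and that each question is stored as [text, correct_index, choice, choice, ...] — the exact format from your earlier tutes may differ):

```python
# A sketch only: this cut-down askQuestion() returns True/False so a
# score can be kept. get_answer stands in for input() so the sketch
# can be tested without a real player typing answers.
def askQuestion(question, get_answer=input):
    """Ask one question; return True if it was answered correctly.

    question is assumed to be [text, correct_index, choice1, choice2, ...].
    """
    print(question[0])
    for i, choice in enumerate(question[2:]):
        print(i, '.', choice)
    answer = int(get_answer('Enter the number of the correct answer: '))
    return answer == question[1]

questionsList = [
    ['Who expects the Spanish Inquisition?', 3,
     'Brian', 'Eric the Hallibut', 'An unladen swallow', 'Nobody', 'Me!'],
    ['Is this the right room for an argument?', 2,
     'Down the hall, first on the left', 'No', "I've told you once", 'Yes'],
]

canned_answers = iter(['3', '1'])   # simulate a player: right, then wrong
score = 0
for question in questionsList:
    if askQuestion(question, get_answer=lambda prompt: next(canned_answers)):
        score += 1                  # one point per correct answer
print('You scored', score, 'out of', len(questionsList))
```

The only real changes to the program are making askQuestion() return whether the answer was correct, adding one counter variable, and printing it after the loop.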
Notes:
1. Technically, in the code:
QUESTIONS_FILENAME = 'p4kTriviaQuestions.txt',
QUESTIONS_FILENAME is a variable, and ‘p4kTriviaQuestions.txt’ is the constant. However, in the text I’m referring to the variable as if it was the constant.
2. Actually, when you run a program from the command line Python reads the whole file and pre-compiles it first (or uses an existing pre-compiled version if you haven’t changed the file), then runs the pre-compiled version. Have a look for a file called trivia.pyc in your directory.
Source Code
The complete file should look like this:
'''
The place for a docstring, which was an exercise for you to
complete in the previous tute.
'''
import random
import cPickle

QUESTIONS_FILENAME = 'p4kTriviaQuestions.txt'

# ... (code from the previous tutes, including the askQuestion()
# function, was elided here) ...

fileObject = open(QUESTIONS_FILENAME, 'r')  # note: 'r' for read
questionsList = cPickle.load(fileObject)  # load the questions
fileObject.close()  # finished reading, so close the file

for question in questionsList:
    askQuestion(question)
    # note that because we're within the module we can just
    # use the function's name directly without it being qualified by the
    # trivia namespace.
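A note for readers following along in Python 3 (this aside is mine, not part of the original tute): cPickle has become plain pickle, and files holding pickled data must be opened in binary mode ('rb'/'wb' rather than 'r'). A minimal round trip looks like this:

```python
import os
import pickle
import tempfile

questionsList = [
    ['Who expects the Spanish Inquisition?', 3, 'Brian', 'Nobody'],
]

path = os.path.join(tempfile.gettempdir(), 'p4kTriviaQuestions.txt')

# write the questions, the way the maker program from an earlier tute did
with open(path, 'wb') as fileObject:    # 'wb': pickle data is binary in Python 3
    pickle.dump(questionsList, fileObject)

# read them back, as trivia.py does
with open(path, 'rb') as fileObject:    # 'rb' rather than the tute's 'r'
    loadedQuestions = pickle.load(fileObject)

print(loadedQuestions == questionsList)   # the questions survive the round trip
```

The with-blocks also close the file for us, so there is no need for an explicit fileObject.close().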
https://python4kids.brendanscott.com/2011/05/17/a-functioning-stand-alone-python-program/
Scala does let you put a question mark at the end of a method name if you wrap the method name in backticks:

def `alive?` = true
However, I don’t care for that approach, because you then have to use the backticks when calling the method name.
Examples
This is what happens in the Scala REPL if you try to use a ? character at the end of a method name without using an underscore:
scala> def equal? (a: Int, b: Int): Boolean = { a == b }
<console>:1: error: '=' expected but identifier found.
def equal? (a: Int, b: Int): Boolean = { a == b }
         ^
Clearly that doesn’t work. But when you add an underscore before the question mark you’ll see that you can create and use the method:
scala> def equal_? (a: Int, b: Int): Boolean = { a == b }
equal_$qmark: (a: Int, b: Int)Boolean

scala> equal_?(1,1)
res3: Boolean = true

scala> equal_?(1,2)
res4: Boolean = false
Here’s what the backticks approach looks like:
# define the method
scala> def `equal?` (a: Int, b: Int): Boolean = { a == b }
equal$qmark: (a: Int, b: Int)Boolean

# try using it without backticks
scala> equal?(1,1)
<console>:12: error: not found: value equal
       equal?(1,1)
       ^

# use it with the backticks
scala> `equal?`(1,1)
res6: Boolean = true
Discussion
Using a question mark character at the end of a method name is something I first saw in Ruby. Some people like to use it on method names when a method returns a boolean value.
https://alvinalexander.com/scala/scala-faq-question-mark-end-method-name/
Portability Goes Metro: A CLR and WinRT Love Affair
Part 1: Creating the Portable Library
Part 2: Portability in Silverlight and WPF: a Tale of Type Forwarders
Part 3: Portability in Metro: A CLR and WinRT Love Affair (this post)
Portability in Metro: A CLR and WinRT Love Affair
In this series we’ve covered the portable library and reviewed how it allows you to create assemblies that can be shared without recompilation across multiple platforms. You created a portable assembly with a view model and a command in it, then successfully integrated it in a WPF and a Silverlight project. Now it’s time to code for the future and go Metro.
Create a new C# Metro application using the blank page template. Reference the PortableCommandLibrary project. Open the BlankPage.xaml file and drop in the same XAML you used for WPF and Silverlight. First, fix up a reference:
xmlns:portable="using:PortableCommandLibrary"
Next, add the XAML inside of the main grid:
<Grid.DataContext>
    <portable:ViewModel/>
</Grid.DataContext>
<Button Content="{Binding Text}"
        Command="{Binding ClickCommand}"
        HorizontalAlignment="Center"
        VerticalAlignment="Center"
        Margin="10"/>
Now compile, deploy, and run the application. It will work just as it did for WPF and Silverlight.
First the enabled button appears, then the disabled button (screenshots in the original post).
What’s interesting for this example is that, as you know, when you want to wire up a command you have to use a completely separate namespace from the one in Silverlight. In fact, the namespace implies that you are accessing a WinRT component that is part of the operating system and not even a managed object. How do we pull that off with an assembly that isn’t modified?
To begin the journey, start with the assembly that is referenced directly by the portable library. This is System.Windows.dll, only this time you’ll inspect it in the .NETCore folder, which is the smaller profile allowed for Metro applications. Once again, the assembly contains no implementation. Opening the manifest, you will find a series of type forwarders. This time the ICommand interface is redirected to System.ObjectModel.dll.
What’s next? You guessed it. Pop open the System.ObjectModel.dll assembly and you’ll find the ICommand interface defined there (screenshot in the original post).
So there it is … but there’s a problem. When you specify your own command implementation, you have to reference the Windows.UI.Xaml.Input namespace. So how will this reference work? This is where Metro works a little bit of magic.
It turns out the platform maintains an internal table that maps CLR namespaces to the WinRT equivalents. This allows seamless integration between the types. For example, the CLR may be exposed to the type Windows.Foundation.Uri when dealing with a WinRT component. When this happens, it automatically maps this to the .NET System.Uri. When the CLR passes System.Uri to a WinRT component, it is converted to a Windows.Foundation.Uri reference.
In our case, the code references:
System.Windows.Input.ICommand
The platform will automatically map this to the WinRT counterpart,
Windows.UI.Xaml.Input.ICommand
This is a very powerful feature because it enables compatibility between legacy code and the new runtime with minimal effort on the part of the developer. If your type maps to an actual object that can have an activated instance, rather than just an interface, the CLR will automatically instantiate a Runtime Callable Wrapper (RCW) to proxy calls to the underlying WinRT (essentially COM) component.
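The namespace mapping described above can be pictured as a simple lookup table. This toy Python model (my illustration, not real CLR code; the only two entries are the examples from this post) sketches the idea:

```python
# Toy model of the CLR <-> WinRT projection table described above.
# Only the two mappings mentioned in this post are included.
CLR_TO_WINRT = {
    'System.Uri': 'Windows.Foundation.Uri',
    'System.Windows.Input.ICommand': 'Windows.UI.Xaml.Input.ICommand',
}
WINRT_TO_CLR = {winrt: clr for clr, winrt in CLR_TO_WINRT.items()}

def project_to_winrt(clr_name):
    """Name a CLR reference is projected to when it crosses into WinRT."""
    return CLR_TO_WINRT.get(clr_name, clr_name)   # unknown types pass through

def project_to_clr(winrt_name):
    """Name a WinRT type is exposed as on the .NET side."""
    return WINRT_TO_CLR.get(winrt_name, winrt_name)

print(project_to_winrt('System.Windows.Input.ICommand'))
# -> Windows.UI.Xaml.Input.ICommand
print(project_to_clr('Windows.Foundation.Uri'))
# -> System.Uri
```

The real runtime does far more (activation, RCW creation, marshaling), but the core trick — a bidirectional name mapping applied at the boundary — is what lets the unmodified portable assembly reference System.Windows.Input.ICommand and still bind to the WinRT type.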
The whole portable path looks like this in the end (diagram in the original post).
If you want to see the “projected” types, you can use ildasm.exe with the /project switch. In theory, if you run this against one of the .WinMD files (such as Windows.UI.Xaml.Input.WinMD) located in %windir%\winmetdata, you should see .NET projected types instead of Windows Runtime types. I have yet to get this to work, but if you have, please post the details here.
And that’s it – you’ve learned how to create an assembly that is completely portable between .NET Framework 4.5 (WPF), Silverlight 4.0 and 5.0, and Windows 8 Metro, and learned a bit about how it works by chasing down ICommand under the covers. Hopefully this helps with understanding the library and also with planning out how to map future projects that need to share code between existing implementations and future Metro targets.
https://dzone.com/articles/understanding-portable-library-0
The Economist On The Economics of Sharing 345
RCulpepper writes "The Economist, reliably the most insightful English-language news publication, discusses the economics of sharing, from OSS programmers' sharing time, to P2P users' sharing disk space and bandwidth." True indeed (about The Economist, I have to remember to renew my subscription); one of the main supports for the article comes from Yochai Benkler's latest piece, which is excellent.
Sure... (Score:3, Insightful)
and
Re:Sure... (Score:5, Insightful)
Why?
Folks,
Re:Sure... (Score:2, Funny)
Re:Sure... (Score:2, Funny)
Exactly! Its more like Fox news...
Re:Sure... (Score:3, Funny)
Or what Fox would be like if, instead of being run by right-wingers from top to bottom, they switched positions every fifteen minutes: first have the news as reported by a fascist, then by a communist, then by an anarchist, then by a Randroid, then by a monarchist
Re:Sure... (Score:2)
Hrmmm, let's see. Slashdot is an organization and its primary purpose appears to be reporting news so that the raving hordes have something to gab about. So I don't see why they can't meet minimal standards of conduct and drop the personal notes.
Cognitive dissonance much? (Score:2, Funny)
"News for Nerds, Stuff that Matters"
One of those is incorrect. Plz fix, kthx, bye.
Re:Sure... (Score:4, Insightful)
It's true, the editors are not obligated to remove anything. Or for that matter, check for non-dupes, etc.
BUT... One primary reason for slashdot's success is the high signal to noise ratio. Articles are posted that consistently reach a cohesive demographic. Moderation and Meta-Moderation provide methods of locating user comments which have the highest likelihood of consisting of signal, and not noise.
That being said, I believe the point of the parent post is that we don't care if the editor needs to renew his subscription. We want signal, not noise, and are merely providing feedback to help promote that practice.
Re:Sure... (Score:5, Insightful)
If you have read the Economist and don't realize how important free markets and trade are to them, then there is no hammer big enough to hit you over the head with.
I always think it is a shame that this country (US) doesn't have a party that thinks like the Economist. Bush might like to claim this philosophy, but his strain of Republicanism is too concerned about what you do in the bedroom to fit this model.
Re:Sure... (Score:4, Insightful)
The fiscal conservatives are sticking with the Republican party out of inertia. They should either kick the bible-thumpers out, or jump ship themselves and start a new party under the banner of fiscal responsibility. Shrub and his borrow-and-spend killed whatever lingering illusion that the Republican party represents fiscal conservatives and smaller government.
Re:Sure... (Score:4, Interesting)
Re:Sure... (Score:2)
Personally, I like hearing the editor's opinions once in a while.
Thoughts on sharing (Score:5, Insightful).
There is a word for this (Score:5, Insightful)
For a lot of open source projects and P2P networks, it's not the case that developers and users are really sharing fairly.
It's not greed, since it's about sharing.
I don't know what to call it, fear of leeching or something?
To sum it up: When you share, if you constantly think about if everybody else is sharing as much as you, you'll end up not sharing.
Period.
When you share, you share.
If people leech, don't bother.
If they spam or hog resources, limit the resources with technical solutions, but you still don't bother.
This is the truth of sharing. The more you give, the more you get. Karma is absolute truth, but you don't give a damn about it. If you do, you get in trouble. If you analyse it all, you will stop the process itself.
So what if you share more than the next guy sometimes? If you think about it, worrying about who is on top is really capitalism.
Strange thought, huh?
If you happen to have more / willing to share more, for some time, then just think what an opportunity!
Nice Advertisement (Score:5, Insightful)
Sharing of information has proven very beneficial in science and there is no mention of this in the article. You'd think that this would be one of the first things that would come to mind when one thinks about innovation in ideas.
Re:Nice Advertisement (Score:4, Insightful)
Not to bash science, but... (Score:3, Insightful)
Uh, there's always the potential "loss" of the credit for other discoveries based on that knowledge. Think Rosalind Franklin and the discovery of DNA; "competitors" saw her crucial photograph and some unpublished work, and she's never really gotten some credit she deserved. Even when you're formally releasing whatever information you have, by publishing it, there's a certain loss in that sense -- of control, or something close to it.
The scientific me
Re:Nice Advertisement (Score:4, Interesting)
Only when Science interfaces with Technology do patent laws turn it into a rivalrous good... and the sharing stops. I'm not sure, e.g., that the current efforts to coerce the pharmaceutical companies to report all their trials and results will be successful. If they are, it will continuously require force and oversight, and bribery scandals, because that information has been turned into a rivalrous good by the legal system.
I'm just waiting ... (Score:5, Insightful)
Pre-emptive strike: when The Economist, which is the leading voice of center-right journalism, speaks favorably of F/OSS, it's time to drop the "communism" line and come up with something else, folks.
Communism != Socialism (Score:2)
The US government slapped such a negative connotation to the word "communist" during the Cold War, a connotation that belongs to "socialist". Not one of the countries we were against during the cold war was e
free software != "good" communism (Score:2)
Are we to take your word for it that communism will work if given the proper setting, when all previous attempts to achieve communism failed? By definition [marxists.org], communism does not allow for capitalism to coexist with it. You can have one, but not the other. To call the Internet "the new communism" is to portray the term "communism" as something other than i
Re:Communism != Socialism (Score:3, Interesting)
First, Communism is a form of Capitalism. The reason this probably sounds strange to the average person is because they have stopped thinking of Free Market Capitalism as a form of Capitalism, and think of it as the ONLY form.
Communism is State Capitalism. An economy of administrators and workers, hierarchical, with central control
Re:I'm just waiting ... (Score:3, Interesting)
Journalism started to die off in great masses of professionals in the early 1990s. Today, I can hardly use the term "journalism" since that thing is essentially dead. Many important stories are simply ignored for purely political reasons by men who should know better. And another fat slice of the population finds itself being spoon-fed intellectual
re: I'm just waiting... (Score:3, Insightful)
Re:I'm just waiting ... (Score:2)
No, I really don't think I did -- and I say this as a pretty solid leftie myself. The Economist's biases are plain in their writing, but they're more hyper-capitalist than "far right" in the sense I think that term is usually defined. Note that they've written favorably not only of F/OSS but also of other such "liberal" causes as drug legalization, and have (finally) started to look skeptically on Bush's foreign adventurism. Generally, they favor whatever they think will be good f
Re:I'm just waiting ... (Score:5, Insightful)
While, I agree that The Economist is generally 'pro-capitalist', I would not call them pro-business, but rather pro-competition; a distinction most people miss. Most businesses, ironically enough, dislike competition and are therefore anti-capitalists.
PCB
Re:I'm just waiting ... (Score:3, Insightful)
I think business (aka capitalist) are pro-competition about the things they have to pay (raw materials, services they use) but dislike competition in the product or service the business provides.
They are different kinds of sharing! (Score:5, Insightful)
Imagine a different kind of sharing... (Score:4, Interesting)
I'm not saying it would be easy, but imagine if...
Re:Imagine a different kind of sharing... (Score:2)
Re:Imagine a different kind of sharing... (Score:5, Insightful)
As for security, unless every single thing is bolted down, your office will suddenly need a much larger budget to replace disappearing paper, pens, coffee, computer parts and the like. And considering that a typical PC is completely vulnerable to physical access attacks - would you feel comfortable typing anything secure on a keyboard in an office that is lived in by unknown non-company-employees?
I am not saying that your idea is impossible - however, it will not be easy to implement, especially in a way that office occupants find agreeable.
Re:Imagine a different kind of sharing... (Score:3, Informative)
Re:Imagine a different kind of sharing... (Score:3, Insightful)
Yes there is extra space, but the cost to get homeless people there, maintain the building, ensure those people do not do things that would disrupt during business hours, is quite high. The same reason there is excess food, yet people starve. The cost to get the food to the starving people becomes prohibitive in some areas.
Re:Imagine a different kind of sharing... (Score:2, Insightful)
Re:Imagine a different kind of sharing... (Score:2)
Re:Imagine a different kind of sharing... (Score:2)
Academic Discounts (Score:5, Informative)
WooHoo!
Re:Academic Discounts (Score:2)
How to do it (tm) (Score:4, Insightful)
The Economist is more time-draining than Slashdot (Score:5, Interesting)
It's a very interesting magazine though if you can find the time to commit to it.
I wish! (Score:2, Funny)
Re:I wish! (Score:2)
Sorry, I can't shed tears for ya.
The True Economics of OSS (Score:5, Interesting)
The analogy runs as follows. Suppose that a street has a bunch of bun vendors and a bunch of people who sell sausages to put in the buns (wow, talk about decoupled designs). People might be willing to spend $1.50 for a bun plus a sausage - nominally $1 for the sausage and $0.50 for the bun.
Now, suppose that someone in the sausage industry comes up with a way of "open-sourcing" buns - now buns are free! This happening, you've got a bunch of customers wandering around buying sausages with an extra $0.50 in their pockets. They were clearly willing to spend more on the sausage+bun combination, so maybe you can jack up your price to $1.10 or $1.20 (very unlikely you'll be able to go to $1.50).
Of course, like all simplistic analogies, this depends on a lot of assumptions. For instance, we
expect that the customer won't go off and buy something new (a 50 cent Coke, maybe).
Now, think about companies that have major OSS support. The best example is IBM - which makes its money off hardware and services. Are they the sausage vendors in this case?
I don't know if this is nonsense, but it's an interesting theory. If anyone has a good counter-argument, let's hear it. If anyone has a silly pun about "open-saucing" hot dogs, well, remember that I'm a computer scientist and can generate an enormous static charge from your keyboard to Get You.
Re:The True Economics of OSS (Score:3, Interesting)
OSS can artificially manufacture more wealth in the long-term, much like the stock-market does.
Think of using (and in turn contributing back to) OSS tools like getting free hammers and nails so long as you help improve the design of hammers, nails, and other industry standard tools you use for free. Within the context of using those tools to build things, general practitioners are going to come up with gripes and improvements. I thin
Re:The True Economics of OSS (Score:2)
I think the analogy isn't quite accurate. First, the wealth creation is "natural" (ie, by this I mean that wealth is created by improving the real value of stuff). Ie, many OSS groups are building tools that people use and increasing the value of those peoples' labor and services. Stock markets provide a more efficient means of matching people with available capital to those who need that capital to build stuff
Re:The True Economics of OSS (Score:2)
Poor choice of the word "artificial" on my part.
I believe OSS can manufacture wealth in the manner you describe - by making the delivery of services and products a more productive activity through smarter/better/more effective tools.
I guess why I used artificial is it seems counter-intuitive until you realize the gains possible. You're giving your employees' productivity away, but in the end, if everyone is doing that, all projects start off closer to completion because of the va
Re:The True Economics of OSS (Score:3, Insightful)
Re:The True Economics of OSS (Score:4, Funny)
Are you saying you're a real hot-dog programmer?
Re:The True Economics of OSS (Score:3, Funny)
1) Communist Solution.
The government owns all the buns and all the sausages, so you use them to bribe officials to escape black-marketing charges.
2) Socialist Solution.
The government takes it off you by way of increased taxation, to pay social security to the unemployed bun vendors.
3) Capitalist Solution
The now unemployed bun vendors become sausage vendors, thereby increasing the supply so that you now get 2 for the price of 1 and die an early death from obesity related di
Problem of cost/return (Score:3, Interesting)
Stop editorializing article summaries, please. (Score:2, Insightful)
Gee, what an unbiased way to present an article for discussion.
True indeed
Coming to a conclusion in an article summary stifles discussion. Stop doing that.
Don't worry (Score:3, Funny)
This would only be a problem if everyone RTFA. However, as that is rarely a problem, there is nothing to worry about.
Name-dropping for fun and profit (and showing off) (Score:2)
Different motivations for sharing (Score:5, Insightful)
It should also be noted that not all sharing is good. [go.com]
That works until.... (Score:5, Funny)
This works until SCO shows up and claims ownership of the lentils found in every bowl served, and demands that each soup-eater pay them $699.
Re:Different motivations for sharing (Score:3, Insightful)
The idea behind that story was that everyone had enough to survive, but nobody had enough variety to make anything good. When they all threw it into the same pot, they still had plenty of food, but now it tasted better. No more food, no less food, just better food.
Fortunately for Linux, there's plenty of "soup" to go around. Our bowl can be indefinitely reple
Historical note (Score:3, Interesting)
In the days before canning, armies would starve when on the move unless they could steal food from villages that they passed. If they did, the villagers would starve.
So, three soldiers show up in a village... of course the villagers don't know that there are only three, and they don't know that they CAN'T just steal all their food. So they pretend that they've already been robbed, and don't have any left. The stone soup is a con game to allow people to safely contribute without being
Re:Different motivations for sharing (Score:2)
Have to agree (Score:2, Insightful)
Article about nothing (Score:4, Insightful)
Well, here are my 0.02:
Why is sharing important:
It breaks down traditional corporate moloch, it teaches that anarchy-like goal-driven structures are perfectly viable and can outperform hierarchical companies.
It teaches that information must be free (both as in beer and as in freedom); if it isn't, there will always be ways to free it.
It practically demonstrates that acting selfishly is not the way to go (try throttling your bt upload to 1kb/s, see results
All in all, it's a kind of hippie-like philosophy crossed with a viable economy (one that's not based around money, but around ideas).
Re:Article about nothing (Score:3, Informative)
If you read the article it describes that people are acting their own self interest. Donation of time and intellectual resources are not purely charitable, people do them for personal gain (fame and recognition by peers, experience that increases their value in paying jobs, and enjoyment)
"The reason often seems to be that writing ope
Re:Article about nothing (Score:2)
Well, if an employer asks "Can you do X," not only can you explain how to do it, you have an actual implementation of that skill to point to, source code and all. It's not necessarily just what comes up in a job application, it's the new skills developed overall.
Plus it does not explain why people share THAT much - it's not about tangible resources (average filesharer uses most of his bw fo
The economies of sharing V1.0 (Score:5, Funny)
V1.0 - I have axe, you have club, therefore you share everything with me.
V2.0 - I am the government, therefore you share part of everything with me and I decide who to share with.
V3.0 - I have fileserver, you have connection, therefore I share everyone else's stuff with you whether they gave me permission or not.
V4.0 - I have everything you have. You have everything I have. Everyone has shared everything. Life is meaningless.
Just let me know... (Score:4, Funny)
Here, you can borrow mine...
Article Quote.... (Score:2)
Step carefull around the ravenous wolves.
Insubstantial (Score:2, Insightful)
The author barely even mentions what Open Source is, does not analyse the reasons for Open Source, and gives two or three obvious explanations. Then he attempts to compare Open Source programming with file sharing and SETI@Home. It is wrong to compare these two examples, since they're based on unused resources. Spare time is not an unused resource.
They're only just now getting it? (Score:3, Funny)
In Reference to Slashdot (Score:3, Insightful)
In this case, the "shareable good" involved is the time, education, and effort of the users who participate. It is combined with a public good--existing information--to form what is also itself a public good--a topical news and commentary source.
The question tho' is whether the employers of many
I am not opposed to the OSS model but I would like to see more analysis of its true economic cost as I was always taught "there is no such thing as a free lunch." The fact that it does seem to produce a superior product is all the more reason to better understand its true costs.
Professor Benkler's 10/22/2004 article is a good read. Thanks for posting a reference to it.
Hopefully this was worth more than $.02
Free Open Source is Programmers' "best interest" (Score:5, Insightful) acquire at the least cost. The efficiency environment, long-run interest is naive of them.
OSS is in our best interest (Score:3, Interesting) acquire at the least cost. The efficiency environment, long-run interest is naive of them.
Economist is better than the rest... (Score:4, Insightful)
If the only language you read is English, and you have any ability at all to filter editorial statements out of news stories, you should subscribe to The Economist -- and I say this even though I am a registered pinko commie bastard.
Re:Economist is better than the rest... (Score:5, Interesting)
Re:Economist is better than the rest... (Score:2, Funny)
Ah, an open source developer
Re:The rest are just worse. (Score:2, Funny)
Re:The rest are just worse. (Score:2)
Re:The rest are just worse. (Score:4, Insightful)
For most of the people doing reviewing, the Economist is really very fair and reasonable in its reporting.
Is it possible you are just politically marginalized, and that your views differ significantly from the rest of ours?
Is there a publication you recommend? That isn't filled with lunatic fringe ravings? Seriously, I would like to try it.
Re:The rest are just worse. (Score:3, Interesting)
Re:in-crowd (Score:5, Insightful)
Re:in-crowd (Score:2, Insightful)
Re:in-crowd (Score:5, Interesting)
I think that you probably haven't really read too much of it.
Re:in-crowd (Score:2, Insightful)
Liberal, actually (Score:5, Insightful)
After many years of reading the Economist, I agree with their self-assessment.
Having said that, I've never been comfortable with the 1-dimensional right/left political categorizations. People and politics are far more complicated than that.
Re:Liberal, actually (Score:2)
Liberals and freedom (Score:5, Insightful)
Later, that became associated with fighting for other sorts of freedom, such as civil rights for minority groups.
The association of "liberal" with "poor and minority groups" has led the term somewhat away from its original meaning. Over time, it's become associated with improving the lot of poor people even where they're not activily being oppressed but merely poor: welfare, medical care, affirmative action, etc.
Liberals argue that the causes of poverty are side-effects of less obvious rights violations by rich people and companies. They'd argue that a company which employs many people in a town has an obligation to those people to continue to employ them, even when that factory is no longer profitable. That obligation by the company is the right of the people.
I wouldn't say that the Economist is "all for" corporate tyranny. They'd say that a factory which isn't profitable cannot employ those workers because there simply is no money to pay them. That strikes them as simple level-headedness: you cannot pay workers from nonexistent money.
But they do hold the company responsible for its non-economic externalities. If the company is dumping cadmium into the water and poisoning those workers, even if it's proftable for the company it is wrong to do so. Simple economics will not prevent that, so they recommend well-chosen and well-enforced government regulation.
I often find myself disagreeing with them. Their notion of free-market capitalism often assumes frictionless changes that are untrue. If a company moves a factory from Flint, Michigan to Bangladesh, yes, I suppose it does improve the US economy by allowing Americans to purchase the goods more cheaply, thus freeing up their capital for investment in other things.
But the people of Flint, Michigan don't realize those improvements directly; they don't immediately acquire programming skills and move to San Francisco to get better jobs. Nor do they disappear. Even if the simple "invisible hand" argument works for the good of the country as a whole, it can cause vicious harm in microeconomic terms, and those are externalities which shouldn't be ignored.
Re:Liberals and freedom (Score:4, Insightful)
"[Liberals would] argue that a company which employs many people in a town has an obligation to those people to continue to employ them, even when that factory is no longer profitable."
Some "liberals" might, especially communistic/collectivists at the far end of the "liberal" spectrum, who have wrapped around to some kind of "national socialism" or something. But most "liberals" base the requirements of a corporation's obligation to its community on the exchange of value between them, and explicit agreements. Places like Flint, Michigan were built on government subsidies to create factories, from police security to education to tax breaks to actual handouts. In fact, the people who usually complain most about a company "taking their jobs away" are usually found voting for Republicans, calling themselves "conservatives" because of issues like abortion, homosexuality and evolution (AKA minding someone else's business). That kind of "right to work" at the expense of actual business is rarely heard from liberals, though lawyers, doctors, and other rich people still think of it as their right.
Re:in-crowd (Score:5, Insightful)
Actually, they're pretty moderate and reasonable with their analyses, they advocate market solutions for problems that a market can solve i.e. most things.
They go with the least-worst economic system (free-market with a small dash of government regulation to stop the worse excesses of capitalism) since that appears to have won the argument so far. So they obsess about what Greenspan says, but isn't that their job? That's the "Economist" bit in "The Economist".
And hindsight is a wonderful thing. Nobody else was worrying about the Taliban at the time, either.
Re:in-crowd (Score:2)
Re:in-crowd (Score:4, Insightful)
Well, gee, with the collusion of apathy and cheerleading amongst news sources, it becomes difficult for the common man to become educated enough about things like future Talibans in order to become concerned.
The American CIA is similarly insulated from public worry. People commonly go about their lives utterly unconcerned about the documented offenses of this agency. In part, that's because of the press blackout.
The old sentiments are quite correct on this matter: Without a free (or diverse) press, our democracies simply cannot function.
My favorite Economist article (Score:2)
I remember an Economist article which, essentially, attacked parents for the drain on productivity that they caused: time off work, annoyances to more productive non-parents, etc. The article's argument was that highly productive single people shouldn't in any way have to share the costs of society's need to raise children, right down to not having to put up with children in public spaces.
I came away with the impression that the article's author would love it if all new children were banned, and we had
Re:My favorite Economist article (Score:2)
I ask since they've had numerous articles talking about the demographic catastrophes coming for many first-world economies as the workforce ages and fewer and fewer workers need to pay for more and more retirees, so this article seems really out of sync.
Independent (Score:5, Interesting)
As to the "right wing propagandistic tool of international corporatism". Wow, good line if it's some sort of attempt at ironic hip retro-sixties radical leftism, but it doesn't have much to do with...well, reality.
The Economist supported Kerry, after all, in the US elections. They have been quite positive about Linux for a long time. They are being sued by Silvio Berlusconi, Italy's right wing leader, because of their scathing attacks on his corruptness. This is hardly the sort of independent thought and writing that one would expect from a "propagandistic tool".
Re:in-crowd (Score:2)
Re:in-crowd (Score:2)
Re:in-crowd (Score:2)
Re:in-crowd (Score:3, Insightful)
[In response to an anonymous reporter's question "Why do you rob banks?"]:
"Because that's where the money is." - Willie Sutton
[from the bottom of the current Slashdot page in which I'm submitting this post]:
"I don't have any solution but I certainly admire the problem. -- Ashleigh Brilliant"
What's wrong with the Economist.... (Score:3, Interesting)
Re:in-crowd (Score:2)
Re:in-crowd (Score:2)
Re:Don't agree with me? Fascists! Nyah nyah nyah (Score:2)
Re:British WSJ (Score:3, Funny)
import/export american used cars
Search Engine Stat:
Links by hjacques
- import/export american used cars
import of american certified used cars to lebanon.
Other businesses in this category:
- Byout online
Byoutonline is aimed mainly at the private property sales and rental markets. Here you can advertise your Lebanese property For Sale or For Rent 100% free, no hidden costs, no extras.
- FARHAT Bakery Equipments
Lebanon Manufacturer & industry for automatic lines & ovens for arabic bread, lebanese bread, manakish lahmajin pizza pies, atayef sweets, kaak breadsticks grissini confectionaries, pita pocket bread, naan indian roti bread as well as lavash tannour bread.
- website design for free
There is a common misconception that getting a web presence costs an arm and a leg because you have to hire a qualified programmer. Also, most people do not realize that you can get a complete website design for free!
- Siyartak.com
Find cars, classic cars, motorcycles, ATVs, trucks, boats, yachts, number plates, rental cars... for sale, all in Lebanon. Your automotive directory classifieds SIYARTAK.com. Get your listing added.
- Apartment For Sale, Real Estate in Lebanon
For all your real estate needs contact us, and we will help you to buy or sell your property faster at good prices. With us you can find valuable real estate information for buyers and sellers. We are real estate agents and provide our expert services to help you in buying or selling your homes, villas, flats, apartments, buildings, or any property in Beirut as well as the rest of Lebanon.
Monitoring ClickHouse on Kubernetes With Prometheus and Grafana
In this article, we discuss how to perform ClickHouse monitoring on Kubernetes with Prometheus and Grafana.
The ClickHouse Kubernetes operator is great at spinning up data warehouse clusters on Kubernetes. Once they are up, though, how can you see what they are actually doing? It’s time for monitoring!
In this article, we’ll explore how to configure two popular tools for building monitoring systems: Prometheus and Grafana. The ClickHouse Kubernetes operator includes scripts to set these up quickly and add a basic dashboard for clusters.
Monitoring Architecture
Let’s start with a quick look at how monitoring works in a Kubernetes ClickHouse installation. Here’s a picture of the main moving parts.
All of the key components run in Kubernetes. Starting at the top, the ClickHouse Kubernetes Operator deploys and manages ClickHouse clusters on Kubernetes.
The next component is Prometheus, a time-series database that stores metrics on all components we are observing. It fetches metrics on ClickHouse nodes from the Metrics Exporter. This operator component implements a Prometheus exporter interface. The interface exposes data in a standard format that Prometheus understands.
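The exposition format Prometheus scrapes is plain text, one sample per line. As a rough sketch of how such a line is rendered (the metric name and label below are illustrative, not the operator's actual output):

```python
def format_sample(name, labels, value):
    """Render one sample in the Prometheus text exposition format."""
    # labels are sorted so the output is deterministic
    label_str = ",".join('%s="%s"' % (k, v) for k, v in sorted(labels.items()))
    return "%s{%s} %s" % (name, label_str, value)

# e.g. a gauge exported for one ClickHouse pod (names are made up)
print(format_sample("chi_clickhouse_metric_Query", {"hostname": "chi-0-0-0"}, 3))
# → chi_clickhouse_metric_Query{hostname="chi-0-0-0"} 3
```

A real exporter serves many such lines over HTTP on a `/metrics` endpoint, which is all Prometheus needs to scrape a target.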
The final component is Grafana. It serves up dashboards to web browsers. The dashboards fetch data using queries back to the Grafana server, which in turn calls Prometheus.
The ClickHouse operator tracks cluster configurations and adjusts metrics collection without user interaction. This is a handy feature that helps reduce management complexity for the overall stack. Speaking of the stack, let’s now dive in and set it up.
Getting Started
Scripts used in this demo come from the ClickHouse Kubernetes operator project on GitHub. Your first step is to grab the current source code. You won’t need to build anything because the actual services are prebuilt container images.
git clone
cd clickhouse-operator
(Note: If you have already cloned the repo, run ‘git pull’ to ensure it is up-to-date.)
All further steps will require a running Kubernetes cluster and a properly configured kubectl that can reach it. If you don’t have Kubernetes handy, you can take a break and install Minikube. If you are using an existing cluster, you will need system privileges to create namespaces and deploy to the kube-system namespace.
Set Up ClickHouse on Kubernetes
Users already using the ClickHouse operator to run clusters on Kubernetes can skip this section. If you are starting from a clean Kubernetes installation, read on.
Install ClickHouse Operator
The quickest way to install the ClickHouse operator is to apply the .yaml deployment file as shown below:
kubectl apply -f deploy/operator/clickhouse-operator-install.yaml
kubectl get all -n kube-system --selector=app=clickhouse-operator
Once the second command shows a running clickhouse-operator pod, you are ready to proceed.
Install Zookeeper
Zookeeper is necessary for ClickHouse replication to work. You can quickly install it with the commands below; the second command checks that the Zookeeper pods are running (this example just uses one node).
deploy/zookeeper/quick-start-persistent-volume/zookeeper-1-node-create.sh
kubectl get all -n zoo1ns
Create a ClickHouse Cluster
We can now start a ClickHouse cluster, which will give us something to look at when monitoring is running. For this example, we will install a cluster with 2 shards and 2 replicas each. Here’s the command to start the cluster and see the pods.
kubectl apply -f docs/chi-examples/17-monitoring-cluster-01.yaml
kubectl get all
You can access ClickHouse directly by running
kubectl exec, as in the following example:
kubectl exec -it chi-monitoring-demo-replcluster-0-0-0 clickhouse-client
Welcome to your cloud-native data warehouse! We can now proceed with the installation of the monitoring stack.
Install Prometheus
Cd to the deploy/prometheus directory. Run the following commands to install Prometheus and check that it is running. You will see an operator for Prometheus as well as a couple of pods.
./create-prometheus.sh
kubectl get all -n prometheus
The deployment script configures the ClickHouse operator as a target source of metric data. You can confirm Prometheus sees the operator using a curl command + jq to fetch active targets. The following commands expose the Prometheus listener port and do exactly that.
kubectl -n prometheus port-forward service/prometheus 9090 &
curl -G | jq '.data.activeTargets[].labels'
Rather than using curl, you can also open the Prometheus targets page in a browser to see the same information. Don’t forget to run the port-forward command first.
Install Grafana
Grafana is the last component. Cd to the deploy/grafana/grafana-with-grafana-operator directory. Use the following commands to install Grafana and check that it is running. Like Prometheus, you will see an operator and a Grafana pod after a successful installation.
./install-grafana-operator.sh
./install-grafana-with-operator.sh
kubectl get all -n grafana
An export command at the beginning of the install script causes the operator to load the ClickHouse plugin. This is not strictly necessary for our demo, but it allows you to create ClickHouse data sources and load dashboards of your own that talk directly to ClickHouse servers. To reach the Grafana UI, forward its service port:
kubectl --namespace=grafana port-forward service/grafana-service 3000
The Grafana server user and password are admin/admin. You will want to change this before making Grafana publicly accessible.
Using the Default Dashboard
The Grafana installation script automatically installs a Prometheus-based dashboard for monitoring ClickHouse. This is a nice touch since it means we can now see ClickHouse metrics without any special effort.
Next, point your browser at the Grafana service. You’ll see a dashboard like this:
Press the Altinity ClickHouse Operator Dashboard link, and you will be rewarded by something like the following:
If you are already familiar with Grafana, you will find the default dashboard easy to understand. If you are a new Grafana user, on the other hand, here are a few things to try:
Select different values on the time drop-down on the upper right to see metrics at different time scales.
Use the selectors at the top of the screen to zero in on data for specific Kubernetes namespaces, ClickHouse installations, and specific servers.
Click the name of any panel and select View to look at individual metrics in detail.
Another fun exercise is to use the ClickHouse operator to add a new cluster or scale an existing cluster up or down. Changes to metrics take a few minutes to percolate through Prometheus. Press the refresh button (or reload the screen) to see the changes appear both in panels as well as the selector drop-down at the top of the screen.
Where to Go Next
As noted above, this blog post is the beginning of ClickHouse monitoring, not the end. Here are a few things you can do.
Put Some Load on the System
Monitoring idle systems is dull. You can put a load on the system as follows. First, connect to one of the pods.
kubectl exec -it chi-monitoring-demo-replcluster-0-0-0 bash
Next, create a source table on all nodes on the cluster and a distributed table over it.
clickhouse-client
ClickHouse client version 19.16.10.44 (official build).
. . .
:) CREATE TABLE IF NOT EXISTS sdata2 ON CLUSTER '{cluster}' (
DevId Int32,
Type String,
MDate Date,
MDatetime DateTime,
Value Float64
) Engine=ReplicatedMergeTree('/clickhouse/{cluster}/tables/{shard}/sense/sdata2', '{replica}', MDate, (DevId, MDatetime), 8129);
:) CREATE TABLE IF NOT EXISTS sdata_dist ON CLUSTER '{cluster}'
AS sdata2
ENGINE = Distributed('{cluster}', default, sdata2, DevId);
Finally, put some load on the system by connecting to any ClickHouse pod and executing a clickhouse-benchmark command like the following:
clickhouse-benchmark <<END
SELECT * FROM system.numbers LIMIT 10000000
SELECT 1
SELECT count() Count, Type, MDatetime FROM sdata_dist GROUP BY Type, MDatetime ORDER BY Count DESC, Type, MDatetime LIMIT 5
INSERT INTO sdata_dist SELECT number % 15, 1, toDate(DateTime), now() + number as DateTime, rand() FROM system.numbers LIMIT 5000
END
Enhance the Dashboard
You can log in to Grafana with user “admin” and password “admin.” At this point, you can edit the dashboard to add new features or move things around.
Build Your Own Monitoring Dashboards
The default dashboard is a good starting point that shows examples of different types of Prometheus queries to access exported ClickHouse data. Make a copy and add your own metrics. The dashboard JSON source is located in clickhouse-operator/deploy/grafana/grafana-with-grafana-operator. You can also export the JSON definition directly from the Grafana server.
Build Dashboards That Access ClickHouse
We’ll cover this topic in greater detail in a future blog article, but here are a few tips to building dashboards that access ClickHouse directly.
Step 1: Create a user with network access enabled from other namespaces. You’ll need to add a section to your ClickHouse cluster resource file that looks like the following:
users:
# User for grafana
grafana/password: secret
grafana/networks/ip: "::/0"
Step 2: Ensure that your Grafana service has the ClickHouse plugin loaded. The installation procedure described above does this automatically as part of the installation. (Just in case you have forgotten the user/password for the server, it’s admin/admin.)
Step 3: Create data sources to access ClickHouse servers. If you are going through the main load balancer, use the service URL of the cluster. The typical DNS name pattern is cluster_name.namespace.svc.cluster.local for Minikube and kops. Other Kubernetes distributions may differ.
Conclusion
ClickHouse Operator, Grafana, and Prometheus work well together to enable monitoring of ClickHouse installations running on Kubernetes. This blog article shows how to set up the default monitoring stack in a Kubernetes installation that you fully control.
Of course, there are many other types of monitoring, monitoring tools, and environments to monitor. We are planning more articles to address them, so stay tuned. In the meantime, check out our recent ClickHouse Monitoring 101 webinar for a soup-to-nuts overview.
And if you find anything wrong with the scripts described here, log an issue on the ClickHouse operator project in GitHub. Even better, fix it yourself and file a PR. We love improvements from the open-source community!
Published at DZone with permission of Robert Hodges . See the original article here.
Hi, I have a class that contains a property of a user-defined data type. I have created an instance of that class. When I bind an object of that class to a DetailsView, it shows all properties except the user-defined data type property. Here is the sample code.
public class Customer
{
    public string CustomerName { get; set; }
    public int Age { get; set; }
    public Address CustomerAddress { get; set; }
}

The Address class looks like:

public class Address
{
    public string Line1 { get; set; }
    public string Line2 { get; set; }
    public string City { get; set; }
}

Creating an object of the Customer class:

var cust = new Customer { ... };

<Fields>
    <asp:BoundField ... />
    <asp:BoundField ... />
    <asp:BoundField ... />
</Fields>
</asp:DetailsView>
The above code is not displaying the customer address. Can anyone help me?
OK, we are being introduced to classes in my C++ class.
Here are my .h files:
Which looks ok.....

Code:
// "str.h" file
#ifndef STR_H
#define STR_H

class Str {                         //create class 'Str'
    char s[32];                     //define array of 32 characters
public:
    Str(char *p = "");              //ctor (default)
    int operator<(Str &str);        //comparison operator
    friend istream &operator>>(istream &in, Str &str);
    friend ostream &operator<<(ostream &out, Str &str);
};
#endif

// "student.h" file
#ifndef STUDENT_H
#define STUDENT_H
#include <iostream.h>
#include "str.h"

class student {
    Str name;
    int grade;
public:
    student(char *n = "", int g = 0);
    friend istream &operator>>(istream &in, student &student);
    friend ostream &operator<<(ostream &out, student &student);
};
#endif
My main .cpp file returns errors; I am posting just one function for brevity:
Code:
void Sortbygrade (student s, int count)
{
    for (int i = 0; i < count-1; i++)
        for (int j = 0; j < count-1; j++)
            if (s[j+1].grade > s[j].grade)   //error here
            {
                student n;
                n = s[j];         //error here
                s[j] = s[j+1];    //error here
                s[j+1] = n;       //error here
            };
}

My errors are as follows:
C:\Cpp1.cpp(54) : error C2676: binary '[' : 'class student' does not define this operator or a conversion to a type acceptable to the predefined operator
Now, where am I going wrong on the operator? I am sorting a list of students that will be read from a file.
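The compiler error points at the parameter declaration: `Sortbygrade` receives a single `student` by value, so `s[j]` has no meaning; declaring the parameter as an array (or pointer) makes the indexing legal. A minimal sketch of a version that compiles, using a simplified struct with a public `grade` member rather than the poster's actual class:

```cpp
#include <cassert>

// Simplified stand-in for the poster's class: `grade` is public here so
// the free function can read it without friend declarations.
struct student {
    int grade;
};

// `student s[]` decays to a pointer to the first element, which is what
// makes s[j] indexing legal; a single object passed by value does not.
void Sortbygrade(student s[], int count) {
    for (int i = 0; i < count - 1; i++)
        for (int j = 0; j < count - 1; j++)
            if (s[j + 1].grade > s[j].grade) {   // bubble sort, descending
                student n = s[j];
                s[j] = s[j + 1];
                s[j + 1] = n;
            }
}
```

With that signature, `Sortbygrade(list, count)` works on an array of students read from the file.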
I apologize for not keeping up my blogging as expected. I will continue my follow-up from the PDC demos shortly, but first I will answer some of the questions that were left in recent comments:
Jay asked: "... has Microsoft developed an application that the common user can use out of the box without much developing experience?"
Yes! Robert demonstrated the new Speech User Experience in Windows Vista during the first part of our PDC session. Check out the video once it becomes available and you will be pleasantly surprised (we hope)! Especially look for new things like the correction experience, mouse grid and numbering.
Sahil Malik asked: "Will the WinFX speech API be more than just a wrapper on the SASDK available as COM components already? Or will there be other enhancements - perhaps a more natural sounding Text to speech?"
First let me point out that the SASDK is for telephony speech not desktop speech (it is the SDK for the Microsoft Speech Server 2004 product). The System.Speech managed API is a complete re-design of the API which follows the .Net API design guidelines. And yes, the new voices in Windows Vista are a big improvement (IMHO) over our previous offering. The new voice, Anna, is built using a database of pre-recorded samples (rather than DSP algorithms).
Tommy asked: "Is it possible to implement a text to speech system only run at stand a lone pocket pc."
There are currently no announced plans from Microsoft to expose speech synthesis (and speech recognition) functionalities to developers on the Pocket PC platform.
- Philipp
[I apologize for the delay in following up my promise to start blogging about my PDC demo.]
At the PDC I demoed a speech-enabled RSS reader, built on top of the WPF SDK Viewer sample application. The first functionality I showed was a button that when toggled will read the contents of the blog article using Vista's new built-in voice Anna. Under the covers I used a few regular expressions to extract the content from an HTML blog entry. Once I have the text I can simply do the following 2 steps:
using System.Speech;
public static SpeechSynthesizer MySynthesizer = new SpeechSynthesizer();
MySynthesizer.Speak(new FilePrompt(<path to text file containing the article>));
The FilePrompt class is derived from the Prompt class. It is a convenient helper when the source of Speak() call is in a file on disk.
In our newsgroups (public.microsoft.speech_tech) we often get asked about how to send the output of the speech synthesizer to a WAV file instead of the default audio.
Here's the code snippet that does just that (taken from my demo):
MySynthesizer.SetOutputToWaveFile(<path to WAV file>);
MySynthesizer.Speak("Let's wreck a nice beach.");
MySynthesizer.Speak("It is now " + DateTime.Now.ToShortTimeString());
In my next blog I will show how you can use the Pause() and Resume() methods to control the speaking action. I'll also talk about some of the events raised by the SpeechSyntheizer class.
Thank you all for coming to our Friday lunch session at the PDC! The large attendance was a pleasant surprise given the timing of our session.
Rob was first up. He demonstrated the new Speech User Experience in Windows Vista. Several times during his demo he got spontaneous applause from the audience :-). He demonstrated the new user interface and the free functionality that every application gets, including dictation (if their text fields use RichEdit 2.0 controls). The fact that Windows Vista ships with our latest speech recognition engine is a big plus for developers who no longer have to worry about how their users will get a competitive recognition engine on their machines.
I was next. I did some live coding demos of speech synthesis (including synthesizing to a file) and speech recognition. The demos worked w/o a problem (although my pressing the 'End' instead of the 'Page Down' button on the slide machine caused some confusion). I will be posting the code snippets from my demos on this blog over the next couple of weeks.
Finally Steve Chang from the Speech Server team showed off some of the prototype systems that they have built. Despite some tricky setup problems (coordinating phone line access with the room's speaker system) his demos went really well. It was very impressive to see both the multi-linguality of their platform and the ability of building working systems in matters of days, not weeks or months. And things will only get better ...
The ensuing Q & A session lasted for about 20 minutes (it helped that there was no talk scheduled in our room after us). I hope we were able to answer your questions. If you have any further questions (or have questions after viewing the online recording) feel free to post it to this blog.
After a somewhat lengthy tech check last night (telephony demos always take a bit more time to setup with the A/V folks) we finally made it to the BAR (Big A.. Room) for the 'Ask the Experts' session. There were quite a few folks already waiting for us! I had a great time talking to them, finding out what they are doing with SAPI, their pain points (I hope we won't repeat the same mistakes) with our current release and what their expectation are for the future.
I'm very much looking forward to showing them our demos today at noon during our PDC session. It will be fun!
Rob and I are both in the speaker's work room at the PDC working on our demos. I can hear Rob talking to his computer - it's going to be a cool demo!
My demo is pretty much set now (snippets and all). In the remaining time I'll work on my talking point and practice my typing, since I will be doing live coding demos. For anyone that has ever done any speech demo can appreciate the 'thrill'!
I just turned on annonymous comments for my blog. Sorry for any inconvenience.
I am very excited to have the opportunity to present our work over the last 2 years at the PDC with my co-worker Robert Brown. Unfortunately, our session is at noon on Friday, at a time when a lot of attendees will likely be on their way back to their loved ones. But I hope there will be at least some of you interested enough to stop by our session to see how far speech reocognition and speech synthesis has come and how easy it is to use it from managed code:
Session Details
PRSL03 - Ten Amazing Ways to Speech-Enable Your Application
September 16, 12:00 PM - 12:45 PM
502 AB
Philipp Schmid, Robert Brown
Microsoft is supercharging Windows and Windows Server System with state-of-the-art speech technology. WinFX has a powerful API for enabling your users to speak to your apps and your apps to speak to your users. A future version of Speech Server will use the same API for extending the reach of your .NET applications to the telephone. The technology just works and the code's easy to write. During this lunch session, see some great examples, like using speech to access information in Windows Vista ("Longhorn"), speech enabling your rich Windows Presentation Foundation ("Avalon") application, and how to extend your application to the telephone for ubiquitous access.
Hi, my name is Philipp Schmid and I am the development lead for the speech APIs at Microsoft. My team delivers both the COM-based sapi.dll (which ships as part of the OS since Windows XP) and the new managed API, System.Speech (part of WinFX). I was one of the original developers on SAPI 5.0 (which shipped in Windows XP and Office XP). I then became my own customer as I used sapi.dll to speech-enable the shell in Windows for Tablet PC V 1.0. After that I returned to the speech group to become the development lead for the managed API effort.
Before joining MSFT in 1999, I got my Ph.D. in Computer Science from the Oregon Graduate Institute (now part of the Oregon Health Sciences University) and then spent 2 wonderful years as a Postdoctoral Associate at the Spoken Language Systems Group at MIT's Laboratory for Computer Science.
But enough about me. This blog is about all things speech recognition and speech synthesis. On the side I am very interested in enterprise technologies and will comment on those from time to time.
One more thing: 2 more days until I leave for the PDC ... More about that in my next blog.
Guide to Time and Date in Java (Part 3)
The party will start at approximately:
//20:00
LocalTime.of(20, 0, 0)
...irrespective of the time zone. I can even say that my birthday party this year will be precisely at:
//2016-12-25T20:00
LocalDateTime party = LocalDateTime.of(
        LocalDate.of(2016, Month.DECEMBER, 25),
        LocalTime.of(20, 0, 0)
);
But that holds only as long as I don't attach a date to a specific time zone. There are also corner cases we haven't covered: for example, leap years, which can become a serious source of bugs. I find property-based testing extremely useful when testing dates:
import spock.lang.Specification
import spock.lang.Unroll

import java.time.*

class PlusMinusMonthSpec extends Specification {

    static final LocalDate START_DATE = LocalDate.of(2016, Month.JANUARY, 1)

    @Unroll
    def '#date +/- 1 month gives back the same date'() {
        expect:
            date == date.plusMonths(1).minusMonths(1)
        where:
            date << (0..365).collect { day -> START_DATE.plusDays(day) }
    }
}
This test makes sure adding and subtracting one month to any date in 2016 gives back the same date. Pretty straightforward, right? This test fails for a number of days:
date == date.plusMonths(1).minusMonths(1)
|    |  |    |             |
|    |  |    2016-02-29    2016-01-29
|    |  2016-01-30
|    false
2016-01-30

date == date.plusMonths(1).minusMonths(1)
|    |  |    |             |
|    |  |    2016-02-29    2016-01-29
|    |  2016-01-31
|    false
2016-01-31

date == date.plusMonths(1).minusMonths(1)
|    |  |    |             |
|    |  |    2016-04-30    2016-03-30
|    |  2016-03-31
|    false
2016-03-31

...
Leap years cause all sorts of issues and break the laws of math. Another similar example: adding two months to a date is not always equal to adding one month twice.
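The same pitfalls are easy to reproduce in plain Java; the dates below are chosen for illustration:

```java
import java.time.LocalDate;
import java.time.Month;

public class LeapYearPitfall {
    public static void main(String[] args) {
        LocalDate d = LocalDate.of(2016, Month.JANUARY, 31);

        // plusMonths clamps to the last valid day of the target month:
        // 2016-01-31 + 1 month = 2016-02-29 (2016 is a leap year), and
        // subtracting a month then starts from the clamped date.
        System.out.println(d.plusMonths(1).minusMonths(1)); // 2016-01-29

        // adding two months at once differs from adding one month twice
        System.out.println(d.plusMonths(2));                // 2016-03-31
        System.out.println(d.plusMonths(1).plusMonths(1));  // 2016-03-29
    }
}
```

This is exactly the clamping behavior the failing Spock test above surfaces for month-end dates.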
get faces in coincident order from shell
Submitted by jelle on 19 July, 2011 - 23:36
Forums:
when traversing a shell, in many cases the underlying faces are not returned in coincident order.
which surprises me, since such an order should exist when the faces are sewed together.
am I expecting too much, or is there a call to traverse the shell with coincident / neighboring faces?
thanks,
-jelle
Hi Jelle,
The ordering sequence of faces in a shell is not enforced. I don't remember internals of the sewing algorithm but may presume that it may even keep the original order of the faces (in a compound, for instance). A shell coming from a STEP file will likely retain its order which is arbitrary.
Nonetheless, you can still reorder them yourself using edge connectivity information and using map containers.
Here is some pseudo-code:
put all faces into a source_map;
create empty target_list;
create empty map of used edges (edges of faces in target_list);
take first face, explode into edges, put edges into edge_map and face into target_list, remove from source_map;
until source_map is empty do:
- take next face,
- start exploding into edges, if at least one edge belongs to edge_map pick up this face, put its edges into edge_map, append face to target_list, remove from source_map;
- if no edges belong to edge_map, skip to next face
repeat.
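This recipe can be sketched in plain Python with toy data (the face and edge identifiers below are stand-ins for TopoDS shapes and exploded edges, not OpenCASCADE calls):

```python
def order_faces(faces_to_edges):
    """Reorder faces so each face shares at least one edge with a face
    already placed, following the source-map / target-list recipe."""
    source = dict(faces_to_edges)           # source_map
    first = next(iter(source))
    target = [first]                        # target_list
    used_edges = set(source.pop(first))     # edges of faces in target_list
    while source:
        for face, edges in source.items():
            if used_edges & set(edges):     # at least one shared edge
                target.append(face)
                used_edges |= set(edges)
                del source[face]
                break
        else:
            # no remaining face touches the placed ones
            raise ValueError("shell is not edge-connected")
    return target

print(order_faces({'A': {1, 2}, 'B': {3, 4}, 'C': {2, 3}}))  # ['A', 'C', 'B']
```

In the real implementation the edge-sharing test would use TopoDS edge identity (e.g. IsPartner) instead of comparing plain identifiers.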
Hope this helps.
Roman
hi Roman!
Many thanks for taking the time to explain.
I implemented an ordering algorithm much like your description ( with the TopoExplorer its not very hard ), that works perfectly ;)
I also take into account the parametric orientation of the faces (since the u, v directions do not necessarily match).
However, my paranoia was that something like TopOpeBRepTool might already implement something alike.
Thanks!
-jelle
Hi jelle
I am a novice in opencascade, i too confronted with your problem of ordering the faces, can us share your ordering algorithm with me.
Thanks
- anand
Hi jelle,
I'm also very much interested in your implementation of the ordering algorithm. Thanks!
sure...
def sort_coincident_faces(self):
    """
    """
    # make sure the shell can be oriented
    # otherwise we're at a loss before starting...
    from OCC.BRepCheck import BRepCheck_Shell
    orient = BRepCheck_Shell(self.topo).Orientation()
    if not orient == 0:
        raise AssertionError('orientation of the shell is messed up...')
    tp_shell = Topo(self.topo)
    faces = [fc for fc in self.Faces()]
    # hash face to its edges
    edges_from_faces = dict([(fc, [edg for edg in fc.Edges()]) for fc in self.Faces()])
    source_faces = list(edges_from_faces.keys())
    sorted_faces = [source_faces[0]]
    # add the edges of the face contained in sorted_faces
    identified_edges = list(edges_from_faces[sorted_faces[0]])
    while source_faces:
        connections = defaultdict(int)
        for k, v in edges_from_faces.iteritems():
            print 'k', k
            for edg in v:
                for i in identified_edges:
                    if edg.topo.IsPartner(i.topo):
                        print 'connect///'
                        connections[k] += 1
        # there can be several faces that are connected to the faces in `sorted_faces`
        # the face of interest is the one with the most coincident edges
        k = sorted(connections.iteritems(), key=operator.itemgetter(1), reverse=True)[0][0]  # get the face with most connections
        if not k in sorted_faces:
            sorted_faces.append(k)
            identified_edges.extend(edges_from_faces[k])
        for i in sorted_faces:
            if i in edges_from_faces:
                del edges_from_faces[i]
            if i in source_faces:
                del source_faces[source_faces.index(i)]
    return sorted_faces
So, I follow Roman's suggestion, with 1 slight chance.
I sort the faces that share an edge to their degree; the number of coincident edges a faces has.
For instance, if you already have 2 faces of a cube, the 3rd face should be connected to _either_ of the other faces, right?
connections[k]+=1 # +1 coincidence for this face K
That's all there is to it...
The screendump explains it somewhat more intuitively...
-jelle
Messed up indentation, please see:
In case you do not have Varnish installed yet, follow the instructions in the section on how to install Varnish before we continue.

Here, switch the caching application to Varnish.

This is what the Magento 2 admin page should look like:

Image courtesy: Magento Site

Place the file in a Varnish configuration folder (any place that is safe for you).

sudo systemctl restart varnish.service
sudo systemctl restart apache2.service
Not all URLs should be cached. Especially not in sites that deal with personal information such as credit cards or financial details.
if (req.url ~ "magento_admin|magento_login") { return (pass); }.
Varnish creates a TTL value for every object in the cache. The most effective way of increasing a website's hit ratio is to increase the time-to-live (TTL) of the objects.
https://info.varnish-software.com/blog/topic/using-varnish-to-speed-up-magento
Ok, Junio had some cool octopus merges, but I just one-upped him.

I just merged the "gitk" repository into git, and I did it as a real git merge, which means that I actually retained all the original gitk repository information intact. IOW, it's not a "import the data" thing, it's literally a merge of the two trees, and the result has two roots.

Now, the advantage of this kind of merge is that Paul's original gitk repository is totally unaffected by it, yet because I now have his history (and the exact same objects), the normal kind of git merge should work fine for me to continue to import Paul's work - we have the common parent needed to resolve all differences.

Now, I don't know how often this ends up being actually used in practice, but at least in theory this is a totally generic thing, where you create a "union" of two git trees. I did the union merge manually, but in theory it should be easy to automate, with simply something like

    git fetch <project-to-union-merge>
    GIT_INDEX_FILE=.git/tmp-index git-read-tree FETCH_HEAD
    GIT_INDEX_FILE=.git/tmp-index git-checkout-cache -a -u
    git-update-cache --add -- $(GIT_INDEX_FILE=.git/tmp-index git-ls-files)
    cp .git/FETCH_HEAD .git/MERGE_HEAD
    git commit

(this is not exactly how I did it, but that's just because I'd never done this before so I didn't think it through and I had some stupid extra steps in between that were unnecessary).

Of course, in order for the union merge to work, the namespaces have to be fit on top of each other with no clashes, otherwise future merges will be quite painful. In the case of gitk, Paul's repository only tracked that single file, so that wasn't a problem.

Anyway, you shouldn't notice anything new, except for the fact that "gitk" now gets automatically included with the base git distribution. And the git repository has two roots, but hey, git itself doesn't care.
Linus

Received on Thu Jun 23 07:53:46 2005
http://www.gelato.unsw.edu.au/archives/git/0506/5511.html
If you’re a PHP dev and you haven’t heard of Composer yet, IMO you’re doing it wrong!
Composer is a great tool. I finally replaced a load of my hacked-together libraries with proper Symfony and Zend ones.
I was going to implement activitypingback.org this afternoon but got carried away adding docblocks to my code.
Wow, never thought I'd write “docblocks” and “carried away” in the same sentence :/
Now if only phpDocumentor could run my tests and display testing information as well. I would live in localhost/docs!
Oh man, PHPDocumentor produces awesome results
Reduced test cases are my friends tonight #microformats
HA! value-class pattern, I have beaten you.
For at least one test case. But still.
@sandeepshetty yep. At the moment it’s a couple of functions, by tomorrow it’ll be a PHP5 namespaced composer package.
Built and released today: ~ THE TRUNCENATOR ~ #indieweb
Use for HTTP and OAuth 1 in PHP. You’ll never need another HTTP library.
https://waterpigs.co.uk/notes/?tagged=php&before=2012-10-13T21%3A36%3A19%2B00%3A00
GitHub Readme.md
A Telegram Userbot based on Pyrogram
This repository contains the source code of a Telegram Userbot and the instructions for running a copy yourself. Besides its main purpose, the bot features Pyrogram Asyncio and Smart Plugins; feel free to explore the source code to learn more about these topics.
I assume you will read this whole README.md file before continuing.
Development in progress.
You're gonna need to get the following programs and services either installed on your server or signed up for. You must do all. It is a cardinal sin if you don't.
virtualenv installed so that the packages don't interfere with other system packages.
MongoDB on your server or a free server from MongoDB Atlas. (I recommend Atlas as I used it during development with no issues.)
carbon-now-cli on your server to generate code images for the carbon.py module. I use this CLI tool because I couldn't get selenium and chromedriver to work nicely on my server/code. I'll be nice and even give you the command to install this. I assume you already have NPM installed.
Windows: npm install -g carbon-now-cli
Linux: sudo npm install -g carbon-now-cli --unsafe-perm=true --allow-root
MacOS: I assume almost the same as Linux ¯\_(ツ)_/¯
One Click Deploy

Quick deploy on Heroku using the button down below:
The way I deploy
git clone
cd userbot
virtualenv venv
source venv/bin/activate
pip install -r requirements.txt
python -m userbot
To add extra modules to the bot, simply add the code into userbot/plugins. Each file that is added to the plugins directory should have the following code at a minimum.
from pyrogram import Message, Filters
from userbot import UserBot

@UserBot.on_message(Filters.command('sample', ['.']))
async def module_name(bot: UserBot, message: Message):
    await message.edit(
        "This is a sample module"
    )
This example is only for Pyrogram on_message events.
Dan for his Pyrogram Library
Colin Shark for his PyroBot which helped with most of the useful functions used.
The people at MyPaperPlane for their Telegram-UserBot that gave a ton of ideas on how and what modules to include in this userbot.
Baivaru for the ton of help that got me this far into making this repo.
Made with love from the Maldives ❤
https://elements.heroku.com/buttons/athphane/userbot
Creating a Custom Theme Project
The second option to create a custom theme is to create your own custom theme project, which doesn't follow the approach of the Telerik built-in themes. Basically, you need to create XAML files for the controls that you want to style and then combine them in one file (like Generic.xaml) using ResourceDictionary.MergedDictionaries. Then create a new class which derives from Telerik.Windows.Controls.Theme, and in its constructor set the source to point to the Generic.xaml file (that merges all your XAML files from the theme project).
The purpose of this topic is to show you how to do that.
Open Visual Studio and create a new Silverlight Application. Name the project CustomThemeDemo.
Add a new Silverlight class library project to your solution, named MyTheme.
In the MyTheme project add references to the Telerik assemblies containing the controls you want to style. For example, if you have Style for RadMenu, you should have a reference to the Telerik.Windows.Controls.Navigation.dll assembly. In this demo the RadSlider control will be styled for simplicity. That's why you need to add a reference only to the Telerik.Windows.Controls.dll assembly.
In the MyTheme project, add a new folder named Themes. All XAML files, describing the styles for the target controls, should be placed in the Themes folder. In the following example, the original Vista theme of the RadSlider control is copied from the UI for Silverlight installation folder (~UI for Silverlight Installation Folder\Themes\Vista\Themes\Vista\Slider.xaml) and pasted in the Themes folder of the MyTheme project.
Add a new ResourceDictionary to the Themes folder. Name it Generic.xaml. Use Generic.xaml as a Resource Dictionary which contains Merged Resource Dictionaries only.
<ResourceDictionary xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation"
                    xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml">
    <ResourceDictionary.MergedDictionaries>
        <ResourceDictionary Source="/MyTheme;component/Themes/Slider.xaml" />
    </ResourceDictionary.MergedDictionaries>
</ResourceDictionary>
Just for the demonstration, open the Slider.xaml file and modify the SliderBackgroundTrack brush to Red.
<SolidColorBrush x:Key="SliderBackgroundTrack" Color="Red" />
Add a new class named MyTheme to the MyTheme project. That class should derive from the Telerik.Windows.Controls.Theme base class. In the constructor you should set the source to point to the Generic.xaml file (that merges all your XAML files from the theme project).
using System;

namespace MyTheme
{
    public class MyTheme : Telerik.Windows.Controls.Theme
    {
        public MyTheme()
        {
            this.Source = new Uri("/MyTheme;component/Themes/Generic.xaml", UriKind.Relative);
        }
    }
}
Imports System

Namespace MyTheme
    Public Class MyTheme
        Inherits Telerik.Windows.Controls.Theme

        Public Sub New()
            Me.Source = New Uri("/MyTheme;component/Themes/Generic.xaml", UriKind.Relative)
        End Sub
    End Class
End Namespace
Finally, the MyTheme project should have the following structure.
The next step is to apply the custom theme. In the Silverlight client project, add a reference to the project containing your custom theme (in this case, the MyTheme project).
Open the App.xaml.cs file and add the following code in the constructor. This will be enough to apply the theme globally to all Telerik Silverlight controls.
Telerik.Windows.Controls.StyleManager.ApplicationTheme = new MyTheme.MyTheme();
Telerik.Windows.Controls.StyleManager.ApplicationTheme = New MyTheme.MyTheme()
If you want to apply the theme only for a specific control, then you should stick to the following approach:
<UserControl.Resources> <myThemeProject:MyTheme x: </UserControl.Resources> <Grid x: <telerik:RadSlider x: </Grid>
https://docs.telerik.com/devtools/silverlight/styling-and-appearance/stylemanager/creating-a-custom-theme/common-styling-apperance-themes-custom-theme-project
In this C++ tutorial, you will learn about friend functions: the need for friend functions, how to define and use them, and a few important points regarding friend functions, explained with examples.
The Need for Friend Function:
As discussed in the earlier sections on access specifiers, when a data is declared as private inside a class, then it is not accessible from outside the class. A function that is not a member or an external class will not be able to access the private data. A programmer may have a situation where he or she would need to access private data from non-member functions and external classes. For handling such cases, the concept of Friend functions is a useful tool.
What is a Friend Function?
A friend function is used for accessing the non-public members of a class. A class can allow non-member functions and other classes to access its own private data by making them friends. Thus, a friend function may be an ordinary function or a member of another class.
How to define and use Friend Function in C++:
The friend function is written like any other normal function, except that its declaration is preceded with the keyword friend. The friend function must have the class to which it is a friend passed to it as an argument.
Some important points to note while using friend functions in C++:
- The keyword friend is placed only in the function declaration of the friend function and not in the function definition.
- It is possible to declare a function as friend in any number of classes.
- When a class is declared as a friend, the friend class has access to the private data of the class that made this a friend.
- A friend function, even though it is not a member function, would have the rights to access the private members of the class.
- The friend declaration may appear in either the private or the public section of the class.
- The function can be invoked without the use of an object. The friend function takes objects as arguments, as seen in the example below.
Example to understand the friend function:
#include <iostream>
using namespace std;

class exforsys
{
    private:
        int a, b;
    public:
        void test()
        {
            a = 100;
            b = 200;
        }
        // Friend function declaration with the keyword friend; an object of
        // class exforsys (to which it is a friend) is passed as argument
        friend int compute(exforsys e1);
};

int compute(exforsys e1)
{
    // Friend function definition, which has access to the private data
    return int(e1.a + e1.b) - 5;
}

int main()
{
    exforsys e;
    e.test();
    cout << "The result is:" << compute(e);
    // Calling the friend function with an object as argument
    return 0;
}
The output of the above program is:

The result is:295
The function compute() is a non-member function of the class exforsys. In order to give this function access to the private data a and b of class exforsys, it is created as a friend function for the class exforsys. As a first step, the function compute() is declared as a friend in the class exforsys as:

friend int compute(exforsys e1);
The keyword friend is placed before the function. The function definition is written as a normal function, and thus the function has access to the private data a and b of the class exforsys. Because it is declared as a friend inside the class, the private data values a and b can be added and 5 subtracted from the result, giving 295. This is returned by the function, and the output is displayed as shown above.
http://www.exforsys.com/tutorials/c-plus-plus/c-plus-plus-friend-functions.html
:well, it took me less than 30 minutes...
:but it was really simple and is most likely not correct.
:
:but maybe it is, and this way, we could concentrate on something different.
:btw: if this looks right, i may tackle the other _r's tomorrow.
:
:~ibotty

That's not going to quite work, but it was a good try! The issue is that the supplied buffer must be used to hold auxiliary data... the pointers to pw_name, pw_passwd, and so forth in the struct passwd. This means that __hashpw() in the same file must be adjusted. It uses a static u_int max and static char *line at the moment... instead the buffer and buffer size must be passed to hashpw() and it must properly return an error if the buffer is not large enough.

getpwnam() must allocate a sufficiently sized buffer and deal with ERANGE. ERANGE must be returned if the buffer is not sufficient. This is how FreeBSD-current deals with it. I'm not saying copy this, because it's rather messy code, but just to demonstrate the problem...

    if (pwd_storage == NULL) {
            pwd_storage = malloc(PWD_STORAGE_INITIAL);
            if (pwd_storage == NULL)
                    return (NULL);
            pwd_storage_size = PWD_STORAGE_INITIAL;
    }
    do {
            rv = fn(key, &pwd, pwd_storage, pwd_storage_size, &res);
            if (res == NULL && rv == ERANGE) {
                    free(pwd_storage);
                    if ((pwd_storage_size << 1) > PWD_STORAGE_MAX) {
                            pwd_storage = NULL;
                            return (NULL);
                    }
                    pwd_storage_size <<= 1;
                    pwd_storage = malloc(pwd_storage_size);
                    if (pwd_storage == NULL)
                            return (NULL);
            }
    } while (res == NULL && rv == ERANGE);

Would you like to try again?

-Matt
Matthew Dillon <dillon@xxxxxxxxxxxxx>
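The allocate-and-retry loop Matt describes is a general pattern, independent of C. A minimal sketch (in Python, with a made-up BufferTooSmall exception standing in for ERANGE):

```python
class BufferTooSmall(Exception):
    """Stand-in for the C ERANGE error code."""

def lookup_with_growing_buffer(lookup, initial=64, maximum=1 << 16):
    """Call lookup(bufsize), doubling the buffer size whenever it reports
    'too small' -- the way getpwnam() wraps getpwnam_r() and retries on
    ERANGE until the result fits or a cap is reached."""
    size = initial
    while True:
        try:
            return lookup(size)
        except BufferTooSmall:
            size <<= 1
            if size > maximum:  # give up past the cap, as the C code does
                return None

# a toy lookup that needs at least 300 bytes of scratch space
def toy_lookup(bufsize):
    if bufsize < 300:
        raise BufferTooSmall()
    return "pw_name=root"

print(lookup_with_growing_buffer(toy_lookup))  # → pw_name=root
```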
http://leaf.dragonflybsd.org/mailarchive/kernel/2003-12/msg00223.html
jmp
I am trying to get a number from a string using this code:
#include <IE.au3>

$oIE = _IEAttach("Edu.corner")
Local $aName = "Student name & Code:", $iaName = "0"
Local $oTds = _IETagNameGetCollection($oIE, "td")
For $oTd In $oTds
    If $oTd.InnerText = $aName Then
        $iaName = $oTd.NextElementSibling.InnerText
        $iGet = StringRegExpReplace($iaName, "\D", "")
    EndIf
Next
MsgBox(0, "", $iGet)

This gets a number like 52503058.
But I want to get only the student code, 5250. (Different students have different codes; sometimes 3 digits, sometimes 4.)
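One fix is to capture only the first run of digits instead of stripping every non-digit: as long as the code and the trailing number are separated by any non-digit character, the first group is the code, whatever its length. AutoIt's StringRegExp uses the same PCRE-style syntax; sketched here in Python with made-up sample strings:

```python
import re

samples = ["John Doe (5250) 3058", "Jane Roe (525) 117"]
for text in samples:
    code = re.search(r"(\d+)", text).group(1)  # first run of digits only
    print(code)
# → 5250
# → 525
```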
- By BlueBandana
Is there a way to output the regex matches into a file?
I have a script to compare two files and check for regex matches.
I want to output the matching regex of 'testexample.txt' to another file.
#include <MsgBoxConstants.au3>
#include <Array.au3>

$Read = FileReadToArray("C:\Users\admin\Documents\testexample.txt")
$Dictionary = FileReadToArray("C:\Users\admin\Documents\example.txt")
For $p = 0 To UBound($Dictionary) - 1 Step 1
    $pattern = $Dictionary[$p]
    For $i = 0 To UBound($Read) - 1 Step 1
        $regex = $Read[$i]
        If StringRegExp($regex, $pattern, 0) Then
            MsgBox(0, "ResultsPass", "The string is in the file, highlighted strings: ")
        Else
            MsgBox(0, "ResultsFail", "The string isn't in the file.")
        EndIf
    Next
Next
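To write hits to a file instead of popping a MsgBox for every comparison, collect the matching lines and write them out. A sketch in Python (file names made up); in AutoIt the equivalent is FileOpen plus FileWriteLine per match:

```python
import re

def write_matches(patterns, lines, out_path):
    """Write every line that matches at least one pattern to out_path."""
    with open(out_path, "w") as out:
        for line in lines:
            if any(re.search(p, line) for p in patterns):
                out.write(line + "\n")

# lines matching "cat" as a word or any 4-digit run are kept
write_matches([r"\bcat\b", r"\d{4}"],
              ["the cat sat", "no match here", "code 1234"],
              "matches.txt")
print(open("matches.txt").read())
# → the cat sat
# → code 1234
```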
https://www.autoitscript.com/forum/topic/188042-help-path-split-with-stringregexp/?tab=comments#comment-1350422
AWS Messaging & Targeting Blog

Sales teams have similar challenges to ensure clients do not miss meetings. To these businesses, a missed appointment represents lost revenue. As a result, the no-show rate is a key metric to improve. An outbound voice message provides another way to reach customers versus emails or SMS, and voice reminders give customers the choice of channels based on personal preferences.
Overview
Amazon Pinpoint is a multichannel communications service enabling customers to send both promotional and transactional messages across email, SMS, push notifications, voice, and custom channels. Amazon Connect is an easy to use omnichannel cloud contact center that helps companies provide superior customer service at a lower cost.
There are benefits of using these services together. Amazon Pinpoint allows you to build a segment of users which can be used within a campaign. Amazon Connect can enable customers to send outbound voice messages at scale should your user audience be large and require a high number of transactions per second (TPS).
To use these services together, you set up custom channels in Amazon Pinpoint, which can be created via an AWS Lambda function. These functions enable you to call APIs to trigger message sends as part of Amazon Pinpoint campaigns. Amazon Pinpoint has developed a new AWS Lambda function which can be used to send outbound voice messages via Amazon Connect. This configuration allows you to define the voice message to be sent, define the segment of users you would like to target, and send voice messages at scale through Amazon Connect via the Amazon Pinpoint custom channel.
The audience for this solution are technical customers who are used to working with multiple AWS services and are familiar with AWS Lambda functions. The solution built relies on the Amazon Pinpoint custom channel feature and targeting, along with the Amazon Connect outbound voice API called via a prepared AWS Lambda function. Once completed, you will be able to create an evergreen campaign which will send outbound voice messages to your patients who have an appointment the following day.
The costs associated with this solution will be:
- Amazon Connect outbound voice calls per minute
- Amazon Connect claimed phone number(s)
- Amazon Pinpoint Monthly Targeted Audience (MTA) costs.
The costs for an outbound voice reminder system that sends 10k messages per day, with an average length of 20 seconds per call, to a total monthly audience of 300k, in the US are as follows. Note that prices will vary for other countries. Complete Amazon Connect outbound call pricing can be found here.
Solution
Prerequisites:
For this walkthrough the article assumes:
- An AWS account
- Basic understanding of IAM and privileges required to create the following; IAM identity provider, roles, policies, and users
- Basic understanding of Amazon Pinpoint and how to create a project
- Basic understanding of Amazon Connect and experience in creating contact flows. More information on setup of Amazon Connect can be found here.
Step 1: Create an Appointment Reminder custom event
The first step in setting up this solution is to create and report a custom event to Amazon Pinpoint. There are multiple ways to report events in your application. For demonstration purposes, below is an example event call using the AWS SDK for Python (Boto3) from inside an AWS Lambda function.

It is important to note that the Amazon Pinpoint events API can also be used to update endpoints when the event gets registered. In the example below, the API call updates the endpoint attributes AppointmentDate and AppointmentTime with the details of the upcoming appointment. These attributes will be used in the outgoing message to the end-user.
Sample Event: Appointment Coming Up
import datetime
import time

import boto3

client = boto3.client('pinpoint')
app_id = '[PINPOINT_PROJECT_ID]'
endpoint_id = '[ENDPOINT_ID]'
address = '[PHONE_NUMBER]'

def lambda_handler(event, context):
    client.put_events(
        ApplicationId=app_id,
        EventsRequest={
            'BatchItem': {
                endpoint_id: {
                    'Endpoint': {
                        'ChannelType': 'CUSTOM',
                        'Address': address,
                        'Attributes': {
                            'AppointmentDate': ['December 15th, 2020'],
                            'AppointmentTime': ['2:15pm']
                        }
                    },
                    'Events': {
                        'appointment-event': {
                            'Attributes': {},
                            'EventType': 'AppointmentReminder',
                            'Timestamp': datetime.datetime.fromtimestamp(time.time()).isoformat()
                        }
                    }
                }
            }
        }
    )
NOTE: The following steps assume that the AppointmentReminder event is being reported to Amazon Pinpoint. If you are unable to integrate the above API call into your application, you can manually create an AWS Lambda function using a Python runtime with the above code to trigger sample events.
Step 2: Create an Amazon Connect contact flow for outbound calls
This article assumes that you have an Amazon Connect contact center already set up and working. In this step, we will set up our Amazon Connect contact flow to dial our recipients and read the message before hanging up.
- Log in to your Amazon Connect instance using your access URL (https://<alias>.awsapps.com/connect/login).
Note: Replace alias with your instance’s alias.
- In the left navigation bar, pause on Routing, and then choose Contact flows.
- Under Contact flows, choose a template, or choose Create contact flow to design a contact flow from scratch. For more information, see Create a New Contact Flow.
- Download the sample JSON contact flow configuration file Outbound_calling.json.
- Choose the dropdown menu under Save and choose Import flow (beta).
- Select the Outbound_calling.json file in the Import flow (beta) dialog and choose Save.
- Choose Save to open the Save flow dialog. Then choose Save to close the dialog.
- Choose Publish to open the Publish dialog. Then choose Publish to close the dialog.
- In the contact flow designer, expand Show additional flow information.
- Under ARN, copy the Amazon Resource Name (ARN) of the contact flow. It looks like the following:
arn:aws:connect:region:123456789012:instance/[ConnectInstanceId]/contact-flow/[ConnectContactFlowId]
Note the ConnectInstanceId and ConnectContactFlowId from the ARN; they will be used in the next step.
- In the left navigation bar, pause on Routing and then choose Queues.
- Choose the queue you wish to use for the outbound calls.
- In the Edit queue screen, expand Show additional queue information.
- Under ARN, copy the Amazon Resource Name (ARN) for the queue. It looks like the following:
arn:aws:connect:region:123456789012:instance/[ConnectInstanceId]/queue/[ConnectQueueId]
Note the ConnectQueueId from the ARN. It will be used in the next step.
Step 3: Deploy and modify the Amazon Pinpoint to the Amazon Connect custom channel with AWS Lambda function
Next, we will need to deploy an Amazon Pinpoint custom channel. Custom channels in Amazon Pinpoint allow you to send messages through any service with an API, including Amazon Connect. The AWS Serverless Application Repository contains an open-sourced AWS Lambda function that we will use for our custom channel. After deploying the AWS Lambda function, we will customize it to match our requirements.
- Navigate to the AWS Lambda Console, then choose Create function.
- Under Create function, choose Browse serverless app repository.
- Under Public applications, choose the checkbox next to Show apps that create custom IAM roles or resource policies and enter amazon-pinpoint-connect-channel in the search box.
- Choose the amazon-pinpoint-connect-channel card from the list and review the Application details.
- Under Application settings enter the details for ConnectContactFlowId, ConnectInstanceId, and ConnectQueueId from the previous step.
- After reviewing all the details, choose the checkbox next to I acknowledge that this app creates custom IAM roles and resource policies and choose Deploy.
- Wait a couple minutes for the application to deploy two AWS Lambda functions and an AWS Simple Queue Service queue.
- Under Resources, choose the PinpointConnectQueuerFunction resource to open the AWS Lambda function configuration. This is the AWS Lambda function that Amazon Pinpoint will call when the message is crafted.
- Under Function code, scroll down to line 31 and replace
message = "Hello World! -Pinpoint Connect Channel"
with
message = "This is a reminder of your upcoming appointment on {0} at {1}".format(endpoint_profile["Attributes"]["AppointmentDate"][0], endpoint_profile["Attributes"]["AppointmentTime"][0])
- Choose Deploy.
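The customised line is plain Python string formatting, so it can be sanity-checked in isolation. A sketch using the endpoint attribute structure registered in Step 1 (the helper name here is made up, not part of the deployed function):

```python
def format_reminder(endpoint_profile):
    """Build the reminder text from a Pinpoint endpoint's custom attributes,
    mirroring the message line customised in the deployed Lambda function."""
    attrs = endpoint_profile["Attributes"]
    return "This is a reminder of your upcoming appointment on {0} at {1}".format(
        attrs["AppointmentDate"][0], attrs["AppointmentTime"][0])

profile = {"Attributes": {"AppointmentDate": ["December 15th, 2020"],
                          "AppointmentTime": ["2:15pm"]}}
print(format_reminder(profile))
# → This is a reminder of your upcoming appointment on December 15th, 2020 at 2:15pm
```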
Step 4: (Optional) Modify the custom channel AWS Lambda function to change the rate of outgoing calls

By default, the custom channel we deployed in the previous step will place outbound calls through Amazon Connect at a rate of 1 call every 3 seconds. This lets you control the number of concurrent outbound calls and avoid running into service limits. Review your current service limits in Amazon Connect for more details.
- Navigate to the AWS Lambda Console, then choose AmazonPinpointConnectChannel-backgroundprocessor function.
- Under Function code, scroll down to line 73 and replace the sleep timer, currently set to 3 seconds, with a value that meets your requirements.
- Choose Deploy.
Step 5: Create a Pinpoint custom campaign with your lambda function and segment
- Create a CSV file to import endpoints with the attributes of AppointmentDate and AppointmentTime.
Example:
Id,Address,ChannelType,Attributes.AppointmentDate,Attributes.AppointmentTime
1,+1[PHONE_NUMBER],SMS,November 30 2020,9:00am
2,+1[PHONE_NUMBER2],SMS,November 30 2020,10:00am
- Navigate to the Amazon Pinpoint console.
- In the All Projects list, select your project.
- In the navigation pane, choose Segments.
- Choose Create a Segment.
- Choose Import a segment and upload your CSV file and choose Create segment.
- In the navigation pane, choose Campaigns.
- Choose Create campaign.
- In the Create a campaign wizard, enter a name for campaign name.
- Under Channel choose Custom.
- Choose Next.
- On the Choose a segment screen, choose the segment created above, and choose Next.
- On the Create your message screen, do the following:
a) For Lambda function choose AmazonPinpointConnectChannel that we deployed in Step 3 above.
b) For endpoint Options choose SMS.
c) Choose Next.
- On the Choose when to send the campaign screen, do the following:
a) Choose When an event occurs.
b) Under Events, choose the AppointmentReminder event.
c) Under campaign dates, choose a Start date and time and an End date and time to be used as the campaign’s duration.
- Choose Next.
- Review the campaign details and choose Launch campaign.
Cleanup:
To remove the two AWS Lambda functions and the Amazon Simple Queue Service queue provisioned in the steps above in order not to incur further charges, please follow these steps below.
- Navigate to the Amazon CloudFormation Console.
- Choose serverlessrepo-amazon-pinpoint-connect-channel and choose Delete.
- Choose Delete stack in the delete confirmation window.
Next Steps:
You can continue to iterate on this experience using Amazon Pinpoint and Amazon Connect to create a custom user experience.
- Update the Amazon Connect contact flow to get customer input with a message that allows them to select an option to be routed to an agent for appointment rescheduling.
- Become familiar with Pinpoint Journeys and how to use voice alongside other channels. More information can be found here.
- Consume the Amazon Pinpoint Event Stream by configuring it manually or deploying the AWS Solution Digital User Engagement Events Database Solution to view the success and failure metrics of the Amazon Connect outbound call API.
To learn more about these services, please visit the Amazon Pinpoint or Amazon Connect web pages.
https://aws.amazon.com/blogs/messaging-and-targeting/send-voice-appointment-reminders-using-amazon-pinpoint-custom-channels-and-amazon-connect/
For a full overview of PhoneGap, I suggest reading this article by Kyle Miller. One of the things I like about PhoneGap that Kyle does not mention is that PhoneGap has the Apache licence; that means there are no roadblocking requirements to developing commercial software, and there are no fees attached unlike some of PhoneGap's competitors.
As an aside, PhoneGap is becoming an Apache project and will be called Apache Callback.
To get started with PhoneGap we must first have an environment compatible with PhoneGap, as it does not provide a standard environment for development. For iOS apps you must have XCode, Eclipse is recommended for Android, and for BlackBerry you will need Ant and WebWorks.
Since we are focusing on Android, we will need to install Eclipse and the Android SDK.
PhoneGap recommends Eclipse Classic and should be easy enough to get your hands on and install.
Next, download and unpack or install the Android SDK and the ADT plug-in.
Once the Android SDK is unpacked, run the Android SDK manager (the tools/android script in my case) and select the SDKs to download and install. You'll want to install the new 4.0 (API 14) release; possibly API 13 and/or 12, which are Honeycomb builds, so when simulated they will have a large widescreen appearance; and API 10, which is Android 2.3.3, the version of Android on many phones currently, and probably the version of Android that you recognise.
I'd suggest leaving the downloading and installing of Android overnight, it does take an awful long time.
Now I ran into a bit of an issue installing the ADT. Eclipse complained about the lack of org.eclipse.wst.sse.core and refused to install the plug-in, this was a result of my version of Eclipse (indigo) not being in the available software sites.
A bit of googling resulted in this solution, after which the ADT plug-in installed fine.
Finally, we are actually ready to start coding.
Open up Eclipse and start a new project, select Android Project from the Android folder. When selecting a build target, I chose Android 2.3.3, which we can run under the newer version of Android later on if we want.
At this point, to make sure that our default Eclipse and Android environment is fine, click on run to load up a basic Hello World output.
You will need to create an Android Virtual Device now, so make one that matches your minimum SDK version.
To start using PhoneGap we need to copy in the libraries that it uses. Take the Android/phonegap-x.y.z.jar file in the directory that you extracted PhoneGap to, and put it into the /libs directory of your project. You will have to create this directory. Once the file is pasted into the directory, right-click on the jar file in the Project Explorer and add it to the build path of the project.
Within the /assets folder, create a www folder and inside that folder, an index.html file. Fill the html file with whatever dummy code you desire; we are only testing the environment again at this point. You will also need to copy Android/phonegap-x.y.z.js into this folder for later use.
Also copy the Android/xml folder from the PhoneGap extraction directory to the xml folder in your project.
In the project's AndroidManifest.xml file, add the following lines inside the <application> tags:
<activity android:name="com.phonegap.DroidGap" android:label="@string/app_name">
    <intent-filter>
    </intent-filter>
</activity>
Lastly, we need to update the java file in the project's /src directory to load PhoneGap.
- Add import com.phonegap.*;
- Make the public class extend DroidGap rather than Activity
- Comment out setContentView(R.layout.main); and add a new line that has super.loadUrl("file:///android_asset/www/index.html"); so it loads the index.html we created in /assets/www
Now let's run the project again and see our PhoneGap app in action.
If you get an error stating: java.lang.IllegalArgumentException: already added: Lcom/phonegap/AccelListener; I fixed this by cleaning the project.
We have succeeded at getting the bare minimum going with PhoneGap. It took a while to get here, but now we can start doing useful coding. Stay tuned for the next installment.
http://www.techrepublic.com/blog/australian-technology/set-up-an-environment-for-android-eclipse-and-phonegap/
Grasp this useful technique whilst making some watches ⌚️
What is it?
A technique for sharing logic between components. Components accept a prop that returns a function responsible for rendering something. This allows our component to focus on other logic.
For those in camp TL;DR, scroll down for a demo 👍
What do render props do?
Handle some or all of the rendering logic for a component.
```jsx
<SomeDataProvider render={data => <AwesomeComponent stuff={data.awesome} />} />
```
When to use?
When you’re repeating patterns/logic across components.
Examples;
- Repeating UI structures
- Hooking into/subscribing to a data source
- Hooking into global events (scroll, resize, etc.)
A Silly Example
Let’s create a watch ⌚️ Our watch component will use moment.js, a date and time utility library.

Every 1000ms we set the state to a new Moment. The state change triggers a render and we display the time.

```jsx
const Watch = () => {
  const [date, setDate] = useState(moment())
  useEffect(() => {
    const TICK = setInterval(() => setDate(moment()), 1000)
    return () => {
      clearInterval(TICK)
    }
  }, [])
  return (
    <Strap>
      <Bezel>
        <Screen>
          <Face>
            <Value>{date.format('HH')}</Value>
            <Value>{date.format('mm')}</Value>
          </Face>
        </Screen>
      </Bezel>
    </Strap>
  )
}
```
Don’t worry about Strap, Bezel, Screen, etc. or any of the styling. We are only concerned with the technique.
But what if we wanted a watch with a different face? Many wearables allow us to change the watch face. Do we create a new Watch variation for each face? No 👎

This is where a render prop comes into play. We can adjust Watch to utilise one for rendering a watch face. Watch becomes a component that provides the current time and passes that to a render prop.

```jsx
const Watch = ({ face }) => {
  const [date, setDate] = useState(moment())
  useEffect(() => {
    const TICK = setInterval(() => setDate(moment()), 1000)
    return () => {
      clearInterval(TICK)
    }
  }, [])
  return (
    <Strap>
      <Bezel>
        <Screen>
          {face(date)}
        </Screen>
      </Bezel>
    </Strap>
  )
}
```
Now we can create stateless face components that take a Moment and render it in different ways. Extracting our initial implementation might look something like

```jsx
const CustomFace = date => (
  <Face>
    <Value>{date.format('HH')}</Value>
    <Value>{date.format('mm')}</Value>
  </Face>
)

// JSX to render being
// <Watch face={CustomFace} />
```
What if we don’t pass in face? We’d get a blank watch. But we could rename CustomFace to DefaultFace and make it a defaultProp on Watch 👍
Nice 👍
Let’s create a new face. An analog one 🕔
```jsx
const AnalogFace = date => {
  const seconds = (360 / 60) * date.seconds()
  const minutes = (360 / 60) * date.minutes()
  const hours = (360 / 12) * date.format('h')
  return (
    <Face>
      <Hand type='seconds' value={seconds} />
      <Hand type='minutes' value={minutes} />
      <Hand value={hours} />
    </Face>
  )
}
```
This one takes the date and displays it with hands ✋
We could then extend this to create a slew of different watch faces 🤓 No need to repeat the logic.
```jsx
const App = () => (
  <Fragment>
    <Watch face={DayFace} />
    <Watch />
    <Watch face={AnalogFace} />
    <Watch face={DateFace} />
    <Watch face={SecondsFace} />
  </Fragment>
)

render(<App />, ROOT)
```
Giving us
And that’s it!
Using a render prop on our Watch component keeps the logic in one place and stops us from repeating ourselves. This makes things easier to maintain and reuse 💪
DOs 👍
- Use when there’s an opportunity to share component/render logic
DON’Ts 👎
- Overuse. Another pattern may be more appropriate.
- Avoid implementing render props with PureComponents unless your prop is statically defined
NOTES ⚠️
- A render prop can have any name. children is a render prop.
- Most components using a render prop could also be a higher-order component and vice versa!
That’s it!
A 3-minute intro to render props!
For further reading, check out the React Docs.
All the demos are available in this CodePen collection.
As always, any questions or suggestions, please feel free to leave a response or tweet me 🐦! And say "Hey!" anyway 😎
https://practicaldev-herokuapp-com.global.ssl.fastly.net/jh3y/react-s-render-props-technique-in-3-minutes-3njg
The age of AI is upon us, and many companies are beginning their AI journeys to reap the full potential of AI in their respective industries. But some still consider AI an immature technology with plenty of ways for it to go wrong. Therefore, before starting your long AI journey, there are some pitfalls you should avoid in implementing and developing AI solutions. They're the result of anecdotal, personal, and published experience of AI projects that could have gone better.
Reinventing the wheel: those are the right words to describe building an AI system that has already become an industry standard. It is a waste of your company's time and resources. Instead, buy it from a company that has done the research and development for years and has launched a product that is used and trusted by ample users. Embrace their solution, because this buy decision can get you high-quality AI services at a fraction of the cost and time that it would take to develop in-house. Building an AI system in-house is a costly and risky endeavor; only do it if the AI system is quite specialized to your business and allows you to build a unique, defensible advantage, something that can differentiate your company from its competitors. …
Understanding the various convolutional neural network (CNN) architectures can be tough, given the vast amount of information available out there. This article summarizes all the relevant Medium articles needed to comprehend these CNN architectures. Before that, a mind map of CNN architectures might help you understand the helicopter view of the subject.
Let’s admit it, grocery shopping sucks. The crowds, the out-of-stock items, and the cashier payment queue make grocery shopping painful. No wonder only 15 percent of customers say that they enjoy grocery shopping. The grudge held by customers towards grocery shopping doesn’t help the retailers, who have been troubled by high fixed costs, low profit margins, and intense competition for years. Although the aforementioned problems have bothered grocery retailers and their customers for a long time, there seems to have been no improvement in how people do their groceries over the years. Do you remember how you shopped for groceries 5, 10, or even 20 years ago? Compare it with how you shop for your groceries today. …
Some might say artificial intelligence (AI) is the biggest buzzword of this decade. But I can say AI is real, revolutionary, and has disrupted many industries. And I believe its applications have only scratched the surface, with wider applications coming in the near future. Banking and finance, retail and e-commerce, healthcare, and logistics are a handful of industries that have tasted the benefits of using AI in business.
While many industries have been revolutionized by AI, the application of AI in the oil and gas (O&G) industry has been limited. One of the factors causing this slow adoption is the unwillingness of O&G industry players to share and open their operational data. Sharing these data is considered taboo and unthinkable for most O&G companies because they think these data are sensitive and proprietary. But data is the core of AI. …
The easiest way to make the TensorFlow Object Detection API work on the NVIDIA RTX 20 Super Series is by using an NVIDIA GPU-Accelerated Container (NGC). NGC offers a comprehensive catalog of GPU-accelerated software for deep learning and machine learning frameworks.
We need to make sure to install a compatible driver, CUDA, and cuDNN. The NVIDIA RTX 20 Super Series GPUs are powered by the Turing architecture, which is only supported by NVIDIA driver versions ≥ 410.48 and CUDA versions ≥ 10.0.x. …
In this tutorial, we will learn how to modify Pandas dataframes. Three operations are discussed in this tutorial:
First, we need to import the required libraries.
# Importing NumPy module and aliasing as np
import numpy as np

# Importing Pandas module and aliasing as pd
import pandas as pd
We can delete a particular index or columns by calling the drop() function. The official documentation of the drop() function can be seen here. We can pass inplace=True to delete the data in place.
…
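As a hedged sketch of the drop() usage described above (the dataframe and its labels here are made up for illustration):

```python
import pandas as pd

# A small made-up dataframe for illustration.
df = pd.DataFrame({'A': [1, 2, 3], 'B': [4, 5, 6], 'C': [7, 8, 9]})

# Drop a column by label; axis=1 selects columns (axis=0, the default, selects rows).
no_c = df.drop('C', axis=1)

# Drop a row by its index label.
no_first = df.drop(0)

# With inplace=True the dataframe itself is modified and nothing is returned.
df.drop('B', axis=1, inplace=True)
```

Note that without inplace=True, drop() returns a new dataframe and leaves the original untouched.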
Importing data is a crucial step before we can start data exploration and analysis. One of the most commonly used data types is CSV (Comma-Separated Value). In this tutorial, we will learn how to import CSV files into Pandas dataframes.
We will apply the read_csv() function to import CSV files. The complete documentation of the read_csv() function can be found here. First, we need to import the required libraries.
# Importing NumPy module and aliasing as np
import numpy as np

# Importing Pandas module and aliasing as pd
import pandas as pd
The most important thing before importing your data is to know the location of your data. It is possible to import data from both online and local files. …
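A self-contained, hedged sketch of read_csv() in action: the "file" here is an in-memory string (via io.StringIO) so the example runs anywhere, but a local path or a URL works the same way; the column names are made up.

```python
import io
import pandas as pd

# Stand-in for a CSV file on disk or online; names and values are made up.
csv_data = io.StringIO("name,score\nalice,90\nbob,85\n")

df = pd.read_csv(csv_data)

print(df.shape)          # (2, 2)
print(list(df.columns))  # ['name', 'score']
```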
Before performing data analysis, we often need to know the structure of our data. In this tutorial, we focus on how to explore the structure of a dataframe in Pandas. This includes:
Each will be discussed in the following sections. First, we need to import the required libraries.
# Importing NumPy module and aliasing as np
import numpy as np

# Importing Pandas module and aliasing as pd
import pandas as pd
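As a hedged illustration of the kinds of structure checks this covers (the dataframe below is made up):

```python
import pandas as pd

# A small made-up dataframe to inspect.
df = pd.DataFrame({'city': ['Oslo', 'Lima'], 'pop': [0.7, 10.7]})

shape = df.shape           # (rows, columns)
cols = list(df.columns)    # column labels
types = df.dtypes          # per-column data types
first_rows = df.head()     # first few rows (up to 5 by default)
```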
We often need to select and get subsets of the dataset for performing certain analyses and visualizations. Pandas facilitates data selection and indexing using three types of multi-axis indexing:

- [] and the attribute operator .
- .loc[]
- .iloc[]
Each will be discussed in the following sections. First, we need to import the required libraries.
# Importing NumPy module and aliasing as np
import numpy as np

# Importing Pandas module and aliasing as pd
import pandas as pd
Suppose we have a dataframe as shown below:
In [1]:
# Creating 'df' dataframe
df = pd.DataFrame([[1, 2, 3], [4, 5, 6], [7, 8, 9], [10, 11, 12]],
                  columns = ['A', 'B'…
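A hedged sketch of the three indexing styles on a comparable dataframe (the third column label 'C' is an assumption, since the original snippet is cut off):

```python
import pandas as pd

# Comparable dataframe; the 'C' label is assumed.
df = pd.DataFrame([[1, 2, 3], [4, 5, 6], [7, 8, 9], [10, 11, 12]],
                  columns=['A', 'B', 'C'])

col_a = df['A']            # [] indexing by column label
col_b = df.B               # attribute operator access to a column
row_val = df.loc[0, 'B']   # .loc: label-based (row label 0, column 'B')
cell = df.iloc[1, 2]       # .iloc: position-based (second row, third column)
```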
In recent years, the term 'machine learning' has become more than a buzzword. It changes the way we conduct business and make decisions every day. In fact, machine learning has been used in numerous applications, such as:
As machine learning becomes more and more valuable in our daily lives, it is important for us to understand what it is and how it works. But machine learning can seem highly technical and daunting for non-computer-science people. …
https://medium.com/@andikarachman
Game Managers
Checked with version: 5.2
- Difficulty: Intermediate
Phase 7 teaches you all about the architecture of the game - the tank and game managers which handle creating the rounds and deciding the winner of the game. We also learn about coroutines and how they are used in scenarios like this.
Transcripts
- 00:00 - 00:02
Phase 7 - Managers.
- 00:04 - 00:07
Okay, so in this particular section
- 00:08 - 00:09
we're going to create the game manager
- 00:09 - 00:11
and the idea is that by the end of it
- 00:11 - 00:13
we will have rounds with tanks
- 00:13 - 00:15
where you can drive around and fire
- 00:15 - 00:17
and destroy another tank and at the
- 00:17 - 00:19
end of it you have a UI that
- 00:19 - 00:21
explains that you have won that round,
- 00:21 - 00:23
and then after several rounds have been won
- 00:23 - 00:25
that you've won the game.
- 00:26 - 00:28
There it is, turning red.
- 00:28 - 00:30
And you get this text at the end,
- 00:30 - 00:32
Player (number) wins the round!
- 00:32 - 00:34
And you'll notice also that we've colored
- 00:34 - 00:36
the UI to match the color
- 00:36 - 00:37
of the tank.
- 00:37 - 00:39
So we're going to start by creating
- 00:39 - 00:43
the spawn points for each tank.
- 00:43 - 00:45
So for that we just need a
- 00:45 - 00:48
position reference in the world,
- 00:48 - 00:50
so we're going to use an empty game object
- 00:50 - 00:52
for that, it's very simple, we're just
- 00:52 - 00:54
going to go to the hierarchy and we're going to
- 00:54 - 00:56
choose Create Empty.
- 00:57 - 01:00
I'm going to rename this one SpawnPoint1.
- 01:02 - 01:04
Then I'm going to duplicate it.
- 01:04 - 01:06
Control-D or Command-D
- 01:06 - 01:08
depending on your platform, and call it
- 01:08 - 01:10
SpawnPoint2.
- 01:10 - 01:12
And I'll give you some positions for those.
- 01:13 - 01:19
So SpawnPoint1 is (-3, 0, 30).
- 01:20 - 01:25
With a rotation of (0, 180, 0).
- 01:26 - 01:28
It's going to be a lot quicker to read it off the
- 01:28 - 01:30
slider because I've got both spawn points on at once.
- 01:30 - 01:34
So SpawnPoint1 position (-3, 0, 30)
- 01:34 - 01:38
Rotation 180 in the Y axis, 0 in the other two.
- 01:39 - 01:43
SpawnPoint2 (13, 0, -5)
- 01:43 - 01:47
ensuring a rotation of SpawnPoint2 is (0, 0, 0).
- 01:48 - 01:50
And naturally this is one of those things
- 01:50 - 01:51
that is not going to make a huge difference.
- 01:51 - 01:53
If you decide you want to move those spawn points somewhere
- 01:53 - 01:56
else in your game feel free to go ahead and do that.
- 01:56 - 01:58
Obviously you want to keep the Y axis
- 01:58 - 02:00
positioned at 0 at all times
- 02:00 - 02:02
because that's where you want the tanks to start.
- 02:03 - 02:06
Because we're dealing with these spawn points which are
- 02:06 - 02:08
effectively empty game objects,
- 02:08 - 02:10
something that just has a transform component
- 02:10 - 02:12
it's very difficult to see them in the scene.
- 02:12 - 02:14
So you have to select the translate tool to
- 02:14 - 02:16
be able to see their handles,
- 02:16 - 02:17
so SpawnPoint1 is here,
- 02:17 - 02:19
SpawnPoint2 is over here.
- 02:19 - 02:21
Now there's a quick way to rectify that
- 02:21 - 02:23
which is to select the SpawnPoint
- 02:23 - 02:25
and give it what we call a gizmo.
- 02:26 - 02:28
So a gizmo is a 2D representation
- 02:28 - 02:30
of something in the scene view.
- 02:30 - 02:32
So if you reselect SpawnPoint1.
- 02:33 - 02:35
And then at the top of the inspector
- 02:35 - 02:37
there's this little cube icon.
- 02:37 - 02:41
If you click down you can choose from a number of gizmos.
- 02:41 - 02:43
You can specify your own if you bring in
- 02:43 - 02:45
a texture in Unity, you can set that as the gizmo,
- 02:45 - 02:47
which is super handy.
- 02:47 - 02:49
But for this we're just going to select a blue background.
- 02:49 - 02:51
So that's going to then print the name
- 02:52 - 02:54
SpawnPoint1 just like that.
- 02:56 - 02:58
And then we'll do the same for SpawnPoint2
- 02:58 - 03:00
we're going to choose red.
- 03:00 - 03:02
So that's going to match the color of our tanks
- 03:02 - 03:04
that we're spawning as well, so SpawnPoint2 is red,
- 03:04 - 03:06
SpawnPoint1 is blue.
- 03:09 - 03:12
Okay, so that's where my two spawn points are.
- 03:13 - 03:15
So we just, design-wise, decided
- 03:15 - 03:17
to put them either side of a building so the
- 03:17 - 03:19
players have to at least drive around to start
- 03:19 - 03:20
shooting in the game.
- 03:20 - 03:21
So that's gizmos.
- 03:21 - 03:23
The next thing we need to do is create
- 03:23 - 03:25
a screen space canvas.
- 03:25 - 03:27
So when we first put in a UI
- 03:27 - 03:29
element in to the game, a slider for the health,
- 03:29 - 03:32
I mentioned that ordinarily our
- 03:32 - 03:34
UI is over the entire screen.
- 03:34 - 03:36
So that's what we're going to do now.
- 03:36 - 03:38
So in the hierarchy I'm just going to choose the
- 03:38 - 03:40
Create menu, I'm going to choose UI,
- 03:42 - 03:44
and Canvas.
- 03:44 - 03:46
And yet again it's going to appear huge,
- 03:46 - 03:48
and it should look like that if you've done it right.
- 03:49 - 03:51
And you can't really get it wrong because that's the default.
- 03:52 - 03:54
Then what we're going to do is rename this
- 03:54 - 03:56
MessageCanvas, so F2 on PC,
- 03:56 - 03:58
or Return on Mac,
- 03:58 - 04:00
just rename it MessageCanvas.
- 04:01 - 04:03
Now we want to work with this canvas a little
- 04:03 - 04:05
bit and add a text element that's going to be
- 04:05 - 04:07
the basis of our messaging at the end
- 04:07 - 04:08
of each round.
- 04:08 - 04:10
So what I'm going to do is,
- 04:10 - 04:12
first off I'm going to set the scene view to 2D mode.
- 04:12 - 04:15
So at the top there's a little button here
- 04:16 - 04:18
by the scene tab that says 2D, click on that.
- 04:19 - 04:21
It's going to constrain to a particular axis.
- 04:21 - 04:23
Then what I'm going to do is to click on
- 04:23 - 04:25
my MessageCanvas and you can
- 04:25 - 04:28
either hover over the scene view
- 04:28 - 04:30
and press F on the keyboard to frame,
- 04:30 - 04:33
or you can go to Edit - Frame Selected.
- 04:34 - 04:36
Then you can just use your scroll wheel
- 04:36 - 04:38
or gesture on your trackpad if that's what you use,
- 04:39 - 04:41
to kind of frame that in the scene view.
- 04:42 - 04:44
Then what we're going to do is actually add a
- 04:44 - 04:47
text field, so I'm going to right-click on my MessageCanvas,
- 04:47 - 04:49
so that I am immediately creating something
- 04:49 - 04:51
under that parent object.
- 04:51 - 04:53
And I'm going to choose UI Text
- 04:55 - 04:57
What you should see then is you have something called
- 04:57 - 04:59
New Text, it's a new text field,
- 04:59 - 05:01
it's left aligned and it's kind of sat in
- 05:01 - 05:03
the middle of our canvas.
- 05:03 - 05:04
It should look like that.
- 05:04 - 05:06
So basically with this thing we want it
- 05:06 - 05:08
to be nice huge text
- 05:08 - 05:10
and we want it to be stretched over most of the
- 05:10 - 05:13
screen except for a border around the edge.
- 05:13 - 05:15
So depending on what we put in to it we will use
- 05:15 - 05:17
Best Fit on the text which will scale it
- 05:17 - 05:19
if there's too much information on the screen,
- 05:19 - 05:21
and will ultimately fit it in to this canvas.
- 05:21 - 05:24
So on the Rect Transform for the text
- 05:25 - 05:27
the anchors for X and Y
- 05:27 - 05:29
should have a minimum of 0.1
- 05:31 - 05:33
and a maximum of 0.9.
- 05:34 - 05:36
So it should look like this.
- 05:36 - 05:39
So note that it's X, Y across,
- 05:40 - 05:42
and then X, Y across again.
- 05:42 - 05:45
So Min is the first two values, Max is the second,
- 05:45 - 05:48
so 0.1, 0.1, 0.9, 0.9.
- 05:48 - 05:51
Then what we're going to do is reset
- 05:51 - 05:53
all of these values.
- 05:53 - 05:57
So the anchors are basically where those
- 05:57 - 06:00
edges start from, so if I put those all to 0
- 06:00 - 06:03
and I say that X and Y should have 0.1
- 06:03 - 06:05
it's going to take that as a proportion
- 06:05 - 06:07
of the screen that we've got.
- 06:07 - 06:09
So you should now see that your screen looks
- 06:09 - 06:11
like this, it's got this border
- 06:11 - 06:13
around the outside and our text
- 06:13 - 06:16
is basically starting up in the upper left there.
- 06:16 - 06:19
Okay, so we've made a new
- 06:19 - 06:21
MessageCanvas, we've added a Text
- 06:21 - 06:23
new game object to it.
- 06:23 - 06:25
And on the rect transform the anchors for
- 06:25 - 06:27
X and Y have a minimum of 0.1
- 06:27 - 06:29
and a maximum of 0.9,
- 06:29 - 06:32
just creating this pad around the outside.
- 06:32 - 06:34
Obviously it varies depending on width and height
- 06:34 - 06:36
because of the aspect ratio
- 06:36 - 06:38
being wider than it is tall.
- 06:38 - 06:42
I'll reset all of those left, right, top, bottom to 0.
- 06:43 - 06:45
So then we'll actually work with the text component.
- 06:45 - 06:47
With that Text game object still selected
- 06:47 - 06:49
we're going to scroll down to our Text
- 06:49 - 06:51
component which is here.
- 06:52 - 06:55
So obviously the main characteristic of text
- 06:55 - 06:57
is what it actually says,
- 06:57 - 06:59
so I'm going to write in TANKS!
- 06:59 - 07:01
You can do that or I'm sure you can name the game
- 07:01 - 07:03
something hilariously different.
- 07:04 - 07:06
So feel free to do that.
- 07:06 - 07:08
And then the font that we've included for you guys,
- 07:08 - 07:10
which is a free for use font,
- 07:10 - 07:13
is one called BowlbyOne-Regular.
- 07:13 - 07:15
So next to the Font field just
- 07:15 - 07:17
use the circle select to change from the default
- 07:17 - 07:19
Arial to that one.
- 07:20 - 07:22
And then because this entire
- 07:22 - 07:24
block here is that text field
- 07:24 - 07:27
we want to set the alignment under the Paragraph section
- 07:27 - 07:30
to centre and middle.
- 07:32 - 07:34
So that'll place it right in the centre of the screen.
- 07:34 - 07:36
Then finally what we're going to do is
- 07:36 - 07:38
enable Best Fit.
- 07:40 - 07:43
And set the maximum size to 60.
- 07:44 - 07:46
And the color to White.
- 07:49 - 07:51
So the font is BowlbyOne-Regular.
- 07:52 - 07:54
I've enabled Best Fit which will give me
- 07:54 - 07:57
a Min and Max size, I set the maximum to 60.
- 07:57 - 07:59
And then I set my color to White
- 08:00 - 08:03
just by dragging up in this color panel.
- 08:06 - 08:08
It should look like this if you've got it right.
- 08:08 - 08:10
Okay, so this might be tricky
- 08:10 - 08:12
to see so what we're going to do is just add
- 08:12 - 08:14
a drop shadow to it, nice and simple.
- 08:15 - 08:17
And for that we have a separate component.
- 08:17 - 08:19
So if you click the Add Component button
- 08:19 - 08:21
and just type in Shadow that will jump to that
- 08:21 - 08:23
very simply and you can hit Return.
- 08:24 - 08:26
And for the shadow we're going to
- 08:26 - 08:29
set the color, so first off we're going to choose
- 08:29 - 08:31
a kind of brown color to go with the sand.
- 08:31 - 08:37
So that is 114, 71, 40,
- 08:38 - 08:40
and the alpha remains the same,
- 08:40 - 08:41
it should be 128.
- 08:42 - 08:45
Then because that's only just poking out from
- 08:45 - 08:47
just behind the tank's actual text
- 08:47 - 08:51
I'm going to set the Effect Distance to -3 in X and Y.
- 08:52 - 08:54
It should look like that.
- 08:54 - 08:56
If you want to get artistic at this point, feel free.
- 08:57 - 08:59
And finally, once we've designed that
- 08:59 - 09:01
I'm just going to save my scene really quick
- 09:01 - 09:03
and then I'm going to disable 2D mode
- 09:03 - 09:05
because I don't want to look at this stuff any more
- 09:05 - 09:08
I want to go back to actually messing with my levels
- 09:08 - 09:10
so I'm going to uncheck 2D mode on the scene view
- 09:11 - 09:13
and then I'm going to select my level
- 09:13 - 09:15
art and I'm going to frame that
- 09:15 - 09:17
and then zoom back in to where I was.
- 09:17 - 09:20
So one more time, I was on my MessageCanvas
- 09:20 - 09:22
in 2D mode, framed,
- 09:23 - 09:25
and I'm just going to reselect my level art,
- 09:25 - 09:27
hover, press F to frame,
- 09:28 - 09:30
when not in 2D mode and zoom in.
- 09:33 - 09:35
We've added our text component
- 09:35 - 09:37
and we've set the text to say TANKS!
- 09:37 - 09:39
or something otherwise hilarious.
- 09:39 - 09:42
We have used circle select to choose our font.
- 09:42 - 09:44
That font is BowlbyOne-Regular.
- 09:44 - 09:46
It's a free font that we took from Google.
- 09:46 - 09:48
Thank you Google.
- 09:48 - 09:50
We've set the alignment to centre and middle.
- 09:51 - 09:53
So that it sits right in the centre of
- 09:53 - 09:55
where we've set it up.
- 09:55 - 09:56
We've enabled Best fit.
- 09:56 - 09:58
So basically Best Fit will allow that
- 09:58 - 10:00
to scale up to the maximum size.
- 10:01 - 10:03
So if we end up putting a lot of text in
- 10:03 - 10:05
to this text field then it will be between
- 10:05 - 10:07
60 and 10, which is the default
- 10:07 - 10:09
minimum if the amount of text forces it to
- 10:09 - 10:11
be smaller those are the things it'll be between
- 10:11 - 10:13
before it starts actually clipping over the
- 10:13 - 10:14
edge of the rectangle.
- 10:14 - 10:16
Then we set the color to White.
- 10:16 - 10:18
And we added a shadow component
- 10:18 - 10:21
to the text and we set the Effect color to
- 10:21 - 10:25
be brown and (-3, -3) for the Distance.
- 10:25 - 10:27
So that's back to the left and down a little.
- 10:27 - 10:29
And then we disabled 2D mode to go back
- 10:29 - 10:31
to the level itself.
- 10:33 - 10:35
Then we need to get back
- 10:35 - 10:37
to actually framing our tank and
- 10:37 - 10:39
worrying about how the camera will behave
- 10:39 - 10:41
when there's more than one tank.
- 10:41 - 10:43
So what I'm going to do is select my
- 10:43 - 10:45
CameraRig game object.
- 10:45 - 10:47
Now what you'll notice with the camera rig
- 10:47 - 10:49
is that we now have something missing.
- 10:49 - 10:51
So before, we dragged our single tank
- 10:51 - 10:54
on and dropped it on to the targets array,
- 10:54 - 10:56
the thing at the bottom here, this thing.
- 10:57 - 10:59
Now it has a size of 1 because
- 10:59 - 11:01
we've already populated that field,
- 11:01 - 11:03
but this is missing because we've deleted
- 11:03 - 11:05
our tank so it can't find it any more.
- 11:05 - 11:07
But that's fine because we don't actually want this
- 11:07 - 11:09
to be there at all, we want the script to handle it for us.
- 11:09 - 11:11
So what I'm going to do is to
- 11:11 - 11:13
go to my CameraControl script,
- 11:14 - 11:16
I'm going to set my size back to 0,
- 11:16 - 11:18
so it doesn't exist any more
- 11:18 - 11:21
and then I'm going to double click on my CameraControl script
- 11:21 - 11:23
to go back to editing it.
- 11:23 - 11:25
So you may remember when we first came across
- 11:25 - 11:28
this there was that HideInInspector bit at the top?
- 11:29 - 11:32
And now comes the time to
- 11:32 - 11:36
uncomment that bit so that this Targets field
- 11:36 - 11:38
is no longer seen in the inspector.
- 11:38 - 11:40
So we're not deleting that part,
- 11:40 - 11:42
we're just removing the /* and */ on either side of it.
- 11:42 - 11:44
So it should look like this.
- 11:46 - 11:49
So when you save your script and return to the editor
- 11:50 - 11:53
you'll notice that the camera control no longer shows that bit.
- 11:55 - 11:57
And compiled, there we go.
- 11:58 - 12:00
Okay, so your Targets array should disappear
- 12:00 - 12:02
it's still there, it's still accessible via the script,
- 12:02 - 12:04
it's just not visible for you to drag and drop stuff
- 12:04 - 12:06
on to, which is exactly what we needed.
- 12:07 - 12:09
Then we are ready to actually create
- 12:09 - 12:12
our game manager, so I'm going to save my scene
- 12:12 - 12:15
and I'm going to use the hierarchy Create button
- 12:15 - 12:17
to create an empty
- 12:18 - 12:20
and I'm going to name it GameManager.
- 12:24 - 12:26
So we're just using this as
- 12:26 - 12:30
a thing that can host our GameManager script,
- 12:30 - 12:32
we could really attach this to anything,
- 12:32 - 12:34
we could attach it to the level art or
- 12:34 - 12:36
something that we know is always going to be in
- 12:36 - 12:38
the scene but for our purposes it just makes sense
- 12:38 - 12:40
to have a dedicated game object we can
- 12:40 - 12:43
select and go back and set everything up on.
- 12:43 - 12:47
So our GameManager is going to be an independent object.
- 12:47 - 12:50
So in the Scripts Managers folder you will
- 12:50 - 12:54
find two files, one of them is the GameManager,
- 12:54 - 12:56
one of them is the TankManager, and we'll come on
- 12:56 - 12:59
to explain what the tank manager does shortly.
- 12:59 - 13:01
For now I'm going to just grab my
- 13:01 - 13:05
GameManager and drop it on to my GameManager object.
- 13:05 - 13:06
So grab the GameManager script,
- 13:06 - 13:08
drop it on to the GameManager object.
- 13:08 - 13:12
So let's actually do things the other way round
- 13:12 - 13:14
this time, we're going to populate these variables
- 13:14 - 13:16
and then we'll look at the script and see how it works
- 13:16 - 13:18
because there's a fair bit to it.
- 13:18 - 13:20
So we have a number of rounds to win
- 13:20 - 13:23
and as we said earlier there are 5 rounds to win.
- 13:23 - 13:25
There's a start and end delay to
- 13:25 - 13:28
each round which will allow the players to actually
- 13:28 - 13:30
read the text that's on the screen.
- 13:30 - 13:33
And then there is a reference to our CameraControl,
- 13:33 - 13:35
so remember we said that this script would
- 13:35 - 13:37
tell the camera where the tanks were and,
- 13:37 - 13:39
instantiate them and whatnot. So it needs
- 13:39 - 13:41
a reference, so that's the first thing we're going to setup.
- 13:42 - 13:44
So the CameraRig has that
- 13:44 - 13:47
cameraControl script attached to it,
- 13:47 - 13:50
so this is expecting something of type Camera Control.
- 13:50 - 13:52
So that component is attached to the camera rigs
- 13:52 - 13:54
so if I just drag and drop my CameraRig
- 13:55 - 13:57
on to that, then that will assign it.
- 13:57 - 13:59
Then, under Message Canvas,
- 13:59 - 14:00
I have my text game object.
- 14:00 - 14:02
that's my Message Text, I'm going to drag and
- 14:02 - 14:04
drop that on to assign it too.
- 14:05 - 14:07
And finally my Tank prefab,
- 14:08 - 14:10
let's zoom out a little,
- 14:10 - 14:12
is in my Prefabs folder, so I'm going to select Prefabs,
- 14:12 - 14:15
and I'm going to grab and drop my tank on there.
- 14:18 - 14:20
Then you will notice conspicuously
- 14:20 - 14:23
there is an array at the bottom called Tanks.
- 14:25 - 14:28
Which if you expand has size.
- 14:28 - 14:31
So as we know there are 2 tanks in our game.
- 14:31 - 14:33
So the size needs to be 2 so there
- 14:33 - 14:35
are 2 items in this array
- 14:35 - 14:37
Player1 Tank, Player2 Tank.
- 14:37 - 14:39
If you expand that once you've typed in
- 14:39 - 14:43
to it and hit return you will see there's Element 0 and Element 1.
- 14:44 - 14:46
So just a reminder, any array always starts
- 14:46 - 14:48
at 0, so that's why you're seeing
- 14:48 - 14:50
instead of 1 and 2, it's 0 and 1.
- 14:50 - 14:52
So the two things that we actually need here are
- 14:52 - 14:54
just the color that we want it to be,
- 14:54 - 14:56
so this color will apply to
- 14:56 - 14:58
the tank itself and
- 14:58 - 15:00
it'll also apply to the name
- 15:00 - 15:03
of the tank or the name of player1 and player2
- 15:03 - 15:04
in the UI itself.
- 15:04 - 15:06
So this color will get used for both of those things.
- 15:07 - 15:09
I'm going to setup red and blue for this.
- 15:10 - 15:12
So Element 0, I'm going to click on the color block.
- 15:14 - 15:16
So that big black square there, click on that
- 15:16 - 15:18
to bring up the color picker
- 15:18 - 15:19
and then the color I'm going to use there is
- 15:19 - 15:23
(40, 100, 178).
- 15:24 - 15:26
So shade of blue like that.
- 15:28 - 15:30
You'll notice that this conspicuously matches
- 15:30 - 15:33
the name tag gizmo that we gave Spawn Point 1.
- 15:34 - 15:35
Funny that.
- 15:35 - 15:37
So we will drag on SpawnPoint1
- 15:37 - 15:39
as the Spawn Point.
- 15:40 - 15:42
Then Element 1 will have a different color.
- 15:44 - 15:47
And that is a red which is 229,
- 15:48 - 15:49
I'm going to recap these in a moment.
- 15:49 - 15:52
(229, 46, 40).
- 15:54 - 15:57
(229, 46, 40) R, G and B.
- 15:58 - 16:00
And you guessed it, SpawnPoint2 is
- 16:00 - 16:02
the spawn point to drop on.
- 16:03 - 16:05
We've expanded our Tanks array so the
- 16:05 - 16:07
GameManager will create these tanks and color them
- 16:07 - 16:09
and also color the UI for us.
- 16:09 - 16:11
We've assigned all of our references,
- 16:11 - 16:13
like the text and the camera rig.
- 16:14 - 16:16
And then the colors we've used are
- 16:16 - 16:20
blue of (40, 100, 178).
- 16:21 - 16:22
That's for SpawnPoint1.
- 16:22 - 16:26
And then color of red (229, 46, 40).
- 16:26 - 16:28
for SpawnPoint2,
- 16:28 - 16:30
which should also be dragged on.
- 16:30 - 16:33
Okay, so let's talk about our GameManager.
- 16:33 - 16:35
The GameManager is in charge
- 16:35 - 16:37
of the game, weirdly enough,
- 16:37 - 16:39
the clue is in the name,
- 16:39 - 16:41
but the way that that works is pretty much what you're seeing here,
- 16:41 - 16:44
so it's in charge of initialising the game,
- 16:44 - 16:46
it needs to spawn as many tanks
- 16:46 - 16:48
as we've told it to, in this case 2.
- 16:49 - 16:51
And it needs to setup the camera targets,
- 16:51 - 16:53
so it needs to say 'hey camera control, these are
- 16:53 - 16:55
the tanks that I've spawned and you need to
- 16:55 - 16:57
focus in on them when the game starts'.
- 16:57 - 17:00
Then it needs to run the states of the game.
- 17:00 - 17:02
So we're going to look at the code in
- 17:02 - 17:04
the GameManager shortly and you'll see
- 17:04 - 17:07
that it's basically a simple sort of state machine that's running
- 17:07 - 17:09
round starting what happens during
- 17:09 - 17:11
round playing and what needs to be decided
- 17:11 - 17:13
when the round ends.
- 17:13 - 17:16
And all these things link in to the Tank Manager,
- 17:16 - 17:18
which is a separate script.
- 17:19 - 17:21
So the Tank Manager handles
- 17:21 - 17:23
shooting and movement for the tanks,
- 17:23 - 17:26
and also visual elements, i.e. the UI.
- 17:27 - 17:29
So what you need to understand is
- 17:29 - 17:31
that each tank gets assigned
- 17:31 - 17:33
its own tank manager, and that
- 17:33 - 17:35
tank manager is then in charge of
- 17:35 - 17:38
turning off input, control, and shooting.
- 17:38 - 17:42
So every time that tank gets spawned in the world
- 17:42 - 17:44
it doesn't know what it's got to do but it has
- 17:44 - 17:46
its tank manager there to sort it out.
- 17:46 - 17:48
So obviously this game is extendable
- 17:48 - 17:50
so we've put this kind of third one there to
- 17:50 - 17:52
kind of mean 'and so on'
- 17:52 - 17:54
as we populate that array with more tanks
- 17:54 - 17:55
it can do more things.
- 17:56 - 17:58
So shall we have a quick look at
- 17:58 - 18:00
the TankManager script?
- 18:00 - 18:02
Because that is the first one
- 18:02 - 18:04
that we need to understand.
- 18:05 - 18:07
Double click on the icons to open both those scripts.
- 18:09 - 18:11
And make sure you're looking at the TankManager.
- 18:12 - 18:14
So the TankManager is already complete
- 18:14 - 18:16
and we'll go through that briefly now
- 18:16 - 18:18
so you can understand what it's doing,
- 18:18 - 18:20
what it's role is and how it interacts
- 18:20 - 18:22
with the GameManager.
- 18:22 - 18:24
Most of the scripts that we're dealing with
- 18:24 - 18:26
today and in fact all of them other than
- 18:26 - 18:28
this one are MonoBehaviours.
- 18:28 - 18:31
So after it says public class and then the
- 18:31 - 18:34
name of the class you've got a colon and then MonoBehaviour.
- 18:35 - 18:37
CameraControl : MonoBehaviour.
- 18:38 - 18:40
So what that means is that
- 18:40 - 18:42
this script you can drop on to a game object
- 18:42 - 18:44
and it will have functions
- 18:44 - 18:47
that are called back such as Awake, Start,
- 18:47 - 18:49
Update, Fixed Update, those sorts of things.
- 18:49 - 18:51
That's all part of MonoBehaviour.
- 18:52 - 18:54
But this one does not have that.
- 18:54 - 18:56
It's not inheriting from anything before that.
- 18:57 - 18:59
So everything that you see in TankManager
- 18:59 - 19:01
is its own thing.
- 19:01 - 19:03
We've also got this attribute at the top,
- 19:03 - 19:05
Serializable, so what that's doing is
- 19:05 - 19:07
saying to Unity
- 19:07 - 19:09
'when you have an instance of this
- 19:09 - 19:11
you can show it in the inspector'.
- 19:12 - 19:14
Most of the time you don't need that because
- 19:14 - 19:16
by default fields are serializable.
- 19:17 - 19:19
But when you've got a class that you make yourself
- 19:19 - 19:21
you need to mark it as serializable
- 19:21 - 19:23
so that it'll show up in the inspector.
- 19:25 - 19:27
Okay, so let's have a look at the public variables.
- 19:27 - 19:29
So we already know the first two,
- 19:29 - 19:31
we've got the color, which is the color
- 19:31 - 19:32
the player is going to be when it's spawned,
- 19:32 - 19:34
the tank and its name.
- 19:35 - 19:37
And we've got a transform representing the
- 19:37 - 19:39
spawn point so that's obviously where
- 19:39 - 19:41
and in what direction it's going to be spawned.
- 19:42 - 19:44
So very crucially, what you're actually seeing
- 19:44 - 19:47
here is the result of what you just
- 19:47 - 19:49
filled in in the inspector.
- 19:49 - 19:51
So in the inspector we have this thing called Tanks
- 19:51 - 19:53
so each of these elements is
- 19:53 - 19:55
effectively those two things,
- 19:55 - 19:58
these are individual tank managers and those are the
- 19:58 - 20:00
only things that are shown in the inspector.
- 20:00 - 20:04
That's because the Tanks array in the Game Manager
- 20:04 - 20:06
is an array of tank managers.
- 20:07 - 20:08
So it's showing those
- 20:08 - 20:10
two because we've told it to but the rest
- 20:10 - 20:12
we're saying 'hide those from the inspector'.
- 20:13 - 20:15
so if I jump over to Game Manager really quickly.
- 20:18 - 20:20
so there's an array of TankManagers
- 20:20 - 20:23
so this array of those classes is called Tanks,
- 20:23 - 20:26
so that's why you're seeing that in the inspector.
- 20:26 - 20:28
Okay, so the rest of these public
- 20:28 - 20:31
fields we've got HideInInspector to stop it from
- 20:31 - 20:33
showing up because we don't want people to
- 20:33 - 20:35
be able to adjust those.
- 20:35 - 20:37
So first is the PlayerNumber, so that's going to filter
- 20:37 - 20:39
through to the shooting Script and the Movement script
- 20:40 - 20:42
to tell each of those scripts
- 20:42 - 20:44
what input it needs, because you'll remember
- 20:44 - 20:46
they're based on the PlayerNumber.
- 20:47 - 20:49
Next we've got m_ColoredPlayerText.
- 20:49 - 20:51
So you'll notice that when we
- 20:51 - 20:53
showed that little video the text
- 20:53 - 20:56
that came up was in the color of the player.
- 20:57 - 21:00
So the way you do that is you use
- 21:00 - 21:02
HTML-like rich text
- 21:02 - 21:04
and we'll show you how to do that in a moment,
- 21:04 - 21:05
but that's just a variable to store that.
- 21:05 - 21:08
Next we've got a game object which is
- 21:08 - 21:10
storing the instance of the tank.
- 21:10 - 21:12
So if we need to
- 21:12 - 21:14
turn anything on or off we need to
- 21:14 - 21:17
get that referenced through the instance.
- 21:17 - 21:19
And the number of wins that the
- 21:19 - 21:21
tank currently has, so when it
- 21:21 - 21:23
gets enough wins it'll win the game.
- 21:23 - 21:25
Next we've got a few private references
- 21:25 - 21:28
so we've got references to the TankMovement and TankShooting scripts
- 21:28 - 21:31
so we can turn those on and off when we need to.
- 21:32 - 21:34
And a reference to the Canvas game object
- 21:34 - 21:37
so that we can turn the UI on and off.
- 21:37 - 21:40
Next we've got a public function called Setup.
- 21:40 - 21:42
So this is public because it's going to be called by
- 21:42 - 21:45
the GameManager when it first creates these tanks.
- 21:46 - 21:48
Seeing as the tank has been created
- 21:48 - 21:50
it's going to find the references to
- 21:50 - 21:52
the Movement and Shooting scripts
- 21:52 - 21:55
by saying that the Instance, get component from the instance.
- 21:56 - 21:58
So set the instance, get the components there.
- 21:59 - 22:01
Next, a little bit more obtuse,
- 22:02 - 22:04
we've got the Canvas game object
- 22:04 - 22:06
so what it's doing is it's got the instance,
- 22:07 - 22:09
it's finding a component of type
- 22:09 - 22:11
Canvas in its children,
- 22:11 - 22:13
because you'll remember all of the UI is
- 22:13 - 22:15
underneath a canvas, so we're finding that
- 22:15 - 22:17
canvas in the children,
- 22:17 - 22:19
and then we're getting the game object that that's on.
- 22:20 - 22:22
That's what that bit is doing there.
- 22:22 - 22:24
Next we're setting the PlayerNumber on the
- 22:24 - 22:26
Movement and Shooting scripts.
- 22:27 - 22:32
And then there's the HTML-like rich text.
- 22:32 - 22:34
So this looks a little bit scary and confusing,
- 22:34 - 22:36
but basically what we're doing is taking bits
- 22:36 - 22:39
of the text and then putting stuff in between them.
- 22:39 - 22:41
So for example,
- 22:41 - 22:43
if you saw in the video earlier we showed,
- 22:43 - 22:44
at the end it says
- 22:44 - 22:46
'Player 1 - X amount of wins'
- 22:46 - 22:49
'Player 2 - X number of wins'.
- 22:49 - 22:51
So what we're doing with this is we're just
- 22:51 - 22:53
adding bits to a string.
- 22:53 - 22:55
So this is a string here, in inverted commas,
- 22:55 - 22:57
and then we add to it something that's
- 22:57 - 22:59
converting a piece of code
- 22:59 - 23:01
to a particular color by using
- 23:01 - 23:03
this thing called ColorUtility
- 23:03 - 23:06
ToHtmlStringRGB, very long winded thing but
- 23:06 - 23:08
it basically takes in a color,
- 23:08 - 23:10
and will then color whatever
- 23:10 - 23:13
the text is after that, and the thing it's coloring is Player.
- 23:13 - 23:15
So if you've seen HTML before you'll know that
- 23:15 - 23:18
HTML tags fit inside angled brackets.
- 23:18 - 23:20
So it starts here, it then says
- 23:20 - 23:22
the color that I should color it is
- 23:23 - 23:25
this thing that takes in the PlayerColor,
- 23:25 - 23:27
so that will result in a color reference.
- 23:27 - 23:29
Now we finish that tag
- 23:29 - 23:31
and the piece of text its actually coloring
- 23:31 - 23:33
is the word Player.
- 23:33 - 23:35
Then we put in a space.
- 23:35 - 23:37
Then we put in the PlayerNumber
- 23:39 - 23:41
And then we finish coloring.
- 23:41 - 23:43
So all of that stuff is saying
- 23:43 - 23:45
'Player 1 in particular color
- 23:45 - 23:47
by using this rich text'.
- 23:48 - 23:50
It looks a bit lengthy but once you piece
- 23:50 - 23:52
through it it makes sense.
- 23:52 - 23:54
And like I said, have a look at the completed scripts,
- 23:54 - 23:56
it will have all the comments in there to tell you
- 23:56 - 23:57
exactly what we're doing.
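To make that concrete, here's a sketch of that rich-text line — the `m_`-prefixed names follow the tutorial's TankManager script, but check the completed scripts for the exact version:

```csharp
// Build "PLAYER 1" (etc.) wrapped in a rich-text color tag so the UI text
// appears in the player's own color. ColorUtility.ToHtmlStringRGB turns a
// Color into a hex string like "2964B2" that the <color=#...> tag understands.
m_ColoredPlayerText =
    "<color=#" + ColorUtility.ToHtmlStringRGB(m_PlayerColor) + ">PLAYER "
    + m_PlayerNumber + "</color>";
```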
- 23:57 - 24:00
So the next line is getting an array of mesh renderers.
- 24:00 - 24:02
So mesh renderers are the things
- 24:02 - 24:04
that actually show up your
- 24:04 - 24:06
3D objects in the scene.
- 24:06 - 24:08
So the tank is made up of
- 24:08 - 24:10
a series of mesh renderers, like one for the
- 24:10 - 24:14
tracks, one for the chassis, one for the turret, etcetera.
- 24:14 - 24:16
I'm just going to show that real quick.
- 24:16 - 24:18
So I've just dragged in my prefab
- 24:18 - 24:20
just to demonstrate but you'll see there's this mesh renderer
- 24:20 - 24:23
component and if I toggle things on and off
- 24:23 - 24:25
they disappear because they're not being rendered any more.
- 24:26 - 24:28
That's just what a mesh renderer is doing.
- 24:28 - 24:30
Okay we're making an array of mesh renderers
- 24:30 - 24:34
called Renderers and we're setting that to
- 24:34 - 24:36
all of the components in Children
- 24:36 - 24:38
that are mesh renderers.
- 24:38 - 24:41
So the instance, get the components in the children
- 24:41 - 24:43
that are mesh renderers and return them.
- 24:44 - 24:46
And then what we do is we loop through all of
- 24:46 - 24:50
those renderers and set their material's color
- 24:51 - 24:52
to the player's color.
- 24:52 - 24:54
So all that's doing is it's finding all the mesh renderers,
- 24:54 - 24:56
getting the color and changing it to the player color
- 24:56 - 24:58
that we chose.
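As a rough sketch of that loop, assuming the tutorial's field names (`m_Instance`, `m_PlayerColor`):

```csharp
// Find every MeshRenderer underneath the spawned tank instance...
MeshRenderer[] renderers = m_Instance.GetComponentsInChildren<MeshRenderer>();

// ...and set each one's material color to the color chosen for this player.
for (int i = 0; i < renderers.Length; i++)
{
    renderers[i].material.color = m_PlayerColor;
}
```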
- 24:58 - 25:00
Then we've got just a few more public functions
- 25:00 - 25:02
that are going to be called by the Game Manager.
- 25:02 - 25:04
We've got DisableControl which
- 25:04 - 25:06
turns off the script and
- 25:06 - 25:08
turns off the canvas and we've got
- 25:08 - 25:10
EnableControl that turns on the script
- 25:10 - 25:12
and turns on the canvas.
- 25:13 - 25:14
Finally we've got Reset.
- 25:14 - 25:16
So what Reset does is
- 25:16 - 25:18
it sets the
- 25:18 - 25:20
instance back to its spawn point.
- 25:21 - 25:23
It turns it off and then
- 25:23 - 25:25
it turns it back on again and the reason it
- 25:25 - 25:27
does that is that all of
- 25:27 - 25:29
the tanks apart from the winner
- 25:29 - 25:31
will be off, but we need to
- 25:31 - 25:33
reset all of them so we need to turn
- 25:33 - 25:35
everything off first before
- 25:35 - 25:37
we can turn it back on again.
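Those three functions look roughly like this — a sketch with the tutorial's naming conventions, so compare against the completed TankManager script:

```csharp
// Stop the player driving or shooting, and hide the tank's UI.
public void DisableControl()
{
    m_Movement.enabled = false;
    m_Shooting.enabled = false;
    m_CanvasGameObject.SetActive(false);
}

// Allow driving and shooting again, and show the tank's UI.
public void EnableControl()
{
    m_Movement.enabled = true;
    m_Shooting.enabled = true;
    m_CanvasGameObject.SetActive(true);
}

// Put the tank back at its spawn point, then toggle it off and on
// so everything on it starts fresh for the new round.
public void Reset()
{
    m_Instance.transform.position = m_SpawnPoint.position;
    m_Instance.transform.rotation = m_SpawnPoint.rotation;

    m_Instance.SetActive(false);
    m_Instance.SetActive(true);
}
```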
- 25:37 - 25:39
Okay, so that's all there is to TankManager.
- 25:40 - 25:42
Let's see about the GameManager.
- 25:42 - 25:44
So just to jump back to this slide real quick.
- 25:44 - 25:46
As we've said the Game Manager is
- 25:46 - 25:48
using an array of TankManagers,
- 25:48 - 25:50
so when we look at the GameManager in a second
- 25:50 - 25:52
we're going to see that at the start
- 25:52 - 25:54
it's going to use the TankManagers to spawn those.
- 25:54 - 25:56
So those were the instances that you just saw
- 25:56 - 25:58
referenced inside TankManager.
- 25:58 - 26:00
And then we're going to get in to actually
- 26:00 - 26:02
looping through things.
- 26:03 - 26:05
So let's take a look at that.
- 26:05 - 26:07
So if you switch over to the GameManager script you should
- 26:07 - 26:10
see the stuff that we're looking at right now.
- 26:11 - 26:13
Either that or you can look at the screen.
- 26:13 - 26:15
So there are three blocks of comments in this one.
- 26:15 - 26:17
There's one at lines 19 and 21.
- 26:19 - 26:21
I'm just going to remove those.
- 26:22 - 26:28
There is one at line 66 to line 74.
- 26:30 - 26:32
And then there's a really long one
- 26:32 - 26:34
which is line 108
- 26:36 - 26:38
all the way down to 152.
- 26:41 - 26:43
Cool, so apologies, we had to comment those
- 26:43 - 26:45
out otherwise you would have seen some warnings
- 26:45 - 26:47
and queries in the console that
- 26:47 - 26:49
kind of make it awkward for us to teach you stuff,
- 26:50 - 26:52
so we just commented those out.
- 26:52 - 26:54
Okay, so we've already gone over the
- 26:54 - 26:57
public variables, I won't bore you by doing it again.
- 26:57 - 26:59
The private variables,
- 26:59 - 27:01
so we've got an integer which stores the current round number.
- 27:01 - 27:03
So obviously as it counts up then you get
- 27:03 - 27:05
to display what the current round number
- 27:05 - 27:07
is when it starts.
- 27:07 - 27:10
Then we've got these two WaitForSeconds things.
- 27:11 - 27:15
So in coroutines you can put
- 27:15 - 27:17
a delay in the course of your function.
- 27:17 - 27:19
What is a coroutine James?
- 27:19 - 27:21
I think we'll cover that shortly.
- 27:21 - 27:23
Good idea.
- 27:23 - 27:25
But basically these WaitForSeconds things
- 27:25 - 27:27
are what a coroutine looks for as a
- 27:27 - 27:29
delay so we've got a start delay
- 27:29 - 27:30
and an end delay.
- 27:30 - 27:33
We'll change those in to these WaitForSeconds
- 27:33 - 27:35
classes so we can use them.
- 27:35 - 27:38
So basically we have a little pause within a function,
- 27:38 - 27:40
but they need to be converted to a type that is called
- 27:40 - 27:42
WaitForSeconds and you'll see why shortly when
- 27:42 - 27:44
we explain coroutines.
- 27:44 - 27:46
Okay then the last two things we've got,
- 27:46 - 27:48
two instances of TankManagers,
- 27:48 - 27:50
so they're referring to specific tanks
- 27:50 - 27:52
that are the RoundWinner and the GameWinner.
- 27:52 - 27:54
And we'll use those for the
- 27:54 - 27:56
message at the end of each round.
- 27:58 - 28:00
So next we've got this start function,
- 28:00 - 28:02
which is setting up those WaitForSeconds.
- 28:02 - 28:04
So we've got a StartDelay and an EndDelay,
- 28:04 - 28:06
which are just numbers in seconds
- 28:06 - 28:08
and then we're creating new WaitForSeconds
- 28:08 - 28:10
for the start and the end.
- 28:11 - 28:13
Then we've got SpawnAllTanks function
- 28:13 - 28:15
and SetCameraTargets function.
- 28:15 - 28:17
We'll cover those now and then come back
- 28:17 - 28:19
that StartCoroutine thing at the end.
- 28:20 - 28:22
So SpawnAllTanks.
- 28:22 - 28:24
All that's doing is it's going to loop through
- 28:24 - 28:26
the TankManagers.
- 28:26 - 28:28
For each TankManager it's going to
- 28:28 - 28:31
set the instance belonging to that TankManager
- 28:32 - 28:34
to an instantiated prefab,
- 28:34 - 28:37
so we're calling Instantiate(m_TankPrefab,
- 28:37 - 28:41
and then we're spawning it at the spawn point of the tanks.
- 28:41 - 28:43
With the same rotation.
- 28:43 - 28:45
So this may look confusing but it's basically a long
- 28:45 - 28:47
line that we've moved on to a new
- 28:47 - 28:49
line because it doesn't make any difference and it's easier
- 28:49 - 28:50
to fit on a projector.
- 28:50 - 28:52
So we're just instantiating
- 28:52 - 28:55
a tank per instance as required.
- 28:56 - 28:58
Okay, so now we've created the tank
- 28:58 - 29:00
we want to set it's player number,
- 29:00 - 29:02
and since this loop is going from 0
- 29:02 - 29:04
upwards that doesn't really
- 29:04 - 29:06
work as nicely as starting from
- 29:06 - 29:10
1 so we're just saying that the PlayerNumber is i + 1,
- 29:10 - 29:12
so it starts at 0 so
- 29:12 - 29:14
the PlayerNumber would start at 1,
- 29:14 - 29:15
and so on.
- 29:15 - 29:17
Otherwise when you start the game you'd get
- 29:17 - 29:19
'Player 0,
- 29:19 - 29:21
Player 1', you don't want that.
- 29:21 - 29:23
so you need to just add 1
- 29:23 - 29:25
so it's a minimum of Player 1, Player 2 and so on.
- 29:26 - 29:28
And then finally for each tank we're
- 29:28 - 29:30
going to call that Setup function which is the first
- 29:30 - 29:32
function that we covered in TankManagers.
- 29:33 - 29:35
Just a quick reminder,
- 29:35 - 29:36
TankManagers - Setup.
- 29:36 - 29:38
So it's the thing that's in charge of
- 29:38 - 29:42
movement, shooting and coloring and whatnot.
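Putting all that together, SpawnAllTanks is roughly this — a sketch assuming the GameManager's `m_Tanks` array of TankManagers, as described:

```csharp
private void SpawnAllTanks()
{
    // Loop through every TankManager we set up in the inspector.
    for (int i = 0; i < m_Tanks.Length; i++)
    {
        // Create the tank at this manager's spawn point, facing the same way.
        m_Tanks[i].m_Instance = Instantiate(m_TankPrefab,
            m_Tanks[i].m_SpawnPoint.position,
            m_Tanks[i].m_SpawnPoint.rotation) as GameObject;

        // Arrays start at 0 but players start at 1, so add 1.
        m_Tanks[i].m_PlayerNumber = i + 1;

        // Let the TankManager wire up movement, shooting and coloring.
        m_Tanks[i].Setup();
    }
}
```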
- 29:45 - 29:48
SetCameraTargets, we're creating an array of
- 29:48 - 29:50
transforms called Targets
- 29:50 - 29:52
and we're setting it to be the same length as
- 29:52 - 29:54
the TankManagers array.
- 29:54 - 29:57
Then we're going to loop through this Targets array
- 29:58 - 30:01
and we're going to set
- 30:01 - 30:03
each target equal to
- 30:04 - 30:08
the TankManager's instance's transform.
- 30:08 - 30:10
So each tank's instance
- 30:10 - 30:11
that transform.
- 30:12 - 30:14
And then finally we can set the
- 30:14 - 30:16
CameraControl's targets to
- 30:16 - 30:18
this Targets array that we just created.
- 30:19 - 30:21
so we're just looking at the TankManager
- 30:21 - 30:23
and saying 'here's all the positions
- 30:23 - 30:25
of the tanks you've just created,
- 30:25 - 30:27
and we'll just assign them to the CameraControl script'.
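In code that's roughly the following — again a sketch with the tutorial's field names, so check the completed GameManager:

```csharp
private void SetCameraTargets()
{
    // One target per tank.
    Transform[] targets = new Transform[m_Tanks.Length];

    // Each target is the transform of that tank's spawned instance.
    for (int i = 0; i < targets.Length; i++)
    {
        targets[i] = m_Tanks[i].m_Instance.transform;
    }

    // Hand the whole set of positions over to the camera rig.
    m_CameraControl.m_Targets = targets;
}
```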
- 30:28 - 30:30
Okay, so the last thing that
- 30:30 - 30:32
the Start function did was
- 30:33 - 30:35
it did something called Start Coroutine
- 30:36 - 30:38
and then it had a function within
- 30:38 - 30:40
its parameters, which is a bit weird.
- 30:42 - 30:44
So let's find out what that's all about.
- 30:45 - 30:47
So just to jump back to the slides,
- 30:47 - 30:49
so we've just been talking about Start
- 30:49 - 30:51
and what actually happens there, and we say that
- 30:51 - 30:54
it starts the GameLoop to continue.
- 30:54 - 30:56
So it's starting the GameLoop coroutine.
- 30:56 - 30:58
But we haven't talked about these yet, we've talked about functions
- 30:58 - 31:00
and how you can use those.
- 31:00 - 31:02
We need to talk about coroutines.
- 31:02 - 31:04
So the GameLoop
- 31:04 - 31:06
is going to start the round.
- 31:07 - 31:09
Obviously you're going to start the round, people are going
- 31:09 - 31:11
to start playing, driving around.
- 31:11 - 31:13
It's then going to wait.
- 31:13 - 31:15
And then they're going to be playing, so this is when
- 31:15 - 31:17
they're actually firing, shooting each other, running around.
- 31:18 - 31:20
Then it's going to wait.
- 31:20 - 31:22
And then the round is going to end.
- 31:23 - 31:25
Ordinarily in functions, when you run a function
- 31:25 - 31:27
all the commands just go straight through
- 31:27 - 31:29
and something happens or you'll loop through
- 31:29 - 31:31
something but it's instantaneous.
- 31:31 - 31:33
But often in programming we need to
- 31:33 - 31:36
kind of pause and do what we call yield
- 31:36 - 31:38
and wait for a certain number of seconds or
- 31:38 - 31:40
wait for a certain condition to be valid.
- 31:40 - 31:42
And that's where coroutines come in.
- 31:43 - 31:45
So an ordinary function might look like
- 31:45 - 31:48
this - void, a return type of nothing, and then
- 31:48 - 31:51
the name of the function ()
- 31:51 - 31:54
and then some commands within that.
- 31:55 - 31:57
Whereas a coroutine
- 31:59 - 32:01
has, first off you'll notice a different
- 32:01 - 32:03
return type, instead of void
- 32:03 - 32:06
we have this thing called IEnumerator.
- 32:06 - 32:08
As if the word coroutine wasn't weird enough
- 32:08 - 32:11
you now have something else to worry about,
- 32:11 - 32:13
IEnumerator, but don't be afraid of that,
- 32:13 - 32:15
it's just the return type.
- 32:15 - 32:17
And you'll also notice within this we have the
- 32:17 - 32:19
word yield in the middle of it.
- 32:23 - 32:26
What we can do is start some basic
- 32:26 - 32:28
commands or anything that we want to happen
- 32:28 - 32:30
straight away like any old function
- 32:30 - 32:32
but then we can stop at this point called yield.
- 32:33 - 32:35
At the point where you come in to contact
- 32:35 - 32:37
with yield what happens is
- 32:37 - 32:39
execution exits that function.
- 32:40 - 32:42
It goes away.
- 32:42 - 32:44
Then it waits for a certain period of time
- 32:44 - 32:47
or some condition to be true.
- 32:47 - 32:49
And then after that period of time
- 32:49 - 32:51
or that condition it comes back
- 32:51 - 32:53
in again at the same point that it yielded.
- 32:54 - 32:56
And then it will continue on with the coroutine.
- 32:57 - 32:59
So instead of just going straight through
- 32:59 - 33:01
and doing everything instantly
- 33:01 - 33:03
you put in a little pause in the middle of your
- 33:03 - 33:05
function, which could be very useful.
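So side by side, an ordinary function versus a coroutine might look like this — `Launch` and `Reload` are hypothetical names, just to show the shape:

```csharp
// An ordinary function: every line runs instantly, top to bottom.
private void Fire()
{
    Launch();
    Reload();
}

// A coroutine: note the IEnumerator return type and the yield in the middle.
private IEnumerator FireRoutine()
{
    Launch();                              // runs immediately, like any function
    yield return new WaitForSeconds(1f);   // execution leaves here for 1 second
    Reload();                              // then comes back in and carries on
}
```

You'd kick the second one off with `StartCoroutine(FireRoutine());` rather than calling it directly.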
- 33:06 - 33:08
And it becomes really useful when you
- 33:08 - 33:10
start putting things like While Loops in
- 33:12 - 33:14
So what you can do
- 33:14 - 33:16
is have a while loop like this
- 33:16 - 33:18
with a yield instruction in the middle.
- 33:18 - 33:20
So if you haven't heard of while loops,
- 33:20 - 33:22
it's kind of like if you think of an if statement of
- 33:22 - 33:24
checking a condition a while loop is just there
- 33:24 - 33:26
to say 'whilst this thing is still true,
- 33:26 - 33:28
then we're going to do what's in those brackets'.
- 33:28 - 33:31
Okay, so in this coroutine
- 33:31 - 33:33
the first part of the function would start normally,
- 33:34 - 33:35
then it would hit the while loop,
- 33:35 - 33:38
so let's suppose that the condition is true.
- 33:38 - 33:40
It goes into the while loop and hits that yield.
- 33:41 - 33:43
Then it leaves again,
- 33:43 - 33:45
waits for some period of time,
- 33:45 - 33:47
and then comes back in to the yield
- 33:47 - 33:49
and continues with that while loop.
- 33:49 - 33:52
Whilst within a loop we've exited
- 33:52 - 33:54
and then come back in again after a period of time.
- 33:55 - 33:58
So supposing that that condition is still true,
- 33:58 - 34:01
we'll then go back in to the same while loop.
- 34:01 - 34:03
So let's say for example,
- 34:03 - 34:05
this might come up, we say that
- 34:05 - 34:07
the condition for that while loop is while there is
- 34:07 - 34:09
more than 1 tank left
- 34:09 - 34:11
keep doing this loop.
- 34:11 - 34:13
And then let's say that the yield was
- 34:13 - 34:15
come back next frame.
- 34:16 - 34:18
So then what we'd have is a function that wouldn't
- 34:18 - 34:20
finish until there was
- 34:20 - 34:22
only 1 tank left.
- 34:23 - 34:25
So you'd say while there
- 34:25 - 34:27
is not 1 tank left
- 34:28 - 34:30
do this, come back next frame.
- 34:30 - 34:32
Oh there's still not 1 tank left, come back next frame.
- 34:32 - 34:34
Oh there's still not 1 tank left, come back next frame.
- 34:35 - 34:37
Until there was, and then the function would finish.
- 34:37 - 34:39
Could be quite useful.
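That tank example as a sketch — `OneTankLeft` is a hypothetical helper here, standing in for whatever condition check you'd actually write:

```csharp
// A coroutine that doesn't finish until only one tank remains.
private IEnumerator WaitForOneTankLeft()
{
    // While there's still more than one tank...
    while (!OneTankLeft())
    {
        // ...leave, come back next frame, and test the condition again.
        yield return null;
    }
    // Only now does the function finish.
}
```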
- 34:39 - 34:41
You remember the start function calls
- 34:41 - 34:43
StartCoroutine?
- 34:43 - 34:46
The coroutine that it's starting is the GameLoop.
- 34:46 - 34:48
So let's have a quick look at the GameLoop.
- 34:48 - 34:50
To see what's going on there.
- 34:50 - 34:53
GameLoop is saying
- 34:53 - 34:56
yield return StartCoroutine(RoundStarting());
- 34:57 - 34:59
So do you remember when it says yield it
- 34:59 - 35:01
waits for whatever it's got to the right
- 35:01 - 35:03
of it to finish before it continues on.
- 35:04 - 35:07
So when it's saying yield return StartCoroutine
- 35:07 - 35:09
it's waiting for that coroutine to finish
- 35:09 - 35:11
before it goes on to the next one.
- 35:11 - 35:13
So what's going to happen here is it's going to do
- 35:13 - 35:16
RoundStarting, wait for RoundStarting to finish,
- 35:16 - 35:19
then come back in, then it's going to do RoundPlaying,
- 35:19 - 35:22
wait for RoundPlaying to finish, then it's going to come back
- 35:22 - 35:24
etcetera for RoundEnding.
- 35:24 - 35:26
Finally we're going to check if there is a GameWinner
- 35:26 - 35:29
so if GameWinner is not equal to null.
- 35:29 - 35:32
So if there is a GameWinner then load
- 35:32 - 35:34
the current level, Application.LoadLevel
- 35:34 - 35:37
loadedLevel, that's just reloading this current level.
- 35:38 - 35:40
If there isn't a GameWinner then it's going to call StartCoroutine
- 35:40 - 35:42
and notice that there is no
- 35:42 - 35:44
yield return on that bit.
- 35:44 - 35:46
So it's not going to wait and then come back
- 35:46 - 35:48
in we're just going to call that then the
- 35:48 - 35:51
function will finish and there will be another GameLoop running.
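So the whole GameLoop is roughly this shape — a sketch based on what's just been described, using the old `Application.LoadLevel` API the tutorial mentions:

```csharp
private IEnumerator GameLoop()
{
    // Each yield return StartCoroutine waits for that phase to finish
    // before moving on to the next one.
    yield return StartCoroutine(RoundStarting());
    yield return StartCoroutine(RoundPlaying());
    yield return StartCoroutine(RoundEnding());

    if (m_GameWinner != null)
    {
        // Someone has won the game: reload the current level to restart.
        Application.LoadLevel(Application.loadedLevel);
    }
    else
    {
        // No yield here: fire off the next loop and let this function finish.
        StartCoroutine(GameLoop());
    }
}
```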
- 35:52 - 35:54
So now we're getting to the point where we're looking
- 35:54 - 35:57
at how the GameManager and TankManagers interplay.
- 35:57 - 36:00
So this slide is going to kind of populate over time
- 36:00 - 36:02
but the way that we're going to do this is we're going to
- 36:02 - 36:04
animate in each point, we're going to talk about it and then
- 36:04 - 36:06
we're going to fill in that part of the script, so hopefully
- 36:06 - 36:08
we can kind of step you guys through it and it
- 36:08 - 36:09
will make sense.
- 36:09 - 36:12
So we've just talked about the GameLoop.
- 36:13 - 36:16
So the GameLoop starts off with
- 36:16 - 36:18
the RoundStarting.
- 36:19 - 36:21
So the first thing that needs to happen
- 36:21 - 36:23
in the RoundStarting is we need to reset
- 36:23 - 36:25
all the tanks, so we need to
- 36:25 - 36:27
set them up at their positions on the spawn points,
- 36:27 - 36:29
we need to enable their controls
- 36:29 - 36:31
and all these kind of things.
- 36:31 - 36:33
So the way that we do that is to parse to the
- 36:33 - 36:35
TankManager, 'hey, can you reset everything
- 36:35 - 36:37
and reposition everything?' and that's what the
- 36:37 - 36:40
TankManager does, so every tank remember has a
- 36:40 - 36:43
TankManager assigned to it and we do that.
- 36:43 - 36:46
We start off our GameLoop, we reset the tanks,
- 36:46 - 36:48
and the TankManager's in charge of resetting
- 36:48 - 36:51
things, so deactivating and reactivating
- 36:51 - 36:53
and setting the positions back where they should be.
- 36:53 - 36:55
And we start off by disabling
- 36:55 - 36:57
all the tank controls.
- 36:57 - 36:59
And we have in the TankManager DisableControl()
- 36:59 - 37:01
so we can't move,
- 37:01 - 37:03
shoot and the UI is off.
- 37:03 - 37:05
When the game starts you'll see Round 1
- 37:05 - 37:07
and the tanks are there but they don't have
- 37:07 - 37:09
their Health UI just yet.
- 37:10 - 37:12
Then we do three more things
- 37:12 - 37:14
to start the round, we need to setup the camera position
- 37:14 - 37:16
and the size, so in other words
- 37:16 - 37:18
we take an average of the two tank's positions
- 37:18 - 37:20
and we setup the zoom using the size.
- 37:21 - 37:23
We increment the round number so it will,
- 37:23 - 37:25
if it's 0 we make it Round 1.
- 37:25 - 37:27
And then we setup the Message UI which will
- 37:27 - 37:30
say either Round 1 or whatever is appropriate.
- 37:30 - 37:32
So let's go and actually look at
- 37:32 - 37:34
that in the script now.
- 37:36 - 37:38
So currently RoundStarting just
- 37:38 - 37:40
has this yield return m_StartWait
- 37:40 - 37:42
so all it's going to do is wait for
- 37:42 - 37:45
3 seconds and then do nothing else.
- 37:46 - 37:48
So what we want to do is put in a
- 37:48 - 37:50
little bit of code before that.
- 37:50 - 37:53
First off we said we wanted to reset all of the tanks.
- 37:53 - 37:55
So before the yield return
- 37:56 - 37:58
put a couple of lines so you've got some space and
- 37:58 - 38:02
we've got a function called ResetAllTanks.
- 38:03 - 38:05
So put a call to that.
- 38:06 - 38:09
Next we'll put DisableTankControl
- 38:09 - 38:11
because we don't want people to be able to
- 38:11 - 38:15
control their tanks while the round is actually starting.
- 38:15 - 38:17
We want to wait for the RoundPlaying for them to be
- 38:17 - 38:19
able to control everything.
- 38:19 - 38:21
Then you remember we wanted to make sure that
- 38:21 - 38:23
the camera is set to that
- 38:23 - 38:25
exact size and position.
- 38:25 - 38:27
We don't want it to smoothly transition
- 38:27 - 38:29
to that, we want to set it.
- 38:29 - 38:32
So on the camera control,
- 38:32 - 38:40
so m_CameraControl.SetStartPositionAndSize()
- 38:41 - 38:43
So all that's doing is on the camera
- 38:43 - 38:45
control calling the function.
- 38:46 - 38:48
Next we want to increment the round number
- 38:48 - 38:50
because a new round has just started.
- 38:50 - 38:53
So that's m_RoundNumber++;
- 38:53 - 38:56
All that does is just add 1 to that round number.
- 38:57 - 38:59
And now we've got the incremented round number
- 38:59 - 39:01
we can set the message text
- 39:02 - 39:04
to be something appropriate like
- 39:04 - 39:05
Round and then the round number.
- 39:06 - 39:12
So m_MessageText.text because it's the text part of the text
- 39:12 - 39:15
component, well named isn't it?
- 39:18 - 39:22
And we're going to set that to Round + m_RoundNumber
- 39:22 - 39:24
Now it's very crucial here that you put
- 39:24 - 39:27
Round and then a space inside your string,
- 39:27 - 39:29
otherwise it will say, it will just say
- 39:29 - 39:31
Round1 like that on the screen, which you don't want.
- 39:31 - 39:33
So make sure you put a space in the string
- 39:33 - 39:35
and then we're adding on the actual number
- 39:35 - 39:37
on to the end of that, and it will
- 39:37 - 39:39
pass it straight in to the UI text.
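Assembled, RoundStarting should end up looking something like this — compare against your own script as you type it in:

```csharp
private IEnumerator RoundStarting()
{
    // Put every tank back at its spawn point and stop the players moving.
    ResetAllTanks();
    DisableTankControl();

    // Snap the camera straight to its starting position and zoom.
    m_CameraControl.SetStartPositionAndSize();

    // New round: bump the number and show it (note the space after ROUND).
    m_RoundNumber++;
    m_MessageText.text = "ROUND " + m_RoundNumber;

    // Pause for the start delay before play begins.
    yield return m_StartWait;
}
```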
- 39:40 - 39:42
Okay, so let's save the script
- 39:42 - 39:44
there and give that a test, see how that all works.
- 39:45 - 39:48
Okay, so I'm going to save my script, switch back to Unity, and I'm going to press Play.

Okay, so this isn't working as well as I thought it might. Interestingly, it's just counting up the rounds. Why would that be, James?

Probably we need to actually enable the control in the RoundPlaying bit, that'll probably do it.

Okay, so what we've done so far is to say 'we want to start the round', and all that RoundStarting knows how to do is set up the UI, set up the tanks, and then just be like 'okay, well I'm going to wait, oh hey, there's a new round, I'll wait now, oh hey, there's a new round'. It doesn't actually let us play the game, because we didn't do that bit yet, so I'm going to stop play and jump back into my code.
Okay, so in RoundPlaying we've got this yield return null there. What yield return null means is 'come back the next frame', that's all it's doing there.

So just to jump back to the slide. As we've said before, we've done RoundStarting, and those are the things that happen; the TankManager is taking care of the resetting and disabling of control, so that's all good. But RoundPlaying, what do we do in RoundPlaying? Well, we need to enable the tank control; when we played just now we noticed you couldn't drive or shoot. So EnableControl is in the TankManager: it can move, it can shoot, and it sets up the UI. And the other thing we need to do is empty that message. As soon as we start playing we don't want to see the words Round 1 on the screen, we want to scrub that out. So we don't disable the UI or do anything else, we just empty the message string. And then, as James was saying earlier with the coroutine, we just keep waiting until there's one tank left. Let's see how we write that.
So at the start, as it said on the slide, we want to enable tank control so you can actually play the game. We've got a function that does that, and all this function does is loop through all the tanks and call their EnableControl, so nothing really to it.

Next we want to empty that message text, so m_MessageText.text = string.Empty, which is just a blank string. Naturally you could also just type an empty pair of quotes if you wanted to, but we're not doing it that way, we're going to go with string.Empty.

Okay, so it's looking good so far, but right now all it's going to do is enable the tank control, blank the string, wait one frame, and then go on to the round ending. We don't want that; what we want is to use that while loop that we saw earlier. So for a while loop it's while( and then the condition within the ). We've got a function called OneTankLeft which returns true if there is one tank left, or less.

Yeah, there is a condition where it can be a draw, you'll see.

Yeah. So this will keep doing whatever is within the brackets as long as there is not one tank left. And the thing that needs to go in there is our yield return null. What I should be doing with my while loop is placing it around yield return null. Like that. Adding lots of lines for no reason. There we go. So until that's true it's just going to keep going away one frame at a time and waiting for it to become true.

Okay, we'll save that, switch back to Unity, and I'm going to hit Play and see what happens.
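The RoundPlaying coroutine just assembled therefore has this shape (a sketch based on the narration):

```csharp
private IEnumerator RoundPlaying ()
{
    // As soon as the round begins, let the players control their tanks.
    EnableTankControl ();

    // Clear the "ROUND N" text from the screen.
    m_MessageText.text = string.Empty;

    // While there is more than one tank still active...
    while (!OneTankLeft ())
    {
        // ...come back and check again on the next frame.
        yield return null;
    }
}
```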
Okay, I've just muted my audio for the sake of everyone. So, I knew it, I can drive around. That's pretty cool, I can shoot stuff.

Okay, so this is maybe a point to, you know, hook up with a neighbour and start shooting at each other. But what you'll also notice is that, while that's great and now I can move around, I've got no idea who won that round or what happened. It'll just keep infinitely adding new rounds and we'll keep playing an endless war, which I think we all know is wrong.

So we need to have a RoundEnding; we need something to happen, we need some logic to say 'what's going to happen when one tank kills the other tank?' So back to our slides.
RoundEnding is our next coroutine within the overall loop. The first thing we need to do is disable the controls. It's kind of great to be driving around and showboating once you've won the game, but it's a little bit jazzy for my tastes. So really we're going to disable the tank controls; you're going to stop dead, the camera is going to focus in on where you are, and it puts a nice pause in the game as well.

Then we're going to clear the existing round winner: if we've already had a round played we want to clear out that winner and decide who's just won this round. We want to check to see if any of the round winners have won 5 rounds and are therefore the game winner. And then we want to put that into the message UI and say 'okay, well first off this person just won the round, this tank has this many kills, this tank has this many kills'. Or we're going to say 'hey, this person won the entire game'.

So let's have a look at how that works.
So back in our code, the next coroutine; remember, IEnumerator means coroutine. We've currently got a yield that's got the EndWait, so before we wait we need to do a bunch of stuff. Like we said, we're going to disable tank control, so we'll call that DisableTankControl function.

Next, we said we need to clear out our RoundWinner before we can check for another one. So what we're going to do is set m_RoundWinner to equal null. That's just saying 'for this round we don't know who's won yet, we'll need to check'. The RoundWinner, remember, is a TankManager, so it's referring to a particular tank; same for the GameWinner, it's referring to a particular tank via its TankManager.

So now that we're not sure if there's a RoundWinner, we can check: we say m_RoundWinner = GetRoundWinner(). What that's going to do is assume that there are one or fewer tanks left, then go through all of them until it finds one that's active, and return it.

Shall we have a look at that function?

Yeah, we can have a quick look at that. If you scroll down (it's around line 130 for me, but it might be a bit different for you) you'll see GetRoundWinner. It's going to go through all of the tanks, so it's looping over everyone in the array, i.e. the length of the array. And if it finds a TankManager whose instance is active (so its activeSelf is true) it's going to return that tank. If it gets all the way through the array and hasn't found anything that's active, then it's going to return null, and that means there's been a draw.

And that happens if both tanks manage to blow themselves up at the same time. It is possible, I've seen it, maybe once.
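The GetRoundWinner function being described can be sketched as follows; m_Tanks is assumed to be the array of TankManagers, each with an m_Instance GameObject:

```csharp
private TankManager GetRoundWinner ()
{
    // Look through all the tanks...
    for (int i = 0; i < m_Tanks.Length; i++)
    {
        // ...and if one of them is still active, it is the winner.
        if (m_Tanks[i].m_Instance.activeSelf)
            return m_Tanks[i];
    }

    // No tank is active: both blew up at once, so the round is a draw.
    return null;
}
```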
Okay, so now after we've found our RoundWinner we're going to check whether it is null or not. So: if that RoundWinner is not equal to null, using != for 'not equal to', then what we can do is add to that round winner's number of wins. So m_RoundWinner.m_Wins++, just to increment it once. The RoundWinner, remember, links to the TankManager for that tank, so it says 'hey, this tank, the number of wins that tank's got, let's add 1 to it'.

Okay, next: after we've added 1 to somebody's number of rounds won, they might have won the entire game. So now we can check if there is a GameWinner: m_GameWinner = GetGameWinner().

Yeah, let's just take a look at GetGameWinner. Yet again it's very similar to GetRoundWinner, but instead of checking whether it's found something that's active, we're saying 'hey, for this particular tank in the array, is its number of wins equal to the number of rounds required to win the entire game? If so, return that particular tank that's got the right number of wins. If not, return null'.
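GetGameWinner follows the same pattern, comparing each tank's win count against the rounds needed to win the game; m_NumRoundsToWin is the assumed name for that setting:

```csharp
private TankManager GetGameWinner ()
{
    // Look through all the tanks...
    for (int i = 0; i < m_Tanks.Length; i++)
    {
        // ...and if one has won enough rounds, it has won the game.
        if (m_Tanks[i].m_Wins == m_NumRoundsToWin)
            return m_Tanks[i];
    }

    // Nobody has enough wins yet.
    return null;
}
```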
Once we've checked for a GameWinner we want to get a message based on whether there is no RoundWinner, no GameWinner, whether there is a GameWinner, etcetera. We've got a function for doing that, and what we're doing is creating a string called message and setting it equal to the return value of EndMessage. Then, once we've got that message, we set the MessageText's text to that message. If we wanted to do anything else with the message in between, we could do that here; we're not actually going to do anything, so we're just setting the text straight away.
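So the RoundEnding coroutine as described runs like this (a sketch; m_EndWait is assumed to be the configured end-of-round delay):

```csharp
private IEnumerator RoundEnding ()
{
    // Stop tanks from moving; no showboating once the round is over.
    DisableTankControl ();

    // Clear the winner from the previous round...
    m_RoundWinner = null;

    // ...and see if there is a winner now the round is over.
    m_RoundWinner = GetRoundWinner ();

    // If there is a winner, increment their score.
    if (m_RoundWinner != null)
        m_RoundWinner.m_Wins++;

    // Now the scores have changed, see if someone has won the whole game.
    m_GameWinner = GetGameWinner ();

    // Build a message based on the scores and show it.
    string message = EndMessage ();
    m_MessageText.text = message;

    // Wait before handing control back to the game loop.
    yield return m_EndWait;
}
```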
So we're calling this EndMessage function. EndMessage is how we actually calculate what to do at this point, or rather what to write on the screen, so I'm going to scroll down and look at this very confusing looking set of strings and text.

Like before, we have this way of colouring text, and we had that function further up which uses rich text. We've got a bunch of different things it could do, so by default what we do is say 'okay, well let's just have a default condition', and that default condition is that there's a draw. When there's a draw we don't really need to increment anything at all, so we don't worry about it. So we're just going to put in DRAW! as our default text for EndMessage.

Then we decide if something different than a draw has happened. So we check 'hey, is RoundWinner not null?', i.e. is there a RoundWinner? If so, we're going to add something to the message. The message this time will be the RoundWinner with coloured text saying, for example, 'Player 1 WINS THE ROUND!'.

Then we have something added on to the message. Remember, message is our overall piece of text that we're going to put into that text field, so we just keep adding stuff to it in order to build up a paragraph of different things within the overall message. The message as a string then gets all of these \n added; \n just means go to a new line. So we're spacing out this text: it's 'this person won the round, move down a couple of lines, then start writing something else'.

Then, once we've moved down four new lines, we have a for loop. With that for loop we're going through all of the tanks that are available, and for each of them we're adding some information to the message. Quite simply, what we're doing here is adding on that particular tank's coloured player text, then a colon and a space (so it might be 'Player 1: ') and then the number of wins found for that tank. It looks at the TankManager and says 'how many wins have you got so far?', prints that as a number, then puts in a space and the word WINS. Then it puts in a new line and does it again for the next tank in the loop. So you will have 'Player 1: this many wins', 'Player 2:', and so on.
Then what we're doing after that is saying 'hey, is there a GameWinner?' Because if there's a GameWinner we don't want any of the stuff above; we don't want to actually write all of that in. So instead of saying 'message +=' we're just clearing it out by saying '=': we're saying the message is going to be exactly this thing. In that instance we take the GameWinner, colour it with the right text, and we add ' WINS THE GAME!'. So it might be 'Player 1 WINS THE GAME!' or 'Player 2 WINS THE GAME!'; that's the only message you'll see on the screen. You won't see the number of other rounds that were won or anything like that; we clear it out and put that in. That's whoever won the game.

Then we return that message, so whenever we call EndMessage we get back that piece of text depending on what's just happened in the round. So when we're using EndMessage back up in RoundEnding, that's what we're doing. And finally that piece of text gets assigned to the Text component's text field: message, there.
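The EndMessage function walked through above can be sketched like this; m_ColoredPlayerText is the assumed name for each tank's rich-text-coloured player label:

```csharp
private string EndMessage ()
{
    // By default, when a round ends there is no winner, so it's a draw.
    string message = "DRAW!";

    // If there is a round winner, change the message to say so.
    if (m_RoundWinner != null)
        message = m_RoundWinner.m_ColoredPlayerText + " WINS THE ROUND!";

    // Space the headline out from the score list with blank lines.
    message += "\n\n\n\n";

    // Add each tank's score, one per line, e.g. "Player 1: 2 WINS".
    for (int i = 0; i < m_Tanks.Length; i++)
    {
        message += m_Tanks[i].m_ColoredPlayerText + ": " +
                   m_Tanks[i].m_Wins + " WINS\n";
    }

    // A game winner replaces the whole message rather than adding to it.
    if (m_GameWinner != null)
        message = m_GameWinner.m_ColoredPlayerText + " WINS THE GAME!";

    return message;
}
```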
And I'm going to just jump back onto my slide really quick. What we're going to do now is save the script, but we need you to pair up with someone and fight, because it's a tank game and that's how it works. So remember: you've got WASD for one player, and spacebar to fire; then you've got the arrow keys and return for the other player. So save your scene, grab a neighbour and give it a test.

Okay, so we're going to do the same.
TankManager
Code snippet
GameManager
Code snippet
Related tutorials
- Coroutines (Lesson)
- UI Text (Lesson)
https://unity3d.com/learn/tutorials/projects/tanks-tutorial/game-managers?playlist=20081
Item Description: Slurp entire files into variables
Review Synopsis:
During my daily ritual, I came across a new module called Slurp.
Slurp provides three functions: slurp, to_array and to_scalar. slurp is exported by default, the other two can be exported too.
Again, fake object orientation. You can use either slurp($filename) or Slurp->slurp($filename) to do exactly the same thing. I fail to see how Slurp->to_array is any better than Slurp::to_array. Both can be used.
This module handles files, so it should be in the File:: namespace. It is, however, just called Slurp. A much better name would be File::Slurp. But that module already exists.
File::Slurp provides exactly the same functionality, plus some extra handy functions. It has been around since 1996 and does its job well. Slurp has no advantage over File::Slurp. The author clearly didn't search CPAN, or wants to compete.
The only positive sound about this module you'll hear from me is: nice documentation! Every function is documented in detail.
Slurp does what it is supposed to do and is documented quite well. However, it should not be on CPAN, for two reasons: it is not in the correct namespace and another module already provides the functionality. This module adds nothing to CPAN but needless confusion for people in search of a file slurping module.
I think it should be removed from CPAN, or at least be renamed to something in File::.
On to the review ...
The question of object orientation is interesting - This module is quite obviously targeted at the beginner programmer, intended to provide a means of facilitating the simple import of data from files. With this in mind, I wanted to make the interface as simple and forgiving as possible, such that a beginner could issue either Slurp::to_array or Slurp->to_array without error, hence this "fake object orientation" - In this matter, the driving factor behind this syntax was functionality, or more precisely, simplicity. The code for this "fake object orientation" is taken from CGI.pm's self_or_default method (also taking note of the observations posted by demerphq in this thread here), which, right or wrong in concept, does provide a robust interface for the module, well suited for beginner programmers. With this same intended audience, this module is not intended to form any true object-oriented interface; its purpose is much simpler and more direct.
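The "fake object orientation" dispatch being described can be sketched like this (a hypothetical simplification of the CGI.pm self_or_default trick, not Slurp's actual source):

```perl
package Slurp;
use strict;
use warnings;

# Accept both Slurp::slurp($file) and Slurp->slurp($file):
# if the first argument is the package name, discard it.
sub slurp {
    shift if @_ && defined $_[0] && $_[0] eq __PACKAGE__;
    my ($file) = @_;
    open my $fh, '<', $file or die "Can't open $file: $!";
    local $/;            # slurp mode: read the whole file at once
    return <$fh>;
}

1;
```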
While I agree whole-heartedly with the sentiments of chromatic on the necessity of a module to encapsulate a one-liner, I feel we are talking about two separate groups in this instance - It is true that this code could be replaced in functionality within another application by a single line of code similar to the following:
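Presumably the one-liner meant here is the classic localized-$/ slurp, something like:

```perl
# Read an entire file into a scalar in one statement.
my $content = do {
    local $/;                                            # undef the input record separator
    open my $fh, '<', $filename or die "open $filename: $!";
    <$fh>;                                               # one read now returns everything
};
```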
The issue however has to do with the level of knowledge this assumes on the part of the programmer. For a beginner programmer, it is more likely that they will resort to a foreach- or while-loop to read a file into a memory, rather than this simple hack. It is to this audience that the Slurp module is intended to be of use - To simplify code and provide a direct means of obtaining such functionality.
File::Slurp provides exactly the same functionality, plus some extra handy functions. It has been around since 1996 and does its job well. Slurp has no advantage over File::Slurp. The author clearly didn't search CPAN, or wants to compete.
On one point, we do agree - A better name for this module would be File::Slurp, but such a module already exists. I did consider posting Slurp in the Acme:: namespace, but was unsure as to whether it fitted with the theme of existing modules in this namespace.
I would however object to the statement that I either failed to search CPAN or sought to compete with the existing File::Slurp module - I did review this module prior to writing and posting the Slurp module, following which I still felt that there was merit in the Slurp module. Indeed, upon review of the existing File::Slurp module, there were some aspects of this code which I felt warranted attention, but fell outside the scope of the intended focus for the Slurp module - I did not want to write another "(simple|portable) interface for manipulating files" (of which there are many on CPAN - this form of module and DBI wrappers notably comprise a large portion of the CPAN namespace), rather a simple and direct interface for slurping files into variables. Whether this is a valid argument may be questioned, but it was the motivation in this case.
I would also note that I did inform the maintainer of File::Slurp, David Muir Sharnoff (MUIR), of the upload of this module (immediately upon upload to CPAN) and the intended difference in focus - I have not as yet received a reply from David on this matter.
Thank you - Documentation is usually my weakest aspect in the development cycle.
I would also note that I did also put some effort into the short tests for this module - I believe that mandatory reading for all module maintainers are the excellent Introduction to Testing and Perl Testing Tutorial written by chromatic.
On this point, I disagree. Simply because another module provides similar functionality does not negate the worth of additional similar modules - If this were true, modules such as CGI::* and *::Session might also deserve to go through a cull process. It is true that the Slurp module has similar focus to the existing File::Slurp module - But its focus is much narrower and it differs in the method by which it implements its functionality, itself, I believe, an element of worth in a community obsessed with the concern of "there being more than one way to do it".
Furthermore, given that only two modules currently on CPAN incorporate the term "Slurp" in their name, File::Slurp and Slurp, the argument for confusion is weak.
In conclusion, I would like to thank you for your review of this module - This having been said, however, I do not believe that the general approach of harsh reviews, or of assumptions being made as to the actions or motives of module authors, serves the Perl community well. I do agree with you that there are general issues with regard to the CPAN namespace, and perhaps to the placement of this module in said namespace; however, from a functionality standpoint, I believe that this module still has merit and an audience which could benefit from its use. The wider issues with regard to the CPAN namespace, the moderated review of modules and the like certainly warrant consideration, but I do not feel that these module reviews serve as the best place for that consideration.
perl -le 'print+unpack("N",pack("B32","00000000000000000000001000001010"))'
The issue however has to do with the level of knowledge this assumes on the part of the programmer.
I'm still not convinced. Your argument here appears to be "this is appropriate as a module because a beginning programmer would not be savvy enough to write this module."
I don't think that's a good criterion. A beginning programmer wouldn't necessarily know about localizing $/ for sure, nor about wantarray, but would said programmer attempt to write something abstract enough to handle both cases?
I'm also not convinced a beginning programmer would know enough to care about the difference between a function or a static method call. To your credit, the synopsis doesn't expose this, so you've ameliorated that somewhat.
Another consideration is that to use your code, a beginning coder would have to know about the existence of the CPAN, look for something named "slurp", install the module, and realize that error messages coming from your module are the fault of their code (if the file cannot be read).
I find it more likely that a beginning Perl programmer would be able to write the appropriate code before he figures out how to install your module.
The justification which I see for this module is not that a beginning programmer could not write this module, but rather to provide a very simple interface for reading files for that programmer.
I see merit in the Slurp module in that a beginning programmer can read one or more files into memory with a simple and direct one line construct such as:
While it is entirely true that this one line could be easily written as:
Thereby alleviating the need for any additional module, I propose that it is relatively unlikely that a beginning programmer would be comfortable with such a construct - It is thus that I see merit in the Slurp module.
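The contrast being drawn is presumably between a module call and the raw idiom, something like:

```perl
# With the module: one self-explanatory line.
my @lines = Slurp::to_array($filename);

# Without it: the raw idiom the beginner would need to know instead.
my @raw_lines = do {
    open my $fh, '<', $filename or die "open $filename: $!";
    <$fh>;                     # list context returns every line of the file
};
```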
It is for this very reason, the level of knowledge which may be assumed, that some attempt has been made to permit both function and static method calls to be made without error - This is an attempt to follow the "principle of least surprise" with regard to function reference.
The greater issue, which you rightly highlight, is the accessibility of this module to its target audience, in that it does assume knowledge of the existence of CPAN and module installation - This is a concern and perhaps the greatest problem with any module targeted at a beginning audience.
perl -le 'print+unpack("N",pack("B32","00000000000000000000001000001100"))'
I thought about this for a while.
There are definitely good reasons for beginners to be exposed to the idiom in question and all of its facets. It is a noble idea to do something to increase awareness with whichever approach seems appropriate. However, I believe writing a module is ineffective in this case.
A beginner will benefit greatly from reading and understanding all the various details which comprise this idiom. However, that implies reading the code you used, not dropping a black box into one's code.
For the latter purpose, File::Slurp fits the bill perfectly.
For the former purpose, a detailed tutorial that explains the nooks and crannies seems to be the way to go. I don't think making it available as a module will achieve any increase in awareness - beginners more so than anyone else tend to not even take a cursory look into the guts of modules.
Note also that as discussed in the node Juerd referenced, the snippet as is is broken. And understanding why that is, actually, will benefit a beginner even more. There are dozens of lessons to be had in this snippet. But one has to read it and have it explained to learn them.
Makeshifts last the longest.
With this in mind, I wanted to make the interface as simple and forgiving as possible, such that a beginner could issue either Slurp::to_array or Slurp->to_array without error, hence this "fake object orientation"
Being fault tolerant is a bad way to teach newcomers.
When I was learning about packages and objects, at first I thought :: and -> were the same thing. After all, with many modules both worked. Curse those modules! Without them, it would not have taken me so long to comprehend the material.
In time, I used more and more modules. I liked -> because it was prettier. But some modules did not work with it. And others would not work with ::. If from the beginning there had been some indication about these two being different, I would have understood the difference much earlier.
We advise newbies to use strict and warnings. These pragmata decrease fault tolerance. They make the beginner think about what he does.
Your way of helping is not helping at all. Thank you for trying, but you're only making things harder. If you want to help Perl initiates understand Perl, document that -> does not work, and explain why. If you really want to be fault tolerant, then do not document -> at all. I don't believe fault tolerance really was your motivation: you would have used :: in the synopsis. Slurp's documentation actually promotes the use of -> where :: is much more appropriate. Please consider another way of helping.
The code for this "fake object orientation" is taken from the CGI.pm's self_or_default method
But CGI.pm really is object oriented. It provides a functional interface, where a object is automatically initialized. self_or_default always returns a CGI object, you only decide whether or not you're going to use $_[0].
CGI.pm provides a functional syntax for something that is object oriented. This is acceptable (I still don't like it).
Slurp.pm provides an object oriented syntax for something that doesn't use objects. There's no object. Not even a package global (or file scoped lexical) in which some data is stored. There is no object orientation, only its syntax.
I did review this module prior to writing and posting the Slurp module, following which I still felt that there was merit in the Slurp module.
Er, ehm. Right. Well. Er? Humm??
And why would anyone use Slurp instead of File::Slurp? And *if* there's anything that Slurp has that File::Slurp has not, wouldn't it be a good idea to patch File::Slurp?
rather a simple and direct interface for slurping files into variables.
That is exactly what File::Slurp provides. Together with that, it happens to provide functions that write files. Your module's functionality *completely* overlaps File::Slurp's.
I would also note that I did also put some effort into the short tests for this module
I haven't installed or tried Slurp.pm. Its source only reached my computer through the browser. That's why I didn't comment on your tests. Having tests is a good thing - I hope one day I too will know how to write good tests (for existing modules -- can anyone help me write tests for DBIx::Simple for example? I have no idea where to start. My only module with tests is Crypt::Caesar, but that was trivial.)
Simply because another module provides similar functionality does not negate the worth of additional similar modules
You are right. Having alternatives is a good thing. But not when two alternatives are EQUAL. You might not have noticed, but you reinvented exactly the same wheel that File::Slurp already provides. With only syntactical differences (your module tolerates false OO, and a list of files can be used instead of just one - nothing worth a new module, in my opinion).
You're now asking newbies to choose between two equivalent modules. Equivalent in slurping functionality, equivalent in quality. That's *hard*, especially if you don't have enough knowledge to base a decision on.
This having been said however, I do not believe that the general approach of harsh reviews or assumptions being made as to the actions or motives of module authors serves the Perl community well.
I do assume things. But both monks and the module authors can tell me I'm wrong if I am. That's why I send e-mail, that's why I post the reviews on PerlMonks. There's even the opportunity for anonymous lamers to flame without getting their own accounts downvoted.
however from a functionality standpoint, I believe that this module still has merit and an audience which could benefit from its use.
It would have been a great module if we didn't have File::Slurp already. Really, both your code and documentation are well written. But it doesn't add anything new to CPAN. Please at least think of a name that starts with File::. Maybe File::Slurp::Alternative, or File::OtherSlurp or something like that.
I question the need for a module that encapsulates a one-liner -- especially for a task as simple as this. That said, I do appreciate the @ARGV hack. That's pretty clever, even if it does pose error-handling problems.
That said, I do appreciate the @ARGV hack. That's pretty clever, even if it does pose error-handling problems.
I too appreciate the @ARGV hack. It handles errors by die()ing with a useful message. See also 204874 for lots of discussion about it.
- Yes, I reinvent wheels.
- Spam: Visit eurotraQ.
;)
I have to say I am surprised this one made it into the top level namespace. I don't mind having alternate solutions cropping up in CPAN, however.
Matt
I have to say I am surprised this one made it into the top level namespace.
Then you misunderstand how CPAN works. Once an author has a PAUSE id, she can upload whatever modules she wants. It's only entry into the module list that you need approval for - and I'm guessing that Slurp hasn't yet made it into there.
"The first rule of Perl club is you do not talk about
Perl club." -- Chip Salzenberg
perl -le 'print+unpack("N",pack("B32","00000000000000000000001000001011"))'
Having said that, I must admit I have not been tracking the modules discussion content lately. Perhaps I'm not the only one.
(on further reflection, perhaps my misunderstanding, if any, pertains to the level of attention paid to unregistered name spaces by the maintainers)
Since when is competition a bad thing? The more choices available, the better. Let people choose which module they want. Should Burger King be eliminated because it "duplicates the functionality" of McDonalds? Maybe OpenOffice should be shut down because it duplicates MS Word's functionality, and we wouldn't want to confuse people, would we?
Juerd, why stop there?
Don't worry, I'll write some more :)
Why not take all the modules that you don't like and flame each author in turn.
Good idea, but that would be a lot of work. Please note that I did not flame in this review.
You could probably do, say, one a day.
No, I've got other things to do as well, you know.
Which author are you planning to flame in tomorrow's post?
I'm reviewing modules, not authors. Handy::Dandy was so full of (censored) that I couldn't restrain myself from flaming the author personally. I regret my actions and am moving on; I hope you will too. I don't know which module will be next. The author of Attribute::Default asked me to review that module, so you can expect a review soon. It's a rather large module, and I need to study up on attributes before I *can* review it, so it might take some time.
Says a lot about you.
"Follow me" said the blind man to the deaf one.
Claw marks on power
Tonight's debate between Senator Joe Lieberman and challenger Ned Lamont shows the older senator cool in the tactics of television debate. There are moments where Lamont clearly is, well, not a politician, in handling himself on the podium. There are wobbles, holds, and moments where he has to think, rather than simply regurgitate.
But that scraping noise you hear in the background is the claws of Joe Lieberman scraping on power as he falls ever farther down the well. He insulted his opponent, spilled out excuses for a record in power, and acted in that kind of arrogant and domineering way that someone too mixed up in power for too long does when forced to show itself.
Perhaps Lamont was not as polished as he could have been, but he has held up a mirror to the face of power.
The back drop of this debate is Lieberman's declaration that he will run regardless of the outcome of the primary. It is the breaking into the open of a war between establishment insiders and the base of the party. This war is beyond ideology, it features progressive darlings such as Boxer going out for Lieberman, not merely pro forma supporting him. It featured a month long weasel word fest from Senator Chuck Schumer about whether the DSCC would back Lieberman if he ran as an independent.
The opening statements set the tone - with Ned Lamont making an open appeal to primary voters who are involved in their communities, with Lieberman taking an almost snide tone in his declaration of why he needs to be sent back to Washington. America hates people who have become Washingtonized, and this moment was one reason. People vote for representatives so that the world close to home can be taken care of. They want people in Washington who haven't forgotten that life is about roads, schools, hospitals and jobs - not about some personal quest for aggrandizement.
Lieberman rode a wave of anger at another Senator into power, with some of the most brutal and stinging negative ads run to that time. Lieberman is still at home in the small screen world of small screen politics. He filled the podium easily, he seemed like he was in his living room almost chatting with, and chatting up, the moderator.
Lamont, on the other hand, his thin face bringing back to mind every preacher, every Jimmy Stewart movie. It is something potent and powerful in American politics, where "angry" is bad, because it signifies people who can't make it. But "outrage" is good, because it signifies people who have made it, who will suffer the slings and arrows of outrageous inbred power no longer, but have, by taking arms against a sea of troubles, decided to end them.
It is a thin face that was last seen thundering across the political landscape during the tax revolts of the late 1970's, and found its final form with a gaunt one time actor spitting out: "Government isn't the answer, government is the problem." Lamont's straight forward gutting of the fog of Lieberman's excuses about the war and the rush to it had the same quality. Lamont is here, in his own terms, to get the war off of people's backs.
Wars in Democracies are often like this - entered into with an easy sense of superiority at a nation of consent and consensus - and ended in the same way, with a sense of frustration that arms and men cannot make republics. This is why American wars have often been very short, or very long.
Lieberman, a child of the Reagan generation, with its easy arrogance of power, thinking that all that was needed to run the world was a certain sharpness, has failed to understand that whole eras of American government have been made, or broken, on the backs of a single throw of the dice.
It was Jackson's easy little war against the Cherokee that opened up the rest of the deep south. It was Tyler's belated commitment to a war in Texas that opened up the chance for Polk to take the Presidency, and then Taylor to take it back for the Whigs. The run of victories created an easy arrogance of power around a southern officer class, which then proceeded to commit to a "War between the States" even as prominent Republicans in the Union wanted an external war to unify the nation.
The liberal coalition won a string of the largest wars in history - World War I and World War II, plus creating and effectively winning the Cold War, though fewer people knew it at the time than history has made evident. However this easy arrogance of victory of mobilization and brain power became lost in the quagmire.
The reactionary era, which began inauspiciously with Nixon's slender victory in 1968, came because a large section of Americans decided they wanted a reactionary republic to work, and gave it chance after chance. They gave it credit for winning the cold war, for surviving the demographic crime wave. Thus arose a crop of leaders, secure in the belief that all that was really needed was to hit the right angle and ride out the temporary ups and downs.
This arrogance was writ large on Lieberman, a man whose career came by tacking back and forth. Tacking left symbolically on issues such as global warming, but then right when it came to needling the progressive ideas of his own party. It was bipartisanship in the same sense that brokeback marriages are bisexual. This process has won him defenders among his fellow Senate Democrats, from conservative Ken Salazar to progressive Barbara Boxer. He's often there on the little vote that senators go back to their constituents with, small bones to make up for the vast sell-outs of the past 30 years.
The difference in demeanor of the Senator from Connecticut has been remarked upon: he was deferential and low key with Cheney, but nasty and brutal with Lamont. The conclusion: Lieberman is really a member of "that other Republican Party", and finds little real difference with Cheney, but must slag anyone who attempts to be a Democrat in his sight and hearing. Lieberman is like Treasury Secretary Paulson, a moderate Republican concerned a bit, but not too much, about the costs of our present way of doing business.
But this is a year where the god of small things is no longer watching over incumbents, because the small things have been going wrong for Americans. And in such times, they don't blame their own small decisions, but the large looming clouds. Two hang over Lieberman in specific: one is the war, and the other is his absence of leadership. He is, in almost all respects, an anti-leader. The iconic hero of people who want government to do nothing efficiently, and who feel that being unattached to "either extreme" is the mark of a great statesman.
This logic has buffered his career, and he ran back to it tonight in attempting to needle Lamont as being stupid. But it made no headway, because Lieberman's judgment has been wrong - and his commitment to an extremist war, and his unwillingness to dig down and fight an extremist judiciary, have led him to a place where his party core does not support him. This is coupled with the economic reality that the metropolitan economy, resurgent, even if grungy and not resplendent, under Clinton, has turned hard and slow. The very smart people who, in Lamont's words "play by the rules and make good decisions" are not seeing rewards in general, beyond a booming hedge fund market in a small corner of the state. The Republican squeeze on blue states, both in terms of inflation policy and pork policy, has made for an angry electorate that does not like Mr. George Walker Bush.
Thus the person who is committing political suicide is Lieberman, and he is dragging down with him a party establishment willing to spend millions to protect itself, rather than go on the attack against the Republicans. The whole reason for nominating right leaning Democrats in Pennsylvania and Virginia, is that they would save money for harder fights elsewhere. But Lieberman is spending the money that might be needed to send Claire to the Senate, or perhaps to stage an upset in Montana.
This dynamic, where the Democratic Party is willing to do its all to protect the policies of George Walker Bush, regardless of the wishes of the electorate, is the ur source of the Lieberman sneer, and the ur reason that those who get to know Ned, seem to like him. After all, if Lamont were still polling at 10% of the primary electorate, the spectacle of Lieberman looking into the camera and telling a multi-millionaire businessman that he is stupid, would not have happened at all.
Thus the image that we are likely to take away is the face of an earnest morality, a revolt against the slickness that Lieberman has come to represent, and a revolt against being bribed with small things in order to bear the large penalties heaped upon us. Because Connecticut has a strong old Yankee moral streak in its politics, and the long face of Ned Lamont stares back into its puritan origins, and by doing so has shaken off the label of angry and replaced it with another.
Ned Lamont thumped the bible of American faith in doing well by doing good, and told Americans that the way out of the hell of Iraq, was to return to the gospel of hard work, and away from the get rich quick world of George Walker Bush and the Reagan Repetition.
I got the same impression -- the Jimmy Stewart one. And Lamont also reminded me of that fellow, I believe his name was Harry Taylor, who spoke up to Bush during one of his appearances, and later all the blogs noted Taylor's uncanny resemblance to one of Norman Rockwell's paintings (about the Four Freedoms).
I think Joe lost the debate for himself tonight. He sounded like a Republican and acted like a Fox-News republican. For someone so "polished," he should have sounded confident and charming; instead he came off as brutal and dismissive.
I think Lamont was very smart not to go for the jugular -- if he had the whole debate would have devolved into the usual neocon talk radio everyone-speaking-at-once shoutfest.
I liked what Lamont said but I am a supporter, so I can't claim to be objective. But I really thought Joe lost this one for himself tonight.
Now it's up to the voters of CT -- and I'll be watching this race closely.
Excellent post, Stirling, as usual!
July 6, 2006 7:54 PM | Reply | Permalink
Didn't get a chance to watch the debate, as I was watching a production of Sam Shepard's new play, God of Hell. But generally Lieberman comes across rather like the Claude Rains character, Sen. Joseph Harrison Paine, in the Mr. Smith Goes to Washington film. Not the graft bit, but the "go along and listen to the older wiser guy" bit. Sanctimonious and just a wee bit sleazy. Great post. For the one or two (maybe three) who don't know the movie...here's a link:
Mike
July 6, 2006 9:01 PM | Reply | Permalink
"It is the breaking into the open of a war between establishment insiders and the base of the party."
The 'netroots' are not the base of the party.
The 'netroots' are an overwhelmingly upscale, white, well-educated wing of the Democratic party, more concerned with social and foreign policy issues than the economic issues that motivate the actual Democratic base.
Folks like Chuck Schumer and Barbara Boxer are much more interested in representing the base than the 'netroots' are.
The base of the party in CT are African-Americans and blue collar whites. The verdict on Lieberman will come from them, not from the 'netroots'.
July 6, 2006 9:53 PM | Reply | Permalink
I'm impressed by how profoundly shallow your analysis is. The fight between the new political space and Lieberman goes way back. The decision to circumvent the primary system, however, is a slap at primary voters. These people are the base of the party.
And as for the relationship between African American voters and the DC establishment, they have been troubled for a very long time, because, in the view of many in the Democratic African American community, the leadership has all too often not been there on the issues.
Stirling Newberry
July 6, 2006 10:32 PM | Reply | Permalink
Hence I found this interesting:
Hartford Courant, June 30:
I saw a photo related to this particular ad campaign: 5 smiling teen kids "of color" happy as clams to be with teacher Ned.
Also I note that his website is a very simple, humble affair, easy to navigate & written at a relatively low reading level, even though I am sure that his netroots fans combined with his money could get him all the bells and whistles one might want. Also, if you click on "about Ned" (first name used,) you will read that he and his family live in "Fairfield County," not Greenwich.
Bridgeport, that's the ticket.
We'll see how it goes soon enough; after all, they're both rich white bossman types.
;-)
July 6, 2006 10:51 PM | Reply | Permalink
"We'll see how it goes soon enough; after all, they're both rich white bossman types."
If Liberman wins the primary, it'll be because they don't both play that way in CT politics.
Lieberman should have a significant class and ethnic advantage over a Greenwich Yankee. It's the only reason his problems are potentially survivable in the primary.
July 6, 2006 11:20 PM | Reply | Permalink
If it's economic issues that motivate the Democratic base, why have the Dems lost in recent years when they focused so heavily on the economy (and ignored things like the war)?
July 7, 2006 12:06 AM | Reply | Permalink
Iraq and the economy are the same issue. And the democrats were not focused on "the economy" but on marginal bribes for the middle class.
People aren't going to trade victory and tax cuts for slightly better medicare.
Stirling Newberry
July 7, 2006 12:55 AM | Reply | Permalink
If you believe this, then you haven't looked at how the war is viewed by the working Democrats in the city environment. Ned had much of his strongest grass roots support in Hartford and other places where the voting population is not exactly upscale.
It is Joe Lieberman whose base is the upper middle class. It's interesting how his proxies are going down scale at this particular moment, it reeks of a dishonest desperation.
Stirling Newberry
July 7, 2006 1:00 AM | Reply | Permalink
This war is beyond ideology, it features progressive darlings such as Boxer going out for Lieberman, not merely pro forma supporting him.
My understanding is that the announcement that Boxer would be stumping for Lieberman came out of the Lieberman campaign, and Boxer is stepping away from, if not completely disowning, the idea.
Kudos to HRC for announcing that she will respect the decision of the Democratic primary voters in CT. The timing was significant. Schumer's weasel words about DSCC support are a disgrace. That and his involvement in the Hackett fiasco throw into stark relief the sense of entitlement in the Senate. Perhaps when Dewine is reelected in Ohio, someone can ask for his head.
Assuming Lieberman loses the primary and wins reelection as a moderate Republican, I wonder if he will still be attractive to McCain as his 08 running mate?
July 7, 2006 3:43 AM | Reply | Permalink
If we could only get as much passion into defeating Republicans as we do in political infighting...
Every dollar spent on this primary could have been spent helping a Democrat in a shaky race instead. I don't have any strong feelings about either candidate, although I think Lieberman is a bit too conservative, but then so is much of the country these days.
Statistically speaking the Dems have a remote chance of winning either the house or senate and all this internal squabbling is music to the ears of Karl Rove.
--- Policies not Politics
Daily Landscape
July 7, 2006 6:41 AM | Reply | Permalink
My initial impression on this Connecticut race is the same as yours: seeing all the passion against Lieberman would be best spent fighting to win every other seat held by a Republican. However, maybe replacing Lieberman is the first step toward eventually recapturing the political marketplace of ideas. Running just slightly to the left of the conservatives has only served to find us in the minority in all three branches of government.
July 7, 2006 7:52 AM | Reply | Permalink
In defense of "infighting" and maybe also the "50 state strategy," I must say that the point is not to fill the Senate with right wing Democrats, but instead to shift it leftward.
Establishment types want to hammer away on liberal Republicans where they think the electorate is more receptive. The problem is not liberal Republicans though. Similarly, if you have a Zell Miller or Joe Lieberman who supports swindles and lies, it doesn't really matter that they happen to have a "D" next to their name.
Lamont supporters should take solace in this iron-clad rule of TV debates: They don't matter much.
Polls showed that Kerry won every debate with Bush. Walter Mondale was said to have slapped down Reagan in debates, and he actually outpolled Reagan for the only time in the campaign after the "Where's the Beef" line.
Much is made of the Nixon JFK debate, but it all focuses on sweating and five o' clock shadow. Radio listeners thought Nixon won handily.
As long as Lamont doesn't hand Joe a Bob Dole "Democrat Wars" line or Ford's bizarre claim that eastern Europe wasn't dominated by the USSR, Lamont supporters shouldn't worry.
July 7, 2006 8:45 AM | Reply | Permalink
King Elvis: "" I must say that the point is not to fill the Senate with right wing Democrats, but instead to shift it leftward."
Why is it so hard for this elementary and basic point to resonate more than it does? We do not want a Democratic administration, Congress, Judiciary whatever, leading the country in the same essential direction as the Republicans. We want to change the direction of this country in some basic ways. Government for people not corporations, care for the environment, human and civil rights, and a foreign policy that promotes peace and justice not American domination and war.
July 7, 2006 8:54 AM | Reply | Permalink
Brutal and dismissive. I think if there's one thing that everyone who votes in the next several years will be voting against, it will be the brutishness and scorn which has been the trademark of the right.
July 7, 2006 9:36 AM | Reply | Permalink
You're not wrong about possible outcomes, but you may have lost touch with the democratic process. The left, at its best, fights from county level on outward and upward, not from the top down. Unlike the right which has won two elections precisely because it's willing to trade democratic process for a Karl Rove (and others at high levels) pulling strings, organizing autocratically, from the center outward, from the top down. Of course it's tempting to emulate them, play dirty and win. But it's not democracy. That leaves us with a choice.
There's also the fact that, should Lieberman win, he will not be the same Lieberman who's been oiling his way along the floor of the Senate. He will have been made aware that he's no longer respected by a serious chunk of his party. He has been testing too many old friends a little too hard. Unless he's a robot, that should make him less willing to continue in his role as vinyl doll to the Administration's voracious appetites.
July 7, 2006 9:55 AM | Reply | Permalink
Yes, there was a time that Democrats were the reactionaries and Republicans the party of 'reform.' If every member of the GOP had Arnold Schwarzenegger's views on energy and transportation, I'd probably vote GOP. If they were all like Arlen Specter on women's rights or McCain on campaign finance, they'd be great.
July 7, 2006 10:03 AM | Reply | Permalink
I remember during the Vietnam War how the war Democrats, Johnson and Humphrey and how can we forget Scoop, were as bad if not worse than the Republicans on this central issue.
July 7, 2006 10:55 AM | Reply | Permalink
Mondale used his "where's the beef" line against Gary Hart during the primaries.
Ironically, Hart had some quite substantial ideas.
July 7, 2006 11:16 AM | Reply | Permalink
Lamont needed mostly to show that he was capable of standing on the same stage with Lieberman. That he did.
Lieberman needed to show that his experience could benefit the state more than Lamont's different policy approach might. That he did not do.
Worse, he showed that his personality is an affront to the Democratic Party voters who must decide between the two. Lamont might have exhibited some lack of polish last night, but Lieberman showed a total lack of humanity.
Why do we want to send a thug back to the Senate? That's the question his performance raised. Even the Republicans will be reconsidering their posture if Lieberman can successfully plant himself on the ballot in November as an independent.
July 7, 2006 11:21 AM | Reply | Permalink
Lamont supporters should take solace in this iron-clad rule of TV debates: They don't matter much.
Is your point that you think Lamont lost the debate? It's hard to figure out, is there any post-debate polling we can look at?
By the way, I think you're wrong about debates, especially for the non-incumbent, so I'm curious about Lamont. Debates are a chance for outsiders to establish their legitimacy and 'weightiness' up against an incumbent. Bush, for example, accomplished that in 2000, but only with the aid of the anti-Gore punditocracy, which took Gore's debate win and lied and poor-mouthed it into a loss, and did the reverse for Bush.
July 7, 2006 11:28 AM | Reply | Permalink
I thought it was Reagan too. I do remember Mondale saying in 2004 during the Dem convention that he only polled above Reagan after he triumphed in one of the debates.
July 7, 2006 11:32 AM | Reply | Permalink
I didn't see the debate, just reacting to the Conventional Wisdom that Lieberman opened up the proverbial can a' woop@ss.
I have never once watched a debate and thought, "Hey y,know I'm voting Republican - this guy's very convincing."
Even among that increasingly narrow "swing vote" or independent vote, do the debates really matter much? As said before, I think "swingers" (I like that name better!) would only react to some king-sized blunder.
The debates on who won debates are all double talk from partisans about how their guy either won on points or won the "perception" game. I remember BushCo's claims about their guy were laughable. They basically maintained that as long as Bush didn't defecate in his pants or scream "NAZIS NAZIS" he 'won.'
July 7, 2006 11:45 AM | Reply | Permalink
I'm not defending Lieberman, but...
What is the objection to him other than his stance on Iraq? By the time the elections are over the situation in Iraq is likely to be much different than now. So are we supposed to be punishing him for what he did in the past?
Kissing the ring of the mafia don doesn't mean you love him, it means you are willing to kowtow to power in the hopes of getting something in return (say keeping the facility at Groton open).
The dems put through lots of good policies during the period that they were allied with the Dixiecrats. That's the way politics works, something about "strange bedfellows" as I recall.
--- Policies not Politics
Daily Landscape
July 7, 2006 12:58 PM | Reply | Permalink
A good point, and needed injection of reality in the debate debate. Nice "long view."
But,
Didn't Lieberman sign on to the Social Security swindle from Bush? That'd be a major reason to vote him down.
July 7, 2006 1:05 PM | Reply | Permalink
You mean besides just about everything else?
Seriously, any time there is a difficult issue, you can count on Joe Lieberman to vote with the Republicans.
July 7, 2006 2:59 PM | Reply | Permalink
1)"What is the objection to him other than his stance on Iraq?"
small matter indeed. He has attacked antiwar Democrats as unpatriotic but he is definitely deserving of our votes and perhaps financial contributions. "kick me, kick me, I'm a liberal" Why even raise such a small matter...very impolite.
2)"By the time the elections are over the situation in Iraq is likely to be much different than now. "
maybe his votes on Iraq give you some idea of how he approaches Iran? Maybe the situation in Iraq, different from now, will be worse? Maybe Joe will continue to support policies that make the situation even worse?
3)"Kissing the ring of the mafia don doesn't mean you love him, it means you are willing to kowtow to power in the hopes of getting something in return"
and again, why do we have to kiss his finger? I know HE expects it but I guess if you are on your knees already to receive the kick (see 1 above), you can turn around and kiss the ring.
4)"The dems put through lots of good policies during the period that they were allied with the Dixiecrats."
and not only that, the civil-rights-Dems only really disagreed with the Dixiecrats on the one small issue of segregation and lynchings and voting rights. Why the hell DID they keep raising these nagging issues? Why DID all those integrationist Dems refuse to vote or support segregationist Dixiecrats over one single issue. Damn one-issue leftists!!!
July 7, 2006 3:49 PM | Reply | Permalink
"that should make him less willing to continue in his role as vinyl doll to the Administration's voracious appetites."
or, feeling betrayed by "garden slug" Dems who think for themselves, he becomes still more prickly until at a key moment, like Zell Miller, we hear how "he hasn't left the party, the party left him" as he endorses the Republican standard bearer in 2008.
July 7, 2006 7:46 PM | Reply | Permalink
I strongly second your comments about Schumer. Lieberman himself raises the issue of his running as an independent and the DSCC refuses to state clearly what its policy will be if the Democratic candidate is not Lieberman. What the DSCC is in other words is an elite club quite distinct from the political party that calls itself Democratic. I would urge anyone who wishes to support a Democratic candidate to do so directly without the intermediate intervention of the club.
July 7, 2006 7:51 PM | Reply | Permalink
"It is Joe Lieberman whose base is the upper middle class. It's interesting how his proxies are going down scale at this particular moment, it reeks of a dishonest desperation."
It is interesting to me how when anyone disagrees with the quality of your political wisdom, you accuse them of being someone's proxy or being paid off.
But beyond that, do you really think the Lamont vote on 8/8 won't be more upscale and better educated than the Lieberman vote? If you do, either you live in a similar uber-spin zone like Markos does, or you have a remarkably tenuous grasp on how the party factions break down in this race.
July 8, 2006 7:11 AM | Reply | Permalink
The Microsoft Graph is becoming more and more an essential technology inside the Microsoft ecosystem. It was born as a set of APIs specific to integrating apps with the Office 365 suite, but it's expanding more and more to become the single point of access to all the data related to a user authenticated with a Microsoft Account or a work / school account.
Many of the Windows and cross-device experiences that have been launched or showcased at BUILD are powered by the Microsoft Graph: Timeline, Project Rome, the Your Phone app, etc.
The integration of the Microsoft Graph in your application always starts from authentication. In order to access all the information of the user, we need to make sure they're logged in in a secure way. A great way to handle this scenario is using Azure Functions. We can use them to quickly build an ecosystem of REST APIs that interact with the Microsoft Graph and leverage the built-in authentication support.
There's a lot of documentation around this scenario and the Azure team has even built a couple of extensions to make this scenario really easy to implement. However, right now there are a couple of open issues which can drive you crazy, so I've decided to share the right steps to follow to make this scenario working.
Setting up the Azure Function
Azure Functions are a new approach to building APIs based on a serverless paradigm. This means that we don't have to worry about the underlying infrastructure of the API, but can just focus on what we want to achieve. For example, compared to a traditional WebAPI project, we don't have to handle and maintain the web application, configure the routing, create a Visual Studio project, etc. With Azure Functions we can just write the code of the method we want to expose, using one of the available languages (Java, C#, JavaScript, etc.). The function is associated with a trigger, which is the event that causes its execution. It can be an HTTP trigger (executed when a client hits the HTTP endpoint associated with the function), a timer trigger, or an event like a file being created in the cloud.
Another big difference compared to the standard WebAPI approach is billing: Azure Functions are paid per execution and, as such, you're going to be billed only when your functions are effectively being used. If no clients are invoking your function, you won't be billed. This makes Azure Functions quite cheap: with an Azure subscription you get 1 million free executions; every subsequent execution is billed at $0.20 per million.
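As a quick sanity check on those numbers, the execution charge can be sketched like this. Note the simplification: the figures are the ones quoted above, and real consumption-plan bills also include a memory-time (GB-seconds) component that is ignored here:

```javascript
// Back-of-the-envelope estimate of the per-execution charge described above.
// Simplification: only the execution meter is modeled; the consumption plan
// also bills GB-seconds of memory time, which this sketch ignores.
const FREE_EXECUTIONS = 1e6;      // free monthly grant quoted in the article
const PRICE_PER_MILLION = 0.20;   // USD per million executions beyond it

function estimateExecutionCost(executions) {
    const billable = Math.max(0, executions - FREE_EXECUTIONS);
    return (billable / 1e6) * PRICE_PER_MILLION;
}

console.log(estimateExecutionCost(500000));   // 0   (fully inside the free grant)
console.log(estimateExecutionCost(3000000));  // 0.4 (two million billable executions)
```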
In our sample we're going to build an Azure Function, which returns all the basic information about an AAD user using the Microsoft Graph.
Let's start by logging into your Azure Portal. Choose Create a resource and select Serverless Function App.
Now you need to setup your Azure Function, by specifying:
- The app name, which will become the subdomain of .azurewebsites.net and thus the URL of your web service.
- Your Azure subscription, in case you have more than one.
- A resource group: you can create a new one or use an existing one.
- The OS where you want to host the function. Since the 2.0 version of the runtime (currently in preview) is based on .NET Core, you can also opt for a Linux backend. This post is based on a Windows backend.
- Hosting plan: you can choose the consumption model (the one I've described before) or, if you prefer, you can host the function inside an App Service and use the same hosting plan.
- The Azure region.
- The storage account which will back the function's infrastructure. Here too, you can create a new one or leverage an existing one.
- Whether you want to turn on Application Insights to gather analytics and exception logs during usage.
Now you just need to wait a while for the portal to finish creating the required infrastructure. Once everything is set up, open the dashboard of the Azure Function you have just created. As a first step, move to the Platform features tab and choose Function app settings. Here you need to change the Runtime version from ~1 to beta (which currently means the 2.0.xxxxx.0 version). This is required because the Microsoft Graph extensions we're going to leverage are supported only by the newest version of the runtime. We need to do this first because we can't change the runtime once we have created one or more functions.
Please note: make sure that the runtime version you're using is at least 2.0.11888.0. Previous versions contain a bug which prevents the usage of the Microsoft Graph extensions we're going to leverage later.
The second important thing we need to do is set up authentication. Since Azure Functions are hosted by an App Service, we can leverage a feature called Easy Auth. This means we can add support for various authentication providers (AAD, Microsoft Account, Facebook, etc.) without manually implementing the flow: we just share with Azure the authentication information the provider has issued to us (typically an application identifier and a client secret).
In our case we're going to use our function to access the Microsoft Graph and, as such, we're interested in enabling authentication through Azure Active Directory. Move again to the Platform features tab and, this time, choose Authentication / Authorization.
Turn on the App Service Authentication switch to reveal all the different configuration options and the list of supported providers. First, from the dropdown labeled Action to take when request is not authorized, choose Login with Azure Active Directory. This means that our function won't allow anonymous calls; the user will be required to authenticate using AAD when the function is invoked.
Then click on Azure Active Directory, where you will be asked to choose between the Express and Advanced configuration. The easiest one is Express, but it only allows you to configure authentication against the same Active Directory linked to your Azure account. For example, this is what I see when I use this option with my Azure subscription, which is linked to my @microsoft.com account:
The advantage of the Express configuration is that I don't have to manually register my application with my Azure Active Directory tenant. I just need to choose the option Create new AD App, give it a name (by default, it will be the same name as your Azure Function) and then press OK. The Azure portal will do everything for me.
Otherwise, if you want to connect your application to an Active Directory tenant different from the one linked to your Azure subscription, you can choose the Advanced mode, which will require you to register your application inside that Azure Active Directory tenant and then specify the Client ID, Issuer URL and Client Secret returned by the portal, in addition to the callback URL.
Here you can find all the steps to follow in case you need to perform the manual configuration.
Based on the usage you want to make of the Microsoft Graph, you may want to click on the Manage Permissions button and choose which information you want to expose. Keep in mind that some permissions require admin consent to be enabled, so you will need to be an Administrator of the Azure Active Directory tenant you're trying to set up. In my case, I'm just looking to get the user's profile, which is enabled by default.
Once you have completed the setup, press OK and then Save at the top of the Authentication / Authorization blade. Now we're ready to write some code! Move to the Functions section in the tree on the left and choose New function. Choose HTTP Trigger (we want to build a REST endpoint), select one of the supported languages (in my case, C#) and give it a name.
As Authorization level choose Anonymous. By default this value is set to Function, which means that our function would be protected from unauthorized calls and we would be required to add a secret key to its URL in order to invoke it. In our case, since we already have a level of protection (the AAD authentication), we can safely set this option to Anonymous. This way, we will be able to invoke the function simply by calling the URL associated with it.
The function editor will appear and it will contain the basic skeleton of a function. As you can see, an Azure Function is really just... a function! There's no infrastructure, no web project, no configuration. Just a Run() method, which receives as input the HTTP request that triggered it.
If we want the full power of an IDE, we also have the option of creating the function in Visual Studio or Visual Studio Code. In this case, we get access to all the Visual Studio features (IntelliSense, local debugging, etc.) and the opportunity to include libraries and components using NuGet. However, we lose the opportunity to edit and configure the function directly in the Azure portal.
For the moment, let's skip the IDE approach, since the Microsoft Graph integration still has some hiccups there, and write the code directly in the web editor. Here is how our function looks:
using System.Net;
using System.Net.Http;
using System.Net.Http.Headers;
using System.Text;

public static async Task<HttpResponseMessage> Run(HttpRequestMessage req, string graphToken, TraceWriter log)
{
    HttpClient client = new HttpClient();
    client.DefaultRequestHeaders.Authorization = new AuthenticationHeaderValue("Bearer", graphToken);
    var me = await client.GetStringAsync("https://graph.microsoft.com/v1.0/me");
    return new HttpResponseMessage(HttpStatusCode.OK)
    {
        Content = new StringContent(me, Encoding.UTF8, "application/json")
    };
}
Thanks to the Easy Auth approach supported by the App Service, we don't have to perform the authentication against AAD ourselves. We can just add a parameter to the Run() method (called graphToken in this case) which will contain the authentication token required to perform any operation against the Microsoft Graph.
As such, we can just build a new HttpClient object and set the authentication header, using the DefaultRequestHeaders.Authorization property and the AuthenticationHeaderValue object. This object is just a key-value pair, composed of the name of the scheme (Bearer) and its value (the token).
Then, using the GetStringAsync() method exposed by the HttpClient object, we hit the Microsoft Graph endpoint to retrieve the basic information about the logged-in user. We take the returned JSON and send it back as the result of the function, encapsulated inside an HttpResponseMessage object.
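Since Azure Functions also support JavaScript, the same request could be sketched in Node.js terms. The helper below is illustrative (buildGraphRequest is not part of the article's code); it only assembles the request options, with the token attached as a standard Bearer header:

```javascript
// Build the HTTP request options for the Microsoft Graph "me" endpoint,
// attaching the injected token as a standard Bearer authorization header.
// buildGraphRequest is a hypothetical helper, not part of the article's code.
function buildGraphRequest(graphToken) {
  return {
    hostname: 'graph.microsoft.com',
    path: '/v1.0/me',
    method: 'GET',
    headers: {
      Authorization: `Bearer ${graphToken}`, // same scheme the C# snippet uses
      Accept: 'application/json',
    },
  };
}
```

With Node's built-in https module, `https.request(buildGraphRequest(token), ...)` would then perform the equivalent of the GetStringAsync() call above.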
This is just one of the many available endpoints exposed by the Microsoft Graph. If you want to see all of them and understand how they work, you can use the great Graph Explorer tool.
Now that we have created the function, we need to customize the bindings. Bindings are the way the various input and output parameters of the function are translated. As you can see, we have written a function which accepts, as input, an HttpRequestMessage object and a token as a string (plus a TraceWriter object used for logging purposes); as output, it returns an HttpResponseMessage object. However, we haven't specified anywhere where this input and output data comes from. We can do this from the Integrate section, which you can find on the left when you expand the function you have created.
By default, you will see a visual editor that will allow you to configure the trigger, the inputs and the outputs.
Let's start from the Triggers.
In most cases, the default options will be fine. The trigger, by default, supports the GET and POST HTTP methods and, based on the function's code we've written before, the name of the parameter which contains the HTTP request is req.
Now let's move to the Inputs section.
Click on the New Input button and choose Auth token from the list. Complete the configuration by filling in the other fields:
- Under Identity choose Use from HTTP request, since the authentication token will be included directly as a header of the request.
- Set as Resource URL the domain we want to use the authentication for. In our case it's https://graph.microsoft.com.
- Set as Auth token parameter name the name of the input parameter which will contain the authentication token. Since we have called the parameter of the Run() method graphToken, change the default value to graphToken.
If you have done everything properly, the box in the middle of the page should display a list of green check marks, telling you that the authentication configuration is correct.
I will skip the Outputs section, since it doesn't require any special configuration: it just specifies which value is returned by the function.
All the information we have set up so far is translated into bindings, which are described inside a JSON file called function.json. You can see the content of this file by clicking on the Advanced editor button at the top right of the Integrate section:
{
  "bindings": [
    {
      "authLevel": "anonymous",
      "name": "req",
      "type": "httpTrigger",
      "direction": "in",
      "methods": [ "get", "post" ]
    },
    {
      "name": "$return",
      "type": "http",
      "direction": "out"
    },
    {
      "type": "token",
      "name": "graphToken",
      "resource": "https://graph.microsoft.com",
      "identity": "UserFromRequest",
      "direction": "in"
    }
  ]
}
As you can see, it's nothing more than a JSON representation of the configuration we just performed visually.
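Since the token binding's name has to match the Run() parameter exactly, a small sanity check over function.json can be sketched like this (hasMatchingTokenBinding is a hypothetical helper for illustration, not part of the Azure tooling):

```javascript
// Verify that a function.json document contains an auth-token input binding
// whose "name" matches the parameter name used in the function's signature.
// Hypothetical sanity check, not an Azure tool.
function hasMatchingTokenBinding(functionJson, parameterName) {
  return functionJson.bindings.some(
    (b) => b.type === 'token' && b.direction === 'in' && b.name === parameterName
  );
}

// The bindings from the configuration described above:
const bindings = {
  bindings: [
    { authLevel: 'anonymous', name: 'req', type: 'httpTrigger', direction: 'in' },
    { name: '$return', type: 'http', direction: 'out' },
    { type: 'token', name: 'graphToken', identity: 'UserFromRequest', direction: 'in' },
  ],
};

console.log(hasMatchingTokenBinding(bindings, 'graphToken')); // → true
console.log(hasMatchingTokenBinding(bindings, 'token'));      // → false
```

A mismatch here is exactly the kind of setup problem that surfaces later as a "No value was provided for parameter" binding error.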
Now we're ready to test the function. Go back to the code editor and click on the Get function URL button. Copy the URL associated with the function (it will be something like https://*my-custom-domain*.azurewebsites.net/api/*my-function-name*) and paste it into your browser. I suggest using an InPrivate instance of your favorite browser, to avoid being automatically logged in with the wrong account.
The first time you hit the URL, you will be asked to log in with your AAD account and to authorize the web application (in this case, the Azure Function) to access your profile. Complete the login and observe... your function failing miserably with a server error.
If you go back to the code editor, you will find a section called Logs at the bottom. Open it to get access to the real-time logs and you should see an error similar to the following one:
[Error] System.Private.CoreLib: Exception while executing function: Functions.GetUserProfile. Microsoft.Azure.WebJobs.Host: Exception binding parameter 'graphToken'. Microsoft.Azure.WebJobs.Host: No value was provided for parameter 'graphToken'.
Please note: if, instead, you see an exception like:
'Microsoft.Azure.WebJobs.Extensions.Tokens, Version=1.0.0.0, Culture=neutral, PublicKeyToken=null'. The system cannot find the file specified
it means that you're using a runtime version lower than 2.0.11888.0. In this case, there's nothing you can actively do. You just need to wait for your Azure Function instance to get the required update.
The reason you're seeing this exception is that older versions of the Microsoft Graph extensions contained bugs that prevented the binding from working properly. As such, the graphToken parameter of the Run() method isn't being injected with the authentication token.
To fix this, we need to force an update of the extensions to the latest version, which contains the required fixes.
To do this, let's return to our Azure Function dashboard and stop the function. Then move to the Platform Features section and choose Advanced Tools (Kudu). This will open the advanced diagnostic tools for the App Service.
Choose Debug console from the top menu, then click on CMD. This opens a command prompt in the server folder which contains the web application that powers your Azure Function. Move to the site --> wwwroot folder.
Now, using your favorite editor (in my case, Visual Studio Code), create a new file on your machine called extensions.csproj and add the following content:
<Project Sdk="Microsoft.NET.Sdk">
  <PropertyGroup>
    <TargetFramework>netstandard2.0</TargetFramework>
  </PropertyGroup>
  <ItemGroup>
    <PackageReference Include="Microsoft.Azure.WebJobs.Extensions.MicrosoftGraph" Version="1.0.0-beta3" />
    <PackageReference Include="Microsoft.Azure.WebJobs.Extensions.AuthTokens" Version="1.0.0-beta3" />
    <PackageReference Include="Microsoft.Azure.WebJobs.Script.ExtensionsMetadataGenerator" Version="1.0.0-beta2" />
  </ItemGroup>
</Project>
Now drag and drop the file from your computer into the list of files in the wwwroot folder. Once the upload has completed, move to the command prompt below and type the following command:
dotnet build -o bin extensions.csproj
The process will take approximately 3 minutes. It will take care of downloading the latest version of the Microsoft Graph packages (defined in the csproj file you have just created) and compiling them inside your Azure Function folder.
Now feel free to close the advanced tools and return to the Azure Portal. Start the function again, then open the logs and hit the URL once more. This time... the function will fail again, but with a different exception:
Microsoft.IdentityModel.Clients.ActiveDirectory: Response status code does not indicate success: 400 (BadRequest). {"error":"invalid_request","error_description":"AADSTS240002: Input id_token cannot be used as 'urn:ietf:params:oauth:grant-type:jwt-bearer'"}
This problem is caused by the configuration of the Azure Active Directory application: with the default parameters, the authentication token isn't properly recognized. To fix this, we need to change a value in the manifest of the AAD application.
Let's return again to the Platform Features section of our function and open the Authentication / Authorization section. Click on Azure Active Directory under the Authentication Providers section and then press the Manage Application button at the bottom.
Click on Manifest and, inside the JSON editor, look for the attribute oauth2AllowImplicitFlow, which should be set to false. Change it to true and then save.
Now close every instance of your browser or open an InPrivate window: we need to trigger a new authentication flow in order to propagate the change to the client (in our case, the Azure Function). Hit the HTTP URL of your function again and... fingers crossed, this time you should see the JSON response with your AAD profile displayed in the browser!
Wrapping up
In this post we have seen the power of Azure Functions combined with the Microsoft Graph. Thanks to the built-in authentication support, handling the whole authentication process is much easier than implementing everything on your own. However, right now you may hit some blockers when building an Azure Function which connects to the Microsoft Graph, due to some bugs in the function runtime and in the Graph extension SDKs. Hopefully they will be fixed soon but, in the meantime, this post should give you all the information you need to get up and running.
Happy coding!
great article, thanks Matteo!
https://blogs.msdn.microsoft.com/appconsult/2018/06/27/using-microsoft-graph-in-an-azure-function/
Hi all
I'm working on a Flex app that is using ResourceManager for localization.
I've set up all the locales, properties files, and Ant script, and everything is running fine.
I find it a bit redundant to write this for every string in the entire app: resourceManager.getString('myBundle','SOME_KEY')
so I've tried to come up with a cleaner way to encapsulate this repetitive task.
instead of implementing a singleton, I simply wrote a global function in the getText style:
so the function usage would simply be: _('SOME_KEY')
nice right?
the function implementation:
package
{
import mx.resources.IResourceManager;
import mx.resources.ResourceManager;
[Bindable("change")]
public function _(key:String,bundle:String='myBundle'):String{
return ResourceManager.getInstance().getString(bundle,key);
}
}
the problem is that this works fine:
<s:Label
but this doesn't:
<s:Label
Using the debugger, I noticed the function is called the first time before the resource bundle is ready.
After the ResourceManager has finished loading the resource bundle modules, this function is not being called for some reason,
so I figured it has to do with the binding.
I made sure I'm using the exact same bindable event metadata tag as the getString method uses on IResourceManager,
but it still doesn't get updated.
Any reason for that?
Do you see anything else I can do to make it work?
thanks
Yariv
anyone?
Can you please help?
Don't use a global function. You need a class in order to listen for events (i.e. to use binding).
This is arguable, but I think a better (safer/more reliable) solution would be to create a singleton model where there is a property for every ID you plan on binding to. Wrap calls to the ResourceManager in the getters and setters, and dispatch a PropertyChangeEvent when the resource manager is finished loading. PROTIP: Create a code template in Flash Builder to auto-generate these getters/setters.
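Framework aside, the suggested singleton-model approach can be sketched in plain JavaScript; the class name, change callback, and loadBundle method below are illustrative stand-ins for the Flex APIs (ResourceManager, PropertyChangeEvent), not real ones:

```javascript
// A minimal sketch of the suggested approach: a singleton model that wraps
// resource lookups and notifies listeners once the bundle has loaded, so
// that bindings can refresh. Names and the change mechanism are illustrative.
class LocalizationModel {
  constructor() {
    this.bundle = {};     // filled in when the resource bundle loads
    this.listeners = [];
  }
  static get instance() {
    if (!this._instance) this._instance = new LocalizationModel();
    return this._instance;
  }
  onChange(listener) {
    this.listeners.push(listener);
  }
  loadBundle(bundle) {
    this.bundle = bundle;
    // analogous to dispatching a PropertyChangeEvent in Flex
    this.listeners.forEach((fn) => fn());
  }
  getText(key) {
    // fall back to the key itself while the bundle is still loading
    return this.bundle[key] ?? key;
  }
}

const l10n = LocalizationModel.instance;
l10n.onChange(() => console.log('bundle ready'));
l10n.loadBundle({ SOME_KEY: 'Hello' });
console.log(l10n.getText('SOME_KEY')); // → Hello
```

The key point is the same as in the Flex suggestion: lookups go through one object that can announce "the bundle changed," instead of a free function that nothing can re-trigger.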
Another option is to create a static class in the default namespace with a shorter function call like: RM.getText('foo')
https://forums.adobe.com/thread/912132
Hey! I made the hashmap static and it works!
Here's the final code:
package de.patrick.colonia.commands;
import java.util.HashMap;
import...
After the tpa command it prints the right uuids, so that seems to be working. When I try to print the same after the tpaccept command, nothing...
Like this? System.out.println(requests.get(player.getUniqueId()));
It just prints null, so there must be something wrong when I'm adding the...
It stops nowhere. It happens exactly as it should.
Yes, already tried. The issue might have something to do with adding the player to hashmap..?
I see, stupid mistake. Changed it, here's the code:
package de.patrick.colonia.commands;
import java.util.HashMap;
import java.util.UUID;
import...
Okay, gonna change it and then try to use uuids. But that doesn't fix the first problem.
[20:54:19 ERROR]: null
org.bukkit.command.CommandException: Unhandled exception executing command 'tpaccept' in plugin Colonia v1.0
at...
Doesn't work. Still got the same problem.
Here's the actuall code:
package de.patrick.colonia.commands;
import java.util.HashMap;
import...
I tried it now with UUIDs and player names. Neither worked.
Hey, I'm trying to make a "tpa" plugin, so a player sends a teleportation request to target and the target can accept the teleport request with...
What I want:
I would like a plugin where you can prevent certain villager trades, like the mending book. This would be great for a survival world....
Hey guys,
I got 2 problems with betterchat.
The first one is that I can't color names in the tab list. It doesn't matter which group it is. Of course...
https://dl.bukkit.org/members/spatziempire.91314588/recent-content
The Ember Run Loop and Asynchronous Testing
You can’t run away from the Run Loop
This summer, I interned on the Square Seller Dashboard team, which works on the web application that Square merchants use for everything from checking sales, to managing their payroll, to signing up for Square Capital. For some context, I’m a rising senior at the University of Chicago and before this summer I’d never worked on a web application.
Image: A screenshot of the Square Seller Dashboard
Dashboard is built with Ember — an open-source web framework for single-page web applications. Working on a team that focused on such a large app meant that I could dive deeply into Ember — and one thing I kept noticing throughout the summer was the special, powerful, and at times, confusing way that Ember handles asynchronous work.
My Project
One project I worked on was displaying a warning before merchants were logged out due to inactivity.
This required monitoring different state throughout the life of the application, and rendering components accordingly. I used an Ember Service, which allowed me to set global state on the app based on code that was executed every 250 milliseconds. Removing the API calls and some of the complexity of a service, the code looked something like this:
import Ember from 'ember';

export default Ember.Component.extend({
  millisecondsElapsed: 0,

  didInsertElement(...args) {
    this._super(...args);
    this.set('initialTime', Date.now());
    Ember.run.later(() => this.monitorTimeWithEmberRun(), 250);
  },

  monitorTimeWithEmberRun() {
    const dateNow = Date.now();
    // the component renders the millisecondsElapsed
    this.set('millisecondsElapsed', dateNow - this.get('initialTime'));
    Ember.run.later(() => this.monitorTimeWithEmberRun(), 250);
  },
});
In development, it worked great. But when I ran acceptance tests, they would hang and then timeout, throwing a bunch of unpredictable errors. You can see that by running the tests in this EmberTwiddle.
Even though this was supposed to be the simple part of the project, I couldn't get a single acceptance test to finish running, much less pass. It turns out the reason my tests were breaking was the way I used Ember.run.later, which is supposed to just run the callback function after the given number of milliseconds. Finding the solution to my problem revealed a lot about how (awesome) the Ember Run Loop is, and the way it interacts with asynchronous testing.
Wait wait, back up. What’s the Ember Run Loop?
It’s nearly impossible to work in the Ember World without running into some mention of the Run Loop, though the docs indicate that many developers won’t deal with it directly—
“working with [the Ember Run] API directly is not common in most Ember apps,”
Whether or not you work with the Ember Run API directly, the Run Loop is fundamental to Ember applications, and is often at the heart of figuring out strange errors and unexpected side effects.
In general terms, the Run Loop efficiently organizes and schedules work associated with events. The work triggered by user events (i.e. mouseDown, keyPress, etc.) is often asynchronous, and Ember can do other work before all the side effects are executed. The Ember Run Loop schedules these side effects and organizes any asynchronous work. Which sounds awesome, but needs some more context.
A bit of a misnomer, the Ember run loop isn’t really a loop — and there doesn’t have to be just one of them. Rather, a run “loop” has six different queues that organize work triggered by events. So, the Run Loop (capitalized) is more of a concept, and an app will have multiple run loops running at once.
The docs list the priority order of the queues as:
Ember.run.queues // => [“sync”, “actions”, “routerTransitions”, “render”, “afterRender”, “destroy”]
The docs also give a short explanation of each queue:
- The sync queue contains binding synchronization jobs.
- The actions queue is the general work queue and will typically contain scheduled tasks, e.g. promises.
- The routerTransitions queue contains transition jobs in the router.
- The render queue contains jobs meant for rendering; these will typically update the DOM.
- The afterRender queue contains jobs meant to be run after all previously scheduled render tasks are complete. This is often good for 3rd-party DOM manipulation libraries, which should only be run after an entire tree of DOM has been updated.
- The destroy queue contains jobs to finish the teardown of objects other jobs have scheduled to destroy.
My short interpretation of the job-scheduling "algorithm" is that the Run Loop executes the highest-priority jobs first, based on queue. These jobs may add work to other queues, and the Run Loop will loop back to the highest-priority jobs until all jobs are completed.
I really love Katherine Tornwall’s explanation of the Ember Run Loop! It has a wonderful in-depth explanation of each queue, and I used some of her descriptions and examples to help with this illustration of a run loop.
Flowchart: The Ember Run Loop, Illustrated. Link to plaintext version of chart.
The inner details of each queue are interesting, but the most important detail is that events trigger a run loop and may place various asynchronous work into different queues. For example, a mouseDown event could start a run loop, and other work associated with it would be placed in the appropriate queues. Ember ensures that work related to data synchronization happens before any rendering. If data synchronization happened after rendering, it might change the rendering of templates that are bound to that data, and more expensive re-rendering would be required!
For an interactive demo, check out Machty’s Run Loop visualization or the simple example in the Ember docs.
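The scheduling behavior described above can also be sketched as a toy model. This is an illustration of the priority idea only, not Ember's actual implementation:

```javascript
// A toy model of a run loop: jobs are scheduled onto named queues, and
// flush() always drains the highest-priority non-empty queue first, looping
// back to the top whenever a job schedules more work. Not Ember's real code.
const QUEUE_ORDER = ['sync', 'actions', 'routerTransitions', 'render', 'afterRender', 'destroy'];

function createRunLoop() {
  const queues = Object.fromEntries(QUEUE_ORDER.map((name) => [name, []]));
  return {
    schedule(queueName, job) {
      queues[queueName].push(job);
    },
    flush() {
      const executed = [];
      let queue = QUEUE_ORDER.find((name) => queues[name].length > 0);
      while (queue) {
        const job = queues[queue].shift();
        executed.push(queue);
        job();
        // restart from the highest-priority queue, since the job
        // may have scheduled more work onto an earlier queue
        queue = QUEUE_ORDER.find((name) => queues[name].length > 0);
      }
      return executed;
    },
  };
}

const loop = createRunLoop();
loop.schedule('render', () => {
  loop.schedule('sync', () => {});
  loop.schedule('render', () => {});
});
console.log(loop.flush()); // → [ 'render', 'sync', 'render' ]
```

Notice how the sync job, scheduled mid-flush, runs before the remaining render job: data synchronization jumps the queue ahead of further rendering, exactly the property described above.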
Ember.run.later
The way my service took advantage of the run loop was Ember.run.later, one of the many Ember.run APIs. Ember.run.later has an advantage over setTimeout and other alternative timers: it lives in the "Ember World," so it can take advantage of all the run loop has to offer (efficiency, organization). Ember.run.later also respects Ember's internal queue of timers (which can't be said of normal JavaScript timers).
If we look back at my component, we see that I call Ember.run.later recursively in one of the methods:
// ...
monitorTimeWithEmberRun() {
  const dateNow = Date.now();
  // the component renders the millisecondsElapsed
  this.set('millisecondsElapsed', dateNow - this.get('initialTime'));
  Ember.run.later(() => this.monitorTimeWithEmberRun(), 250);
},
// ...
Ultimately, as I’d discover later, it was these recursive calls to
Ember.run.laterthat caused the tests to timeout.
So why did my tests hang?
I had a hunch that Ember.run.later was causing my problem, since replacing it with setTimeout stopped the tests from hanging (more on the drawbacks of that later). As I googled around, I realized that my tests failed for the same reason that makes Ember.run.later awesome: because there was always work scheduled for a future run loop.
How did I figure that out? I wasn't sure what the problem was at first, so after some searching around, I checked the source. In the source code, we can see that the wait() test helper checks to make sure all run loops are completed and there are no queued timers.
Image: Screenshot of source code where wait() checks for run loops or scheduled timers. (Source)
But why? Initially, it seems inconvenient. As it turns out, Ember tests want to wait for all asynchronous work to complete before they finish. Internally, this means that Ember's wait() helper checks to make sure that all run loops are completed and there are no more queued timers; if this check fails, clearly there's still work to be done. Since I was calling Ember.run.later recursively, there was always either a job in the run loop or a scheduled timer, so the tests would never finish.
Flowchart: Recursive Ember.run.later. Link to plaintext version of chart.
A Feature, Not a Bug.
Waiting for all work to finish is fundamental to the way that Ember tests asynchronous code. Of course, it’s not the only way to test asynchronous code. For example, capybara uses a maximum wait time for asynchronous calls and automatically ends tests after that time has passed.
That can lead to problems of its own. For example, suppose we click to close a modal, and want to assert that the modal is closed. In capybara, when the time runs out, if the asynchronous work of closing the modal happens after the time limit, the test will end and fail. On the other hand, Ember can ensure that the asynchronous work associated with closing the modal finishes before calling assert.
What next? Another Error
Now that I understood the bug, I needed to figure out a solution. I bounced around a few ideas, especially at the beginning. Even though I had wanted to use Ember.run.later, my initial reaction was to try going back to setTimeout:
// ...
monitorTimeWithSetTimeout() {
  const dateNow = Date.now();
  // the component renders the millisecondsElapsed
  this.set('millisecondsElapsed', dateNow - this.get('initialTime'));
  setTimeout(() => this.monitorTimeWithSetTimeout(), 250);
},
// ...
Running the acceptance tests again, I got this infamous error:
Assertion failed: You have turned on testing mode, which disabled the run-loop’s autorun. You will need to wrap any code with asynchronous side-effects in an Ember.run
Why does that happen? Well, I like to think there are two worlds in your Ember application: the Ember World, and the World Outside Ember. The Ember World uses run loops, and asynchronous work in the World Outside Ember (websockets, ajax calls, etc.) should be wrapped in an Ember.run so that the run loop can handle side effects correctly.
In fact, a lot of things can have asynchronous side effects, even calling set() on an Ember object property bound to a template. Ember smartly saves us by wrapping our code in an "Autorun", basically an Ember.run, which runs in development and production, so that the side effects occur in a run loop.
However, Autoruns are disabled in testing. The Ember docs list several reasons for this:
Autoruns are Ember's way of not punishing you in production if you forget to open a run loop before scheduling callbacks on it. In testing, these would resolve too early and give erroneous test failures which are difficult to find. Disabling autoruns helps you identify these scenarios, and helps both your testing and your application!
This means that if you try to use setTimeout, make ajax calls, or even set properties which have asynchronous side effects, you'll get an error in testing. That way, you can catch potentially unpredictable side effects before they happen in production.
What’s the solution?
One commenter calls the problem with recursive Ember.run.later a "hurtful issue," and I can see why. There is no perfectly elegant, concise solution to this problem. The right solution requires thinking deeply about your own application and testing.
One suggestion that appears frequently on the Ember forums is to wrap the callback in an Ember.run:
monitorTimeWithSetTimeoutInRun() {
  const dateNow = Date.now();
  // the component renders the millisecondsElapsed
  this.set('millisecondsElapsed', dateNow - this.get('initialTime'));
  setTimeout(() => Ember.run(() => this.monitorTimeWithSetTimeoutInRun()), 250);
},
The advantage of this solution is that in testing, the timer functions the same way it would in production.
The disadvantage of this solution is that
setTimeout still lives in a world outside of Ember, and as mentioned previously, won’t respect an internal timer queue. In my project, I found this to be more unpredictable than I would have liked. Sometimes, due to run loop execution, the next
setTimeout() would execute after the element that was supposed to stop it was destroyed. If you're doing something like
if (!this.get('isDestroyed')), you're probably encountering an error like this.
A Better Solution
Since my code dealt with a lot of application-wide state, I really wanted to take advantage of the Ember Run Loop.
I turned to the Square #ember Slack channel, and immediately someone suggested I check out rwjblue’s Ember Lifeline. Thank you, slack channel hero. Ember Lifeline had an innovative approach in the
pollTask function — rather than hackily incorporating
setTimeout,
pollTask handles work differently in testing versus development/production. In development/production,
Ember.run.later is called recursively. But in testing, the next asynchronous
run is saved, and you’re able to manually
tick forward the poller.
Since I was working in an app that doesn’t yet support using Ember add-ons, and I had no need for the whole Ember Lifeline library, I just incorporated a few key ideas into my own app. I can manually tick the timer and control exactly what work I’m doing. My tests finally didn’t hang, nothing broke, and I’d found a satisfying solution to my all my errors.
import Ember from 'ember';

// Used for testing
let _asyncTaskToDo = null;

export function runNextAsyncTask() {
  // Used in testing to manually call the next recursive call
  _asyncTaskToDo();
}

export default Ember.Component.extend({
  millisecondsElapsed: 0,

  didInsertElement(...args) {
    this._super(...args);
    this.set('initialTime', Date.now());
    Ember.run.later(() => this.monitorTimeAndSaveCalls(), 250);
  },

  monitorTimeAndSaveCalls() {
    const dateNow = Date.now();
    // component renders the millisecondsElapsed
    this.set('millisecondsElapsed', dateNow - this.get('initialTime'));
    this.queueAsyncTask(() => {
      Ember.run.later(() => this.monitorTimeAndSaveCalls(), 250);
    });
  },

  queueAsyncTask(taskToDo) {
    // if testing, save next work to do
    // so that we don't block the run loop
    if (Ember.testing) {
      _asyncTaskToDo = taskToDo;
    } else {
      taskToDo();
    }
  },
});
For me, it didn’t matter that the timer wasn’t “actually” running in my tests. I cared more that the functions executed on each tick were monitoring correctly and making API calls at the correct frequency. This had the additional advantage that my timer wouldn’t end up running in the background of anyone else’s tests. There’s understandable hesitation in running different code in testing than in production. But the thing about projects that rely on timing is that it’s nearly impossible for tests to be the same as production. Even if you’re using
setTimeout you’ll often end up using Sinon’s fake timers—so your code already won’t execute exactly as it would in production. Writing concise, testable code that doesn’t actually rely on any of the timing means higher test coverage and makes time less of an issue.
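The queue-the-next-tick pattern described here is language-agnostic. As an illustration only (plain Python standing in for the Ember component, with a `TESTING` flag playing the role of `Ember.testing` — all names here are hypothetical), the core idea fits in a few lines:

```python
TESTING = True
_next_task = None

def run_next_async_task():
    # Test-only helper: manually advance the poller by one tick.
    _next_task()

def queue_async_task(task, schedule):
    global _next_task
    if TESTING:
        _next_task = task      # save it; the test ticks manually
    else:
        schedule(task)         # production: hand it to a real timer

calls = []

def poll():
    calls.append("tick")
    # In production, `schedule` would be a real timer (threading.Timer,
    # Ember.run.later, ...); immediate execution here is just for the demo.
    queue_async_task(poll, schedule=lambda t: t())

poll()                   # first tick runs synchronously
run_next_async_task()    # the "test" advances the poller manually
run_next_async_task()
print(calls)             # ['tick', 'tick', 'tick']
```

Because nothing is ever actually scheduled in test mode, the poller can never leak into another test's run.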
It took awhile for me to find the right solution for my project, and while it worked for me, it might not be the right choice for everyone. I’m interested to see what sort of different solutions the Ember community comes up with in the future as they deal with this problem!
Takeaways
I learned a lot this summer about Ember, but also how to solve problems in general. I can’t capture everything I learned in a few bullet points, but here goes:
Ember thoughts:
Wrap your asynchronous code in
Ember.run. You will save yourself so many strange, unpredictable test failures.
Even if you don’t think you’ll interact with the Run Loop directly, it’s worth learning about. I think it’s unlikely that in a real world application you won’t have some sort of asynchronous work that lives outside of the Ember World.
Ember can be very powerful and efficient if you let it. Take advantage of all it has to offer.
Other thoughts:
You can look at other project’s source code for inspiration, even if you can’t use it directly. Even though I couldn’t use Ember Lifeline, it helped me find an elegant solution for an ugly problem.
On that note, source code can be the ultimate source of truth. There were many times where other explanations of the Run Loop were outdated or vague. Reading backburner.js was a gamechanger. Learning to read and understand source code was a huge lesson from the summer.
The solutions you read online aren’t always the best option. I read lots of examples of pollers that had complex polling objects or used
setTimeout()or
setInterval(). While those options worked for some people, something simpler and Ember-esque was better for me. It was worth the investment of writing something special for my project.
Learn how other people learn. Watching other team members solve problems taught me so much: different ways of developing, of debugging, of googling, of using keyboard shortcuts — even watching videos on 1.5x speed to get through them faster.
But even the smartest people on your team won’t know everything and can’t answer all of your questions. So if you do spend the time to dig deep into a problem, make sure to share that knowledge — whether it’s in a presentation, an email, or a blogpost (10/10 would recommend).
Kudos
This summer was amazing. A huge shout-out to my team for encouraging me and supporting me, in my work and in this blogpost. And special special thanks to George, for always reminding me how important this project is. Marie Chatfield, a mentor, inspiration, and Ember queen. And Lenny, whose incredible knowledge is only outshined by his ability to share it, and who sat with me for countless hours debugging, reading through backburner.js, and discussing my midnight slack rants. Overall, thank you to Square for being such a wonderful, inclusive, smart place.
Created on 2011-10-20 02:05 by ncoghlan, last changed 2012-02-14 03:22 by ncoghlan.
I needed a depth-limited, filtered search of a directory tree recently and came up with the following wrapper around os.walk that brings in a few niceties like glob-style filtering, depth limiting and symlink traversal that is safe from infinite loops. It also emits a named tuple rather than the bare tuple emitted by os.walk:
I think this would make a nice addition to 3.3 as shutil.filter_walk, but it need tests, documentation and reformatting as a patch before it can go anywhere.
During the discussion about adding a chowntree function to shutil ( and ), Victor suggested this:
> I don't like the idea of a recursive flag. I would prefer a "map-like" function to "apply" a
> function on all files of a directory. Something like shutil.apply_recursive(shutil.chown)...
> ... maybe with options to choose between deep-first search and breadth-first search, filter
> (filenames, file size, files only, directories only, other attributes?), directory before
> files (may be need for chmod(0o000)), etc.
Such a function removes the need for copytree, rmtree, dosomethingelsetree, etc. When I first read this feature request I thought it was the same idea, but after reading it again I see that you propose a function that walks and returns, not a function that walks and applies a callback. Both use cases are important, so an apply_tree function could be implemented on top of your filter_walk (I’ll open another report if we reach agreement on the idea of these two functions working together).
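[Editorial aside] The "walk and apply a callback" idea can be sketched on top of os.walk in a few lines. The helper name and options below are hypothetical — this is not part of any shutil proposal, just an illustration of the shape such a function might take:

```python
import os

def apply_tree(base_dir, func, files_only=True):
    # Hypothetical helper: call func on every path under base_dir.
    # A real shutil version would add the filtering, depth-first vs
    # breadth-first, and directories-before-files options discussed here.
    for dirpath, subdirs, files in os.walk(base_dir):
        if not files_only:
            func(dirpath)
        for fname in files:
            func(os.path.join(dirpath, fname))
```

For example, `apply_tree(base, print)` would print every file path under `base`.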
s/and /and #13033/
I should probably update that posted recipe to my latest version (which adds "excluded_files" and "excluded_dirs" parameters).
However, since I've been dealing with remote filesystems where os.listdir() and os.stat() calls from the local machine aren't possible lately, I also think we may need to reconsider how this is structured and look at the idea of building a more effective pipeline model that permits more efficient modes of interaction.
Let's take 'os.walk' as the base primitive - the basis of the pipeline will always be an iterator that produces 3-tuples of a base name, a list of subdirectories and a list of files. The filtering pipeline elements will require that the underlying walk include "topdown=True" and pay attention to changes in the subdirectory list.
Then consider the following possible pipeline elements:
def filter_dirs(walk_iter, should_include, should_exclude):
    for dirpath, subdirs, files in walk_iter:
        subdirs[:] = [subdir for subdir in subdirs
                      if should_include(subdir) and not should_exclude(subdir)]
        yield dirpath, subdirs, files

def filter_files(walk_iter, should_include, should_exclude):
    for dirpath, subdirs, files in walk_iter:
        files[:] = [fname for fname in files
                    if should_include(fname) and not should_exclude(fname)]
        yield dirpath, subdirs, files
def limit_depth(walk_iter, depth):
    if depth < 0:
        msg = "Depth limit greater than 0 ({!r} provided)"
        raise ValueError(msg.format(depth))
    sep = os.sep
    for top, subdirs, files in walk_iter:
        yield top, subdirs, files
        initial_depth = top.count(sep)
        if depth == 0:
            subdirs[:] = []
        break
    for dirpath, subdirs, files in walk_iter:
        yield dirpath, subdirs, files
        current_depth = dirpath.count(sep) - initial_depth
        if current_depth >= depth:
            subdirs[:] = []
def detect_symlink_loops(walk_iter, onloop=None):
    if onloop is None:
        def onloop(path):
            msg = "Symlink {!r} refers to a parent directory, skipping\n"
            sys.stderr.write(msg.format(path))
            sys.stderr.flush()
    sep = os.sep
    for top, subdirs, files in walk_iter:
        yield top, subdirs, files
        real_top = os.path.abspath(os.path.realpath(top))
        break
    for dirpath, subdirs, files in walk_iter:
        if os.path.islink(dirpath):
            # We just descended into a directory via a symbolic link
            # Check if we're referring to a directory that is
            # a parent of our nominal directory
            relative = os.path.relpath(dirpath, top)
            nominal_path = os.path.join(real_top, relative)
            real_path = os.path.abspath(os.path.realpath(dirpath))
            path_fragments = zip(nominal_path.split(sep), real_path.split(sep))
            for nominal, real in path_fragments:
                if nominal != real:
                    break
            else:
                if not onloop(dirpath):
                    subdirs[:] = []
                    continue
        yield dirpath, subdirs, files
And pipeline terminators:
def walk_dirs(walk_iter):
    for dirpath, subdirs, files in walk_iter:
        yield dirpath

def walk_files(walk_iter):
    for dirpath, subdirs, files in walk_iter:
        for fname in files:
            yield os.path.join(dirpath, fname)

def walk_all(walk_iter):
    for dirpath, subdirs, files in walk_iter:
        yield dirpath
        for fname in files:
            yield os.path.join(dirpath, fname)
The pipeline terminators could then be combined with ordinary iterable consumers like comprehensions:
base_walk = detect_symlink_loops(os.walk(os.path.abspath(base_dir), followlinks=True))
depth_limited_walk = limit_depth(base_walk, 2)
filtered_walk = filter_dirs(filter_files(depth_limited_walk, "*.py"), "*.pyp")
tree_info = {path: os.stat(path) for path in walk_all(filtered_walk)}
This needs more thought - pypi package coming soon :)
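To make the pipeline idea concrete, here is a self-contained, runnable reduction. It keeps only glob-based file filtering plus the walk_files terminator (the real walkdir package is much richer), using fnmatch in place of the abstract should_include/should_exclude callables:

```python
import fnmatch
import os
import tempfile

def filter_files(walk_iter, *patterns):
    # Keep only files matching any of the glob patterns (sketch).
    for dirpath, subdirs, files in walk_iter:
        files[:] = [f for f in files
                    if any(fnmatch.fnmatch(f, p) for p in patterns)]
        yield dirpath, subdirs, files

def walk_files(walk_iter):
    # Pipeline terminator: yield the full path of every surviving file.
    for dirpath, subdirs, files in walk_iter:
        for fname in files:
            yield os.path.join(dirpath, fname)

# Demonstration on a throwaway tree
base = tempfile.mkdtemp()
for name in ("a.py", "b.txt", "c.py"):
    open(os.path.join(base, name), "w").close()

pipeline = walk_files(filter_files(os.walk(base), "*.py"))
print(sorted(os.path.basename(p) for p in pipeline))  # ['a.py', 'c.py']
```

Because each stage is a generator over the same 3-tuples, stages compose freely and the underlying os.walk stays lazy.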
Nick, perhaps you want to have a look at
(it doesn't have a filter_walk equivalent but it could grow one :-))
That's one of the nicer attempts I've seen at an object-oriented path library, but I have a core problem with OOP path APIs, and it relates to the Unicode encoding/decoding problem: the ultimate purpose of path objects is almost always to either pass them to the OS, or else to another application. That exchange will almost *always* happen as a plain string.
So when your approach is based on a more sophisticated type internally, you end up having to be very careful about all of your system boundaries, making sure that "paths" are correctly being turned into "Paths".
However, one of my hopes for iterwalk will be that it *won't* care if the underlying walk iterator produces Path objects instead of ordinary strings, so long as those objects can be passed to fnmatch, os, etc and work correctly.
Initial version available at:
I'll get it published to PyPI once the test suite is in a slightly better state (no filesystem based tests as yet)..
I changed the package name to walkdir:
And walkdir is now a published package:
My plan for this issue now is to maintain walkdir as a standalone package for 2.7 and 3.2, but still add the functionality to shutil for 3.3+.
However, I'll gather feedback on the PyPI module for a while before making any changes to the 3.3 repo.
With the release of 0.3, I'm pretty happy with the WalkDir design (previous versions coerced the output values to ordinary 3-tuples, now it will pass along whatever the underlying iterable produces without changing the type at all).
(speaking of which, I'd just mention I've published pathlib as a standalone package: )
Not.
Oh, forgot to mention, the term "symlink loop" itself is ambiguous.
There are direct symlink loops: an example is a "some_dir/linky" link pointing to "../some_dir/linky". These will fail when resolving them.
There are indirect symlink loops: "some_dir/linky" pointing to "../some_dir". Resolving them works fine, but recursively walking them produces an infinite recursion.
Lots of fun :)
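Both flavours are easy to reproduce. A sketch (POSIX-only, since it creates a real symlink; the is_loop helper is illustrative, not walkdir's implementation) of the indirect case, plus the realpath-vs-ancestors check a walker could use:

```python
import os
import tempfile

base = tempfile.mkdtemp()
some_dir = os.path.join(base, "some_dir")
os.mkdir(some_dir)
# Indirect loop: some_dir/linky points back at some_dir itself
os.symlink("../some_dir", os.path.join(some_dir, "linky"))
link = os.path.join(some_dir, "linky")

# Resolving works fine...
assert os.path.realpath(link) == os.path.realpath(some_dir)

# ...which is exactly the check a walker can use: if a symlinked
# directory resolves to one of its own ancestors, walking it recurses
# forever, so skip it.
def is_loop(path):
    real = os.path.realpath(path)
    parent = os.path.dirname(os.path.abspath(path))
    while parent not in (os.sep, ""):
        if os.path.realpath(parent) == real:
            return True
        parent = os.path.dirname(parent)
    return False

print(is_loop(link))  # True
```

The direct case ("linky" pointing at "../some_dir/linky") fails earlier, inside realpath/stat itself, so it never reaches this check.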
WalkDir attempts to handle symlink loops, but the test suite doesn't currently ensure that that handling works as expected (I did some initial manual tests and haven't updated it since, though).
It's... not trivial:
This blog will discuss features about .NET, both windows and web development.
LINQ to SQL generates business objects that work similarly to datasets. There are upsides and downsides to that, but overall, I like the approach. LINQ to SQL creates these classes all in the same namespace, so if you have the same object in multiple LINQ-to-SQL class designers in the same namespace, you will get a naming collision error. The best approach may be to create one massive data model for your entire database. Though this may seem like a problem, LINQ can solve this by using a deferred loading approach.
This means that table data loads when you access it. However, all the data loads into that table (but not into respective child tables unless you set it up to load immediately) because LINQ needs the entire data set to query against it. This often puts people on edge about the usefulness of it from the get-go.
However, it's nice because LINQ manages its state for you, and manages all the relationships between objects. It also can have individual properties be lazy loaded (such as loading raw data). And, the designer manages all the relationships between entities, and each relationship is setup as a parent/child collection approach, so the parent object can have immediate (yet lazy loaded) access to its children.
All in all, it is a good idea, though for those who love to have control over their data model, it may not be the most ideal. Something like LINQ to Entities, which should be released this month or the beginning of February, may be a better approach.
Sometimes there needs to be logic that someone on the outside using your object can do. For instance, there is a piece of calculation that only the subscriber of the class can use. Instead of using inheritance and the template method approach, it's also possible to perform some of this through a delegate. Simply creating a delegate as such:
public delegate float CalculateValueHandler(Account account, MortgageRate rate);
Allows the code to do this:
public void Calculate(MortgageRate rate, CalculateValueHandler handler)
{
    Accounts accounts = AccountService.LoadAccounts();
    foreach (Account account in accounts)
        account.Estimate = handler(account, rate);
    this.SaveChanges();
}
This may not be the most practical example, but delegates can be more useful than inheritance in certain situations.
Extension methods are really cool. The following is an example of one, and I'll discuss it in detail:
public static class StringExtensions
{
    public static string GetFormattedValue(this string data)
    {
        // Do some formatting
    }
}
In the above example, you notice similarities to a normal static class; however, this can be applied to an instance of a string, as shown below:
string value = " this is my sentence. ";
string formattedText = value.GetFormattedValue();
The method defined in the static method above can be used in the string instance below. Let's go into detail. An extension class in this situation must be a static class; that is a requirement. Secondly, the "this" declaration as the first parameter notes the type to extend (a string in this instance). It, in reality, can be any instance class in the .NET framework or any custom library (no limitations except for non-static classes only). Next, the first parameter of this static extension method is the type to extend, but the rest of the parameters are the parameters for the method. So if the method was:
public static string GetFormattedValue(this string data, bool trim, bool punctuate, bool camelCase)
{
    // Do some formatting
}
It would be invoked as such:
string formattedText = value.GetFormattedValue(true, false, false);
Very handy to have. And very easy to create. You will see this a lot in the .NET framework 3.5; the lambda expression methods (Where, Contains, First, etc.) are often static extensions defined on the list, and there are many other places throughout.
When querying for a single item, a query still brings back an enumerable list when all you wanted was one value. You can use the First() method to return the first item in the list, thus getting the item from the query as such:
var data = (from o in Orders where o.OrderID == id select o.Name).First();
However, it may cause problems if no records exist in the query, so it may be more beneficial to do:
var data = (from o in Orders where o.OrderID == id select o.Name);
if (data.Count() == 0)
    return string.Empty;
else
    return data.First();
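As a cross-language aside (Python here, not part of the C# sample): the same "first item or fallback" guard collapses into a single expression with a default, which is also what LINQ's FirstOrDefault() gives you in C#. The data and helper below are made up for illustration:

```python
orders = [{"id": 1, "name": "Widget"}, {"id": 2, "name": "Gadget"}]

def order_name(order_id, default=""):
    # next() with a default plays the role of C#'s FirstOrDefault():
    # no exception is raised when the query matches nothing.
    return next((o["name"] for o in orders if o["id"] == order_id), default)

print(order_name(2))   # 'Gadget'
print(order_name(99))  # ''
```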
Oftentimes, when working with an API, it is helpful to utilize the best of generics; after all, generics automatically strong-type a class and avoid boxing/unboxing issues. There can be some downfalls to this approach, however. For instance, if you have classes called Menu, Document, and Window, all of which inherit from the same base class (say, UIObject), the generic strongly-typed collection looks like this:
public class UIObjectCollection<T> : List<T> where T:UIObject { }
However, one of the benefits of polymorphism is that you can work with the base class interface without having to worry about the specifics. For instance, usually Menu, Document, and Window have derived collection classes as such:
public class MenuCollection : UIObjectCollection<Menu> { }
public class DocumentCollection : UIObjectCollection<Document> { }
public class WindowCollection : UIObjectCollection<Window> { }
This approach is very handy, because generics strong-type the collection class without having to write any additional code. However, sometimes it helps to work in generalities, with the base class of UIObjectCollection. With generics, though, there is no plain UIObjectCollection; there is UIObjectCollection<Menu> and UIObjectCollection<Document>. This can be problematic.
The solution can be to use an interface. Define an interface as such:
public interface IObjectCollection
{
    void Add(UIObject item);
    void Remove(UIObject item);
}
Then, in the root collection, define this:
public class UIObjectCollection<T> : List<T>, IObjectCollection where T : UIObject
{
    public void Add(T item) { .. }
    public void Remove(T item) { .. }

    void IObjectCollection.Add(UIObject item)
    {
        this.Add((T)item);
    }

    void IObjectCollection.Remove(UIObject item)
    {
        this.Remove((T)item);
    }
}
The interface simply reuses the existing methods by casting the parameter. T is a UIObject anyway, and can be downcast. This approach recovers some of the dynamic typing flexibility that's lacking in .NET generics. The interface lets you cast MenuCollection and DocumentCollection to IObjectCollection, so you don't have to know the generic type off-hand.
Recently, when adding WPF features to a windows class library I created, I ran into an issue where there was an error with the InitializeComponent method of the WPF user control in Visual Studio 2008. I couldn't figure out why this was an error, as the use of that method hasn't changed with WPF. After looking around, I found a solution. Essentially, an import statement needs to be added to the <Project> element, as described in detail in that blog entry.
|
[Solved] Using OpenTK at Linux?
Posted Tuesday, 30 June, 2009 - 19:27 by CheatCat in
I tried to run my game on Linux, but I only get this error:
"The type or namespace name `OpenTK' could not be found. Are you missing a using directive or an assembly reference? (CS0246)"
And I have the OpenTK dll in the bin\Debug directory. Should I make a new dll in Linux? (I use the one I compiled in Windows)
I have another question also: how do I make a Linux executable file?
Re: Using OpenTK at Linux?
This is a compilation error, not a runtime error. Are you using MonoDevelop? Have you added OpenTK.dll to your project references? Have you added OpenTK.dll.config to your project and enabled the "copy to output directory" option?
To answer your question, OpenTK works out of the box on Linux, you don't have to recompile or anything.
Edit: 'gmcs' is the C# compiler for Mono. It creates executables by default.
Install MonoDevelop to get a full-featured IDE. MonoDevelop uses Visual Studio projects, so you can move from Windows <-> Linux painlessly.
On Ubuntu:
Type this line on a terminal to get a full-feature environment for .Net development.
Re: Using OpenTK at Linux?
Yes, I use MonoDevelop and I do what you say. I try to run the QuickStart program but it ends quickly after running it. Then I tested Debug mode and got a black window. There is a yellow marker that points at VSync = VSyncMode.On; in the function:
But I see no errors... I removed the VSync line and the yellow marker points at the } instead. I think there is something wrong with public Game() : base(800, 600, GraphicsMode.Default, "OpenTK Quick Start Sample").
Re: Using OpenTK at Linux?
Need more information:
Re: Using OpenTK at Linux?
The version of OpenTK is.. 0.9.7, I think. My OS is Kubuntu 9.04 x86.
The output:
Re: Using OpenTK at Linux?
It seems that OpenTK.dll.config is missing. This file should be in the same directory as OpenTK.dll, because otherwise OpenTK won't work on Linux/Mac OS X.
Re: Using OpenTK at Linux?
Okey, I get it! :D But how do I start my program stand-alone? If I double-click the exe file, it opens with Wine and doesn't run at all (but it runs in Mono in debug mode)!
Re: Using OpenTK at Linux?
Right click and select 'open with mono'. If there's no such option, use 'open with other application' and type 'mono' as the application.
You could also create an executable script that simply calls 'mono App.exe' (where App.exe is the name of your application).
Re: Using OpenTK at Linux?
But then you must have Mono on every Linux computer you want to play the game on?
Re: Using OpenTK at Linux?
Obviously, yes. Developing for the .Net framework means you are dependent on .Net on Windows and Mono on Linux / Mac OS X.
Re: Using OpenTK at Linux?
Oh, thank you! :D
|
does anyone know of a place in Virginia or a website that sells right hand drive vehicles? how did you guys who have the cars w/the steering wheels on the right who live in USA get yours?
import
and why on earth would you want a right hand drive if you live in the US?
I saw some lefties when I was in Japan last. Go figure.
Motorex imports Skylines. Kinda pricey, but nonetheless right hand drive and legal.
You can probably go to government auctions and pick up postal vehicles that're right hand drive. Not the ice cream truck type but the Subaru Legacys and Saturn SL1s. And yes, Japan has both right hand drive and left hand drive. I've seen a few lefties in Japan the last time I went. When I was in Hong Kong, I saw a right hand drive Explorer, Taurus, and a Cadillac STS.
Apparently it's "cool" to have a right hand drive in the US. I have a friend from Hong Kong and he says it's cool to drive lefties. Go figure.
Most of the RHD vehicles are imported, so you'd either:
A) Pay big bucks to have one imported.
B) Buy a pre-owned RHD import from somebody.
It depends on what kinda car you're looking for. Some people can swap LHD for RHD and vice versa because the parts are available. It's a lot different when you're trying to get RHD in a domestic. You'd have to have a lot of money and have it custom done or something.
Does your friend from Hong Kong drive lefties in Hong Kong? It's just that my cousin from there says it's illegal...
Quote:
Originally Posted by noodles
Driving a car with the steering on the opposite side to the norm might be cool, but bear in mind the huge engineering obstacles and compromises these might offer. For example, I know the right-hand drive Peugeot 206 still has the wipers pushing rain to the wrong side and hindering the driver! But that's just being cheapskate... And right-hand drive French cars have notoriously poor driving positions with pedals never quite lining up centrally... Might be different today though. Also, overtaking other vehicles could be tricky, and Euro truckers have this problem in the UK... but since I understand you're allowed to 'undertake' and overtake in the States anyway it wouldn't really matter.
Anyway, you'd be a bit buggered at drive-thru's!! :lol:
Anyway, you'd be a bit buggered at drive-thru's!!
Ask the GF to pay :D
He says it's cool... whether it's legal or not. ;)
Seriously though, it's only cool to have a right hand drive vehicle if you're a postal worker or if you've got a very nice import that you're trying to preserve.
|
Asked by:
fake C++ for Windows 8 ?
General discussion
So my question is why bother saying Windows 8 works with C++?
Neat marketing but VERY misleading. Why bother? Just say Windows 8 C# or code your own interface. Frustrating and disappointing; when will MS learn we don't code in C++ because it's safe or simple, but because it delivers the best performance and flexibility of any language with the lowest possible overhead.
Anyway thumbs down on the Windows 8 slide.
bfxtech.com
Friday, September 16, 2011 2:25 AM
- Moved by Jesse Jiang Monday, September 19, 2011 6:00 AM (From:Visual C++ Language)
- Moved by Steven - Support EngineerMicrosoft Support Wednesday, September 21, 2011 5:16 PM (From:Windows Developer Preview: General OS questions )
All replies
- btasa wrote:
> ++.

Actually, the situation is much more complicated than that. This is NOT C++ with the /cli option. Instead, Microsoft has added the CLI syntax extensions to the native compiler.

> So my question is why bother saying Windows 8 works with C++?

Well, you're talking about Metro and WinRT. Windows 8, under the hood, is still Windows NT. All the native apps that used to work, will still work. This new stuff is all to interface with Metro and WinRT. Those modules are, apparently, managed code. There has to be magic to transition from native code to managed code. Rather than force you to write a /cli module to do that transition, the compiler now generates that for you automatically.

I haven't decided yet whether I love this or hate this.

> Frustrating and disappointing, when will MS learn we dont code in C++
> because its safe or simple but because it delivers the best performance
> and flexibility of any language with the lowest level possible overhead.

Well, if you want the lowest possible overhead, you won't be writing Metro applications. If you want to write Metro applications, now you can do that from a native C++ app, without worrying about manually partitioning your code into a native part and a CLR part.

If you want to be disappointed that Metro is a managed technology, then I think you haven't been paying attention.

--
Tim Roberts, timr@probo.com
Providenza & Boekelheide, Inc.
Tim Roberts, DDK MVP, Providenza & Boekelheide, Inc.
Friday, September 16, 2011 4:40 AM
Actually if you follow the developer forum for native/WinRT you will see that it is really native COM with C++/CLI like syntax (instead of CLI references, the ^s are actually COM smart pointers). As to why they did that, I have no idea.
Personally I loathe programming COM. I was hoping for a REAL C++ programming library, not COM rehashed yet again. (I suppose it might be tolerable if it actually throws exceptions, but I will scream if I ever have to write "if (FAILED(hr))" one more time.)
Friday, September 16, 2011 1:53 PM

WPF front end with all the?
bfxtech.com
Friday, September 16, 2011 2:55 PM
This is all still new stuff, of course, but I think the use of similar syntax to C++/CLI for WinRT makes it easier to use ultimately. Obviously, writing code for WinRT will make it non-standard anyway so why invent another syntax? It also makes it really apparent what the functions are being used for. I don't think of it as "fake" C++, but more like an extension of C++ since all the core language stuff is still there.
The fact that we will have so much functionality in C++ for Windows 8 is really exciting imo.
Tom
Friday, September 16, 2011 4:24 PM
> As to why they did that, I have no idea.
Well maybe because:
> I loathe programming COM
Maybe they have done that to try to please you (and me also). ;)
Not sure they SUCCEEDED(hr);
Saturday, September 17, 2011 2:42 PM
- btasa wrote:
> In the managed code world if I have 6 YUV HD DirectX surfaces and multiple DirectShow
> callbacks which feed my assembly language interpreters before I display them in
> realtime how would it fare when compared to C++?

If C# is just doing the plumbing -- gluing the parts together -- then the performance impact is negligible. What you have up there appears to be a graph doing all native code. The performance of that part will not change just because the graph happened to be created by a managed module.

> In PDC 2009 the Microsoft DirectShow presenter said that managed code is
> more than 2x as slow as C++.

That's one person's opinion, but that's not what most people have found. Please remember that "managed code" is STILL compiled to machine code before it runs. People I trust tell me that the performance is pretty close to native code. Transitions between managed and unmanaged can be expensive, because you have to take extra care to marshal parameters back and forth.

> So you can only conclude that WinRT and all its glamour will not be
> for high performance bits...

Absolutely true.

> ...and there is not a good way of making stylish front ends without
> rolling your own. (which I think is sad)

Absolutely false. You can use the fancy stuff for your front ends, and transition to native code for the hard work. Just the way Windows has always worked. People scoffed at the Windows API in the early days because it was a C interface. "Real programmers do their work in assembler!" You're singing the same song.

--
Tim Roberts, timr@probo.com
Providenza & Boekelheide, Inc.
Tim Roberts, DDK MVP, Providenza & Boekelheide, Inc.
Saturday, September 17, 2011 7:43 PM
Hello,
I think your issue should be raised in the Windows Developer Preview: General OS questions..
Monday, September 19, 2011 6:00 AM
I am a C# developer at heart but I'm really excited to see what you can do with C++/WinRT. Like what was mentioned earlier, the WinRT APIs do get compiled to native code.
Also, it IS possible to write a metro application without the C++ extensions (see here). Just know that you will be writing 10x more code (if not more).
Herb has a great video here that explains what the extensions do 'behind the scenes'.
For example, the Hat (^) is nothing more than a shared_ptr and the exceptions are nothing more than wrappers around HRESULTs.
Monday, September 19, 2011 4:19 PM
After watching some of the channel9 videos made of the Build conference (thanks MS), I am trying very hard to get my head around all of this. At first I was certain this was managed C++. Now after reviewing the video again it looks like it might not be. Here is where the confusion starts.
The new MS C++ takes some C++11, adds in new keywords like ref, namespace, delegates and events and partial classes, but it's COM and ?? err umm some compiler twist ??
This is what I got so far. Things I don't know:
I don't know how char arrays can work with these string objects.
I don't know how assembly integrates.
How can I convert information received from the Metro interface via delegates and events to an array of structs and use pointers etc with it?
There is so much I don't know and feel so lost on.
I hope they do one day learning tours where we can ask questions with videos to watch before we go.
With all the languages of the day MS has pushed (WPF, SilverLight, ASP.NET, C#, VB.NET, HTML 5 - JavaScript) and multiple XAML implementations, it's near impossible to understand anything very well anymore.
Is that the point?
bfxtech.com
Wednesday, September 21, 2011 4:30 PM
- The language extensions for C++ are syntactic sugar that makes dealing with WinRT easier. However it isn't necessary to use those and you can write 100% standards compliant code that deals with the COM interfaces directly if you so choose.
Wednesday, September 21, 2011 4:47 PM
I want to see more about that. Like how do I deal with a string object? Or those delegates? How about those events? There is a lot more going on than just calling some functions.
bfxtech.com
Wednesday, September 21, 2011 7:09 PM
> I want to see more about that. Like how do I deal with a string object? Or those delegates? How about those events? There is a lot more going on than just calling some functions.
> bfxtech.com

There is very little documentation available right now but watch Deon Brewis' session from Build where he goes into some behind-the-covers detail on C++/WinRT.
Wednesday, September 21, 2011 7:12 PM
C++/CX is just syntactic sugar. (Think about C# and how LINQ query expressions are just syntactic sugar for LINQ query operators.) It helps you write less code, but more readable code, and be more productive. Isn't that what we all expect from a programming language? Compiled C++/CX code translates to 100% C++ code. But the beauty is you don't have to care about all that verbose code you'd have to write if you were targeting WinRT from C++, with COM wrappers, etc. So why is it so bad for you?
Microsoft MVP VC++
Wednesday, September 21, 2011 9:13 PM
@bfxtech,
#include <string>

using namespace Platform;
// initializing a String^
String^ str1 = "Test";// no need for L
String^ str2("Test");
String^ str3 = L"Test";
String^ str4(L"Test");
String^ str5 = ref new String(L"Test");
String^ str6(str1);
String^ str7 = str2;
wchar_t msg[] = L"Test";
String^ str8 = ref new String(msg);
std::wstring wstr1(L"Test");
String^ str9 = ref new String(wstr1.c_str());
String^ str10 = ref new String(wstr1.c_str(), wstr1.length());
// accessing chars into String^
auto it = str1->Begin();
wchar_t ch = it[0];
// String^ is null terminated (but can contain L'\0', just like wstring)
assert(it[str1->Length()] == L'\0');
// String^ is immutable
it[1] = L'A';// yields compile error C3892
// initializing a std::wstring from String^
std::wstring wstr2(str1->Data());
std::wstring wstr3(str1->Data(), str1->Length());
FROM:
Wednesday, September 21, 2011 10:04 PM
- Edited by Andrew7Webb Wednesday, September 21, 2011 10:05 PM
I am not saying WinRT is bad but trying to see how it all fits together and how existing complex applications can leverage or utilize it.
Like I mentioned, if I have an application that has 800MB/sec uncompressed YUV data and want to display it on a WinRT screen, is it possible?
What are the limits? Is it easy to try? Where are the gotchas? It's a natural question with anything new.
If I already programmed and tested all the business logic and tokenizing using strings stored in char arrays (STL vectors or whatever), with a new interface can I keep my logic without a huge penalty? I noticed that all strings have to be allocated with "WindowsCreateString" (not sure this is the call) or something like that. Will that be real expensive with performance?
In Windows 8 how is the task switching improved? Is it? I think they spent a lot of time to make new language extensions to make some cooler interface widgets that could have easily been placed in a library.
bfxtech.com
Thursday, September 22, 2011 5:25 PM
Can I do this?
char * temp;
temp = (char *)malloc(25);
strcpy(temp,"Test");
String^ str1 = temp; ??
bfxtech.com
Thursday, September 22, 2011 5:27 PM
> Can I do this?
>
> char * temp;
> temp = (char *)malloc(25);
> strcpy(temp,"Test");
> String^ str1 = temp; ??
>
> bfxtech.com
If you use wchar_t it would work just like that. But since you used char, you'd probably have to convert to wchar_t before passing to a String^ constructor.
Thursday, September 22, 2011 5:32 PM
Why would they not have an overload for something as basic as conversions? I mean GEEZ, look at the number of string types:
char *, wchar_t*, _bstr_t, CComBSTR, CString, basic_string, and System.String.
A suggestion to you Windows 8 folk: you need a one-stop convert class for C++, C#, etc. In comes one thing and ToXXX. They always talk about reducing the messy code but simple string conversion still looks like this.
char *orig = "Hello, World!";
cout << orig << " (char *)" << endl;
// Convert to a wchar_t*
size_t origsize = strlen(orig) + 1;
const size_t newsize = 100;
size_t convertedChars = 0;
wchar_t wcstring[newsize];
mbstowcs_s(&convertedChars, wcstring, origsize, orig, _TRUNCATE);
wcscat_s(wcstring, L" (wchar_t *)");
wcout << wcstring << endl;
bfxtech.com
Thursday, September 22, 2011 7:13
Thursday, September 22, 2011 9:18
So far no one's profiled performance differences between C++/CX generated assembly code vs direct-COM generated assembly code. I'd be very surprised if the C++ team haven't made sure that there won't be any performance loss in using the C++/CX syntax. And a pre-beta is probably too early to be making such comparisons anyway.
And if you are that concerned you can just use WRL (although at the moment there is zero documentation on that).
Friday, September 23, 2011 12:27 PM
- Edited by Nishant Sivakumar Friday, September 23, 2011 12:27 PM
As to why they did that, they investigated several options when building the C++ projection - they were either extremely confusing (using "*" instead of "^" to represent references for example) or they violated low level language constraints (using "." for example). The "^" syntax was the best compromise that didn't break the language.
Having written metro style apps using the C++ projection, it takes a bit of time (an hour or so) to get used to "ref new" and "^" but then it seems really natural.
Saturday, September 24, 2011 4:32 PM
https://social.msdn.microsoft.com/Forums/en-US/62b78d93-8592-4b8e-b65d-66a8657db498/fake-c-for-windows-8-?forum=winappswithnativecode
Today we’re very happy to introduce Socket.IO P2P, the easiest way to establish a bidirectional events channel between two peers with a server fallback to provide maximum reliability.
Let’s look at the API and build a little chat application. Or check out the repository directly!
Socket.IO P2P provides an easy and reliable way to set up a WebRTC connection between peers and communicate using the socket.io-protocol.
Socket.IO is used to transport signaling data and as a fallback for clients where the WebRTC PeerConnection is not supported. Adding a simple piece of middleware to your socket.io setup enables this: no need to hand-roll your own signaling exchange or set up, deploy and scale new servers.
It only takes a few lines of code to set up the server and client.
Server:
var io = require('socket.io')(server);
var p2p = require('socket.io-p2p-server').Server;
io.use(p2p);
Client:
var P2P = require('socket.io-p2p');
var io = require('socket.io-client');
var socket = io();
var p2p = new P2P(socket);
p2p.on('peer-msg', function (data) {
  console.log('From a peer %s', data);
});
There are various options for the advanced user. Once signaling data has been exchanged an upgrade event is triggered and an optional callback is called.

var opts = { numClients: 10 }; // connect up to 10 clients at a time
var p2p = new P2P(socket, opts, function(){
  console.log('We all speak WebRTC now');
});
We will build a simple chat application, as our tradition dictates, but with P2P capabilities! In this application:
All code from this example is included in the main repository.
We first set up the client with autoUpgrade set to false so that clients can upgrade the connection themselves, and set numClients to 10 to allow up to 10 clients to connect with each other.

var opts = {autoUpgrade: false, peerOpts: {numClients: 10}};
var p2psocket = new P2P(socket, opts);
Set up the event listeners:

p2psocket.on('peer-msg', function(data) {
  // append message to list
});
p2psocket.on('go-private', function () {
  p2psocket.upgrade(); // upgrade to peerConnection
});
In this example, we want any clients connecting to the server to exchange signaling data with each other. We can use the server component as a simple middleware. Clients will connect on the root namespace.
If we wanted clients to exchange signaling data in rooms, rather than on a whole namespace, we could use the server module upon connection like this.
var server = require('http').createServer();
var p2pserver = require('socket.io-p2p-server').Server;
var io = require('socket.io')(server);
server.listen(3030);
io.use(p2pserver);
We then set up listeners to pass messages between clients and to broadcast the go-private event.

io.on('connection', function(socket) {
  socket.on('peer-msg', function(data) {
    console.log('Message from peer: %s', data);
    socket.broadcast.emit('peer-msg', data);
  });
  socket.on('go-private', function(data) {
    socket.broadcast.emit('go-private', data);
  });
});
That’s all you need: add a little markup and we are off! Here’s the demo application in action:
Thanks to Guillermo Rauch (@rauchg) for the advice, testing and patience, Harrison Harnisch (@hharnisc) for bug fixes and to Feross Aboukhadijeh (@feross) for providing the underlying WebRTC abstraction simple-peer.
Pull requests, issues, comments and general rantings are all welcome over at the GitHub repo.
Continue reading on socket.io
https://hackerfall.com/story/socketio-p2p
Revision history for Apache2-Controller

1.001.001 2014-06-11
    Was the UNIVERSAL bug fixed in the last version? It looked that way.
    Corrected some POD errors.
    t/render.t was failing because I had set the relative_uri with the uri if there was no relative_uri. That didn't make sense; it resulted in a url /foo/default/default.html. All the other tests seem to pass still. I don't know why I introduced that bug or why it appeared at the time that the test worked with that regression.
    Look for A2C in Debian!

1.001.000 2012-07-??
    Net::OpenID::Consumer & ::Server required to build in 5.12, even though the OpenID module doesn't work yet. I really need to get to figuring that out, but I don't have a use case.
    A2C_Render_Template_Path directive needed assignment of empty arrayref if not specified but used somewhere.
    Dispatch::Simple assigns the requested uri if no matching uri found, to kick back up to "not found" Apache errors.
    Logging a bad request shouldn't limit reason length (as much). It will get cut off anyway by whatever it sends it to, however Apache logging is configured.
    Render::Template now handles error status codes correctly, returning DONE in the case an error occurred and an error template was used, so Apache doesn't try to append its own html error messages to the finished html output.
    Session factored tying and will now generate a new cookie and session if the session store object referenced by the old cookie is no longer present.

1.000.111 2010-12-21
    Bad cookie eval only if directive A2C_Skip_Bogus_Cookies.

1.000.101 2010-12-21
    mkdir explicitly uses $_ for perl < 5.10. Fixes RT# 60240.
    Errors from Apache2::Cookie::Jar->new() skipped if the APR::Request::Error code is NOTOKEN. Bad cookie, buggy client. Fixes RT# 61744.

1.000.100 2009-02-02
    Fixed MANIFEST. RT# 42979.
    Switched to Module::Build to better control prereqs, build prereqs etc.
    Render::Template doesn't assign stash functions anymore, in order to reduce number of dependencies. Do this yourself by overloading render().
    Test suite uses File::Temp tempdir now so that temp directories get cleaned up properly.
    No longer tries `od` with open3(), so MS Win might work?
    A2C stash data is put in sub-hash 'a2c' of stash, so it doesn't clutter the namespace and is consistent with the same change that was made to consolidate pnotes. reference in pnotes because it contains a ref to the Request.
    Pushed Session handler now saves a top-level timestamp to force update of Apache::Session, if you set the directive flag.
    Session save handler is now its own package, concerned about closures keeping the request around. Is this a problem with the DBI rollback handler, which closes on a string var in the setup phase?
    Moved all of the internal notes/pnotes garbage under one structure $r->pnotes->{a2c} because it makes more sense to not clutter the limited notes/pnotes key namespace.

1.000.010 2008-12-2
    Detaints directive values.
    Controller doesn't double-check allowed_methods in case your dispatch wants freedom from that.
    Improvements/correction to documentation.
    Session now uses a checksum. Thanks David Ihern.
    OpenID auth module put on backburner until I rewrite it again; I wanted to get this out the door with functional session checksum and correct documentation.

1.000.001 2008-12-20
    Put CPAN upload in the right directory. Duh.

1.000.000 2008-12-20
    Session now saves only if status < 300, or if notes->{a2c_session_force_save} is set.
    OpenID! Rockin. It doesn't preserve post or get vars yet from the prior request across the openid redirect cycle, but it's good enough to release. Hurrah!
    Instead of using PerlCleanupHandler to save session and commit database, it uses PerlLogHandler while the connection is still open. Otherwise, session untying and database commits may not be processed synchronously before the next request that depends on those changes.

0.110.000 2008-08-06
    Removing the "RUN_ALL" simulated mode from response handler. It's an over-optimization, and A2C:DBI::Connector supports a PerlCleanupHandler to rollback uncommitted transactions.
    Now Controller.pm does not return DONE, but sets status and returns OK.
    Apache2::Controller::DBI::Connector works. Cool!

0.101.111 2008-07-22T18:01:00
    LAME LAME LAME I hate versions.

0.101.110 2008-07-22T18:00:00
    Oops I see, Version.pm was not in the MANIFEST, no wonder.

0.101.101 2008-07-22T02:30:00
    Version numbers are so strange.

0.4.3 2008-07-21T00:22:00
    Removed ability to limit methods. Use Apache config for that.
    A2C:SQL::Controller. Untested.
    Controller now passes @path_args as arguments to method.
    Version is now in its own package.
    Some new SQL-related packages that may or may not work.
    Universal use of Apache2::Controller::Directives - no more PerlSetVar.
    Documentation edits.

0.4.1 2008-07-16T00:16:00
    Changing version number to use 'version' package. I had the version number screwed up before. The numbers below are actually 0.4.0, 0.3.0 etc.

0.0.4 2008-07-16T00:08:00
    New dispatch method HashTree implements alternative dispatching.
    Various improvements to documentation. More tests.
    Developed unit test infrastructure.

0.0.3 2008-07-06T00:08:00
    Changed render to always use A2CRenderTemplateDir as INCLUDE_PATH.
    Various and sundry updates. Trying to get Directives to work. Bleah.

0.0.2 2008-07-02T00:08:00
    Woo-hoo! Rendering and sessions work. More tests.

0.0.1 2008-06-01T00:00:00
    First version, released on an unsuspecting world.
https://metacpan.org/changes/distribution/Apache2-Controller
4.4.1. Generation and Analysis of HOLE pore profiles (Deprecated) — MDAnalysis.analysis.hole
With the help of this module, the hole program from the HOLE suite of tools [Smart1993] [Smart1996] can be run on frames in an MD trajectory or NMR ensemble in order to analyze an ion channel pore or transporter pathway [Stelzl2014] as a function of time or arbitrary order parameters. Data can be combined and analyzed.
HOLE must be installed separately and can be obtained in binary form from or as source from. (HOLE is open source and available under the Apache v2.0 license.)
4.4.1.1. Examples for using HOLE
The two classes HOLE and HOLEtraj in this module primarily act as wrappers around the hole program from the HOLE suite of tools. They contain many options to set options of hole. However, the defaults often work well and the following examples should be good starting points for applying HOLE to other problems.
4.4.1.1.1. Single structure
The following example runs hole on the experimental structure of the Gramicidin A (gA) channel. We use the HOLE class, which acts as a wrapper around hole. It therefore shares some of the limitations of HOLE, namely, that it can only process PDB files [1].

from MDAnalysis.analysis.hole import HOLE
from MDAnalysis.tests.datafiles import PDB_HOLE

H = HOLE(PDB_HOLE, executable="~/hole2/exe/hole")  # set path to your hole binary
H.run()
H.collect()
H.plot(linewidth=3, color="black", label=False)

(The example assumes that hole was installed as ~/hole2/exe/hole.)
4.4.1.1.2. Trajectory
One can also run hole on frames in a trajectory with HOLEtraj. In this case, provide a Universe:

import MDAnalysis as mda
from MDAnalysis.analysis.hole import HOLEtraj
from MDAnalysis.tests.datafiles import MULTIPDB_HOLE

u = mda.Universe(MULTIPDB_HOLE)
H = HOLEtraj(u, executable="~/hole2/exe/hole")
H.run()
H.plot3D()

The profiles are available as the attribute HOLEtraj.profiles (H.profiles in the example) and are indexed by frame number but can also be indexed by an arbitrary order parameter as shown in the next example.
4.4.1.1.3. Trajectory with RMSD as order parameter
In order to classify the HOLE profiles \(R(\zeta)\) the RMSD \(\rho\) to a reference structure is calculated for each trajectory frame (e.g. using the MDAnalysis.analysis.rms.RMSD analysis class). Then the HOLE profiles \(R_\rho(\zeta)\) can be ordered by the RMSD, which acts as an order parameter \(\rho\).

import MDAnalysis as mda
from MDAnalysis.analysis.hole import HOLEtraj
from MDAnalysis.analysis.rms import RMSD
from MDAnalysis.tests.datafiles import PDB_HOLE, MULTIPDB_HOLE

mda.start_logging()
ref = mda.Universe(PDB_HOLE)     # reference structure
u = mda.Universe(MULTIPDB_HOLE)  # trajectory

# calculate RMSD
R = RMSD(u, reference=ref, select="protein", weights='mass')
R.run()

# HOLE analysis with order parameters
H = HOLEtraj(u, orderparameters=R.rmsd[:, 2], executable="~/hole2/exe/hole")
H.run()

The HOLEtraj.profiles dictionary will have the order parameter as key for each frame. The plot functions will automatically sort the profiles by ascending order parameter. To access the individual profiles one can simply iterate over the sorted profiles (see HOLEtraj.sorted_profiles_iter()). For example, in order to plot the minimum radius as a function of the order parameter we iterate over the profiles and process them in turn:

import numpy as np
import matplotlib.pyplot as plt

r_rho = np.array([[rho, profile.radius.min()] for rho, profile in H])

ax = plt.subplot(111)
ax.plot(r_rho[:, 0], r_rho[:, 1], lw=2)
ax.set_xlabel(r"order parameter RMSD $\rho$ ($\AA$)")
ax.set_ylabel(r"minimum HOLE pore radius $r$ ($\AA$)")
(The graph shows that the pore opens up with increasing order parameter.)
4.4.1.2. Data structures
A profile is stored as a numpy.recarray with the fields frame, rxncoord and radius:

frame
    integer frame number (only important when HOLE itself reads a trajectory)
rxncoord
    the distance along the pore axis, in Å
radius
    the pore radius, in Å

The HOLE.profiles or HOLEtraj.profiles dictionary holds one profile for each key. By default the keys are the frame numbers but HOLEtraj can take the optional orderparameters keyword argument and load an arbitrary order parameter for each frame. In that case, the key becomes the orderparameter.
Notes
The profiles dict is not ordered and hence one typically needs to manually order the keys first. Furthermore, duplicate keys are not possible: In the case of duplicate orderparameters, the last one read will be stored in the dict.
4.4.1.3. Analysis
class MDAnalysis.analysis.hole.HOLE(filename, **kwargs)
Run hole on a single frame or a DCD trajectory.
hole is part of the HOLE suite of programs. It is used to analyze channels and cavities in proteins, especially ion channels.
Only a subset of all HOLE control parameters is supported and can be set with keyword arguments.
hole (as a FORTRAN77 program) has a number of limitations when it comes to filename lengths (they must be shorter than the empirically found HOLE.HOLE_MAX_LENGTH). This class tries to work around them by creating temporary symlinks to files when needed but this can still fail when permissions are not correctly set on the current directory.
Running hole with the HOLE class is a 3-step process:

- set up the class with all desired parameters
- run hole with HOLE.run()
- collect the data from the output file with HOLE.collect()

The class also provides some simple plotting functions of the collected data such as HOLE.plot() or HOLE.plot3D().
New in version 0.7.7.
Changed in version 0.16.0: Added raseed keyword argument.
Set up parameters to run HOLE on PDB filename.
Notes
- An alternative way to load in multiple files is a direct read from a CHARMM binary dynamics DCD coordinate file, using the dcd keyword, or use HOLEtraj.
- HOLE is very picky and does not read all DCD-like formats [1]. If in doubt, look into the logfile for error diagnostics.
profiles
After running HOLE.collect(), this dict contains all the HOLE profiles, indexed by the frame number. If only a single frame was analyzed then this will be HOLE.profiles[0]. The entries are stored in order of calculation, but one can also sort it by the keys.
Note
Duplicate keys are not possible. The last key overwrites previous values. This is arguably a bug.
check_and_fix_long_filename(filename, tmpdir='.')
Return a file name suitable for HOLE.

HOLE is limited to filenames <= HOLE.HOLE_MAX_LENGTH. This method

- returns filename if HOLE can process it
- returns a relative path (see os.path.relpath()) if that shortens the path sufficiently
- creates a symlink to filename (os.symlink()) in a safe temporary directory and returns the path of the symlink. The temporary directory and the symlink are stored in HOLE.tempfiles and HOLE.tempdirs and deleted when the HOLE instance is deleted or garbage collected.
collect(**kwargs)
Parse the output from a HOLE run into numpy recarrays. It can process outputs containing multiple frames (when a DCD was supplied to hole).

Output format: frame rxncoord radius

The method saves the result as HOLE.profiles, a dictionary indexed by the frame number. Each entry is a numpy.recarray.

If the keyword outdir is supplied (e.g. ".") then each profile is saved to a gzipped data file.
create_vmd_surface(filename='hole.vmd', **kwargs)
Process HOLE output to create a smooth pore surface suitable for VMD. Takes the sphpdb file and feeds it to sph_process and sos_triangle as described under Visualization of HOLE results.

Load the output file filename into VMD by issuing in the tcl console:

source hole.vmd

The level of detail is determined by HOLE.dotden (which can be overridden by the keyword dotden).

The surface will be colored so that parts that are inaccessible to water (pore radius < 1.15 Å) are red, water accessible parts (1.15 Å < pore radius < 2.30 Å) are green, and wide areas (pore radius > 2.30 Å) are blue.
default_ignore_residues = ['SOL', 'WAT', 'TIP', 'HOH', 'K ', 'NA ', 'CL ']
List of residues that are ignored by default. Can be changed with the ignore_residues keyword.
class MDAnalysis.analysis.hole.HOLEtraj(universe, **kwargs)
Analyze all frames in a trajectory.
The HOLE class provides a direct interface to HOLE. HOLE itself has limited support for analysing trajectories but cannot deal with all the trajectory formats understood by MDAnalysis. This class can take any universe and feed it to HOLE. It sequentially creates a temporary PDB for each frame and runs HOLE on the frame.

The trajectory can be sliced by passing the start, stop, and step keywords to HOLEtraj.run(). (hole is not fast, so slicing a trajectory is recommended.)
Frames of the trajectory can be associated with order parameters (e.g., RMSD) in order to group the HOLE profiles by order parameter (see the orderparameters keyword).
Set up the HOLE analysis over a trajectory.
Changed in version 1.0.0: Support for the start, stop, and step keywords has been removed. These should instead be passed to HOLEtraj.run().

Changed in version 1.0.0: Changed selection keyword to select.
profiles
After running HOLE.collect(), this dict contains all the HOLE profiles, indexed by the frame number or the order parameter (if orderparameters was supplied). The entries are stored in order of calculation, but one can also sort it by the keys.
Note
Duplicate keys are not possible. The last key overwrites previous values. This is arguably a bug.
guess_cpoint(select='protein', **kwargs)
Guess a point inside the pore.
This method simply uses the center of geometry of the selection as a guess. select is “protein” by default.
Changed in version 1.0.0: Changed selection keyword to select.
run(start=None, stop=None, step=None)
Run HOLE on the whole trajectory and collect profiles.
4.4.1.4. Utilities
MDAnalysis.analysis.hole.write_simplerad2(filename='simple2.rad')
Write the built-in radii in SIMPLE2_RAD to filename. Does nothing if filename already exists.
MDAnalysis.analysis.hole.
SIMPLE2_RAD= '\nremark: Time-stamp: <2005-11-21 13:57:55 oliver> [OB]\nremark: van der Waals radii: AMBER united atom\nremark: from Weiner et al. (1984), JACS, vol 106 pp765-768\nremark: Simple - Only use one value for each element C O H etc.\nremark: van der Waals radii\nremark: general last\nVDWR C??? ??? 1.85\nVDWR O??? ??? 1.65\nVDWR S??? ??? 2.00\nVDWR N??? ??? 1.75\nVDWR H??? ??? 1.00\nVDWR H? ??? 1.00\nVDWR P??? ??? 2.10\nremark: ASN, GLN polar H (odd names for these atoms in xplor)\nVDWR E2? GLN 1.00\nVDWR D2? ASN 1.00\nremark: amber lone pairs on sulphurs\nVDWR LP?? ??? 0.00\nremark: for some funny reason it wants radius for K even though\nremark: it is on the exclude list\nremark: Use Pauling hydration radius (Hille 2001) [OB]\nVDWR K? ??? 1.33\nVDWR NA? ??? 0.95\nVDWR CL? ??? 1.81\nremark: funny hydrogens in gA structure [OB]\nVDWR 1H? ??? 1.00\nVDWR 2H? ??? 1.00\nVDWR 3H? ??? 1.00\nremark: need bond rad for molqpt option\nBOND C??? 0.85\nBOND N??? 0.75\nBOND O??? 0.7\nBOND S??? 1.1\nBOND H??? 0.5\nBOND P??? 1.0\nBOND ???? 0.85\n'¶
van der Waals radii are AMBER united atom from Weiner et al. (1984), JACS, vol 106 pp765-768. Simple - Only use one value for each element C O H etc. Added radii for K+, NA+, CL- (Pauling hydration radius from Hille 2002). The data file can be written with the convenience function
write_simplerad2().
MDAnalysis.analysis.hole.seq2str(v)
Return sequence as a string of numbers with spaces as separators. In the special case of None, the empty string "" is returned.
class MDAnalysis.analysis.hole.ApplicationError
Raised when an external application failed.
The error code is specific for the application.
New in version 0.7.7.
References
https://docs.mdanalysis.org/1.0.0/documentation_pages/analysis/hole.html
Dear experts
I would like to add a register/login feature to my program. I will be very grateful if anyone can give me hints on how I can create a secure database to store the users' login information. The simple code below just stores the information in a file, which of course is not a good method unless I use some sort of encryption...
#include <stdio.h>
#include <string.h>

struct user_info {
    char user[30];
    char pass[30];
};

int main()
{
    FILE *fp;
    struct user_info store;
    int a, b;

    printf("Please enter a username: ");
    gets(store.user);
    printf("Please enter a password: ");
    gets(store.pass);

    while (1) {
        a = strlen(store.user);
        if (a >= 5 && a <= 15) {
            b = strlen(store.pass);
            if (b >= 5 && b <= 15) {
                printf("Creating account....\n");
                break;
            } else {
                printf("Please enter a password using characters between 5 and 15!\nTry again: ");
                gets(store.pass);
                continue;
            }
        } else {
            printf("Please enter a username using characters between 5 and 15!\nTry again: ");
            gets(store.user);
            continue;
        }
    }

    if ((fp = fopen("User.dat", "w")) == NULL) {
        printf("Failed to create account!\n");
        return 1;
    } else {
        fprintf(fp, "User: %s\tPass: %s\t", store.user, store.pass);
        fclose(fp);
    }

    printf("Account has been created.....\n\n");
    return 0;
}
Looking forward to your replies :)
You shouldn't store the passwords as given, nor encrypted. Instead you should
hash them together with the username and a salt in a secure way. After that
it doesn't really matter how you store it, as it doesn't contain any sensitive
information.
Don't use MD5 or SHA-1.
Thanks for your reply. I'm still not sure how I would do that. Would I need to create a function which hashes the 2 strings together and a simple fprintf() into a file to store them? I imagine I would need to create a second function to decrypt the hashes back to a readable string? If anyone can suggest more information on hash functions and how they work, I will be most grateful. Would like to grok them!
Many thanks
OK guys, I've been looking at hash functions for an hour or so this morning, and I thought I would have a go at making my own... To keep this simple I haven't included a salt (that's the next stage); instead I have just based it on the old RSHash function, so I can understand the notation of hashing the user-name + password together in code :-)... Also stuck on a Windows machine at this current time so no crypt() function, which I believe is only Unix based...
here it goes...
/* My first hash function */
#include <stdio.h>
#include <string.h>

/* A function which hashes username & password together.
   Note: the loop walks both strings for the combined length, so it also
   reads past the shorter string's terminator into the rest of that buffer. */
int hashit(char *str1, char *str2, unsigned int len)
{
    unsigned int b = 378551;
    unsigned int a = 63689;
    unsigned int hash = 0;
    unsigned int i = 0;

    for (i = 0; i < len; str1++, str2++, i++) {
        hash = hash * a + (*str1) + (*str2);
        a = a * b;
    }
    printf("The hash is: %u\n", hash);
    return hash;
}

/* Main */
int main()
{
    char user[30];          /* Username */
    char pass[30];          /* Password */
    int ulen, plen, total;  /* Length of username + password */

    printf("Hash test...\n");
    printf("\nPlease enter a username: ");
    gets(user);
    printf("\nPlease enter a password: ");
    gets(pass);

    /* Check character length on input */
    while (1) {
        ulen = strlen(user);
        if (ulen >= 5 && ulen <= 10) {
            plen = strlen(pass);
            if (plen >= 5 && plen <= 10) {
                total = ulen + plen;
                /* Show results */
                printf("\nDisplay results...\n\nUser: %s\tUser_len: %d\tPass: %s\tPass_len: %d\tTotal: %d\t\n\n",
                       user, ulen, pass, plen, total);
                hashit(user, pass, total); /* Hash the strings */
                break;
            } else {
                printf("\nPlease enter a password using characters between 5 and 10!\nTry again: ");
                gets(pass);
                continue;
            }
        } else {
            printf("\nPlease enter a username using characters between 5 and 10!\nTry again: ");
            gets(user);
            continue;
        }
    }

    getchar();
    return 0;
}
Output....
Hash test...

Please enter a username: Daniel

Please enter a password: hashing

Display results...

User: Daniel    User_len: 6    Pass: hashing    Pass_len: 7    Total: 13

The hash is: 4225718179
It Works... Looking forward to hearing your great knowledge once again :-)
Damn it! I know all this, I'm annoyed with myself for even thinking of that notation in the first place :-(... Oh well back to the drawing board, need to look up and see if windoze have any functions for creating ssl sockets...
Well, it's not such a bad idea for account creation, because then it doesn't
really matter. It's just that for every login after that you have to do it the
conventional way of sending the username + password, so you're only making
the implementation more complicated.
But if you never want to send the plain password then you can do double
hashing, with the client sending a hash, and the server hashing that again
with its salt added. If you want the client to be stateless but still protect a bit
against pre-computed dictionary attacks, then always use the same salt for the
client hashing. E.g. hash username + fixedsalt + password on the client, and
just username + clienthash + salt on the server.
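A minimal sketch of that double-hashing scheme, in Python rather than C for brevity; the function names, the fixed client salt value and the choice of SHA-256 are illustrative assumptions (a real system would use a slow password hash such as bcrypt):

```python
import hashlib

FIXED_CLIENT_SALT = "example-app-v1"  # assumed constant, identical on every client

def client_hash(username, password):
    # hash(username + fixedsalt + password) on the client,
    # so the plain password never leaves the machine
    data = (username + FIXED_CLIENT_SALT + password).encode()
    return hashlib.sha256(data).hexdigest()

def server_hash(username, clienthash, server_salt):
    # the server hashes again with its own per-user salt:
    # hash(username + clienthash + salt)
    data = (username + clienthash + server_salt).encode()
    return hashlib.sha256(data).hexdigest()

# What gets stored at account creation:
ch = client_hash("Daniel", "hashing")
stored = server_hash("Daniel", ch, "per-user-salt")

# At login the client sends only ch; the server repeats its step and compares:
print(server_hash("Daniel", client_hash("Daniel", "hashing"), "per-user-salt") == stored)
```

The server never sees the plain password, and a stolen database of server hashes is still salted per user.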
Sending the plain text password should never be done, just send the hash.
It's only one extra hash on the client side, nothing needs to change on the
server side.
But yeah, once you need SSL, you might as well do the RSA thing both ways.
|
http://developerweb.net/viewtopic.php?id=6873
|
CC-MAIN-2019-35
|
refinedweb
| 895
| 72.66
|
How to use authorization in Laravel: Gates, policies, roles and permissions
In a previous entry, I mentioned the importance of hand-in-hand authorization and authentication. Now, let’s talk about the many ways that Laravel provides to apply authorization to your application.
The Laravel documentation describes multiple tools to authorize access to your application. It goes into detail about creating, constructing, and applying these authorization mechanisms. However, it only gives light direction about which method is best to use in your application. That’s because each application is different, and the way you apply authorization can be subjective. One of the packages I describe later, Spatie’s Laravel Permission, also walks the same tightrope. They make sure to integrate with Laravel and provide robust features but generally hint at guidance.
So, how do you decide which authorization mechanism to apply? Do you use Laravel’s built-in tools, or must you install a third-party package to get the functionality you need?
This question is complicated, but we can work towards an answer. Let’s begin by examining what we have available to us.
Authorization tools available in a Laravel app
Laravel provides gates and policies right out of the box. You can read the authorization documentation for detailed implementation instructions. But let’s talk specifically about each and what they’re best used for.
Gates are more high-level and generic. While they can be applied to specific objects (or Eloquent models), they tend to be more business process-oriented or abstract. I like to think of this by picturing a gatekeeper. You have to get past the gatekeeper or bouncer to get into the club. Inside the club, that’s a whole different story as you interact with individuals. That’s more along the lines of policies.
Policies tend to match up nicely with the basic CRUD (Create, Read, Update, Delete) mechanics. By design, they provide a paradigm for these actions.
Policies tend to be applied to a resource, like an Eloquent model, to determine if the user can do one of the CRUD actions. They’re not limited to these actions, though. They can be expanded with custom actions and used to authorize relationships. (Laravel Nova uses registered policies to determine if you can do things like creating or attaching models to a relationship.)
Gates and policies offer a mix of fine-grain and abstract authorization, but they lack a hierarchy or grouping functionality. That’s where the Laravel-permission package by Spatie shines. It provides a layer that both Gates and Policies can take advantage of. This functionality includes roles, permissions and teams.
Permissions describe if a user can do an action in a general, non-specific sense. Roles describe the role a user may play in your application. Users can have multiple roles. Permissions are applied to roles. (Permissions can be applied to a user directly, but I’d advise against that.) Finally, and optionally, teams are groups of users. Those Teams can contain many roles.
Now you know what’s all available. See how this could get confusing? There seem to be many ways to solve the same problem if you’re looking at these definitions. Hopefully, things will clear up when we look at these in practice.
Laravel gate example
In this scenario, I want to ensure my user has gained at least 100 points to see the link for redemption. I’m storing the number of points directly on the
user model.
Let’s see what the gate might look like:
use App\Models\User;
use Illuminate\Support\Facades\Gate;
Gate::define('access-redemptions', function (User $user) {
return $user->points >= 100;
});
Now, let’s see what our navigation HTML in the Blade file may look like:
<nav>
<a href="{{ route('dashboard') }}">Dashboard</a>
@can('access-redemptions')
<a href="{{ route('redemptions') }}">Redemptions</a>
@endcan
</nav>
Here, we can see that we’re using Laravel’s Blade
@can directive to check the authorization of this action for the current user.
To complete the full check, we’d probably add something like this at the top of our method accessed for the
redemptions route:
use Illuminate\Support\Facades\Gate;
Gate::authorize('access-redemptions');
If the user did not have permission to
access-redemptions, an authorization exception would be thrown.
So, let’s break this down so we can tell why this was the perfect use for a gate:
- It is a generic action: “can we access a business process” is basically what the question is. Can I access redemptions? Well, only if you have 100 or more points.
- Even though it depends on the current user, an Eloquent model, it doesn’t apply to another model. We’re not checking some external resource or model for points. Instead, we’re looking at ourselves, what we know about our state, so that means it’s probably not something a policy would work with.
- We need to do a calculation, so roles and permissions are out.
They only provide a binary determination if an action is allowed or not.
Laravel policy example
To demonstrate a policy, let’s pick a simple example. I want to authorize only owners of a book to update it. Each book has a
user_id field that represents the owner.
Here’s what that policy class would look like:
namespace App\Policies;
use App\Models\Book;
use App\Models\User;
class BookPolicy
{
public function update(User $user, Book $book): bool
{
return $book->user_id === $user->id;
}
}
Now, we want to authorize our controller method. Normally I’d recommend using a resourceful controller with the authorizeResource() helper. But, let’s demonstrate this in a more verbose way by applying it directly in the
update() method of a
BookController.
namespace App\Http\Controllers;
class BookController extends Controller
{
public function update(Book $book, Request $request)
{
$this->authorize('update', $book);
// ... code to apply updates
}
}
The
BookController::authorize() method, or authorization helper, will pass the current user into the
BookPolicy::update() method along with the updated instance of the
$book. If the policy method returns
false, an authorization exception would be thrown.
Why is a Policy the chosen authorization tool? First, we are working with a specific type of action: we have a noun and want to do something. We have a book, and we want to update it in this case. Second, since it’s a specific Eloquent model, a Policy is the best tool to work with individual items. Finally, because this is a CRUD type action, and we’re already following the paradigm of naming methods after their action in the controller, that’s a great hint that we should be using the same method names in a policy named after that model.
Laravel role and permission example
To demonstrate the Role and Permission authorization tools, let’s think about an organization with departments. In that company, there is a sales team and a support team. The sales team can see client billing information but cannot change it. The Support team can see and update client billing information.
In order to accomplish this, I want two permissions and two roles. Let’s set them up:
use Spatie\Permission\Models\Role;
use Spatie\Permission\Models\Permission;
$sales = Role::create(['name' => 'sales']);
$support = Role::create(['name' => 'support']);
$seeClientBilling = Permission::create(['name' => 'see-client-billing']);
$updateClientBilling = Permission::create(['name' => 'update-client-billing']);
$sales->givePermissionTo($seeClientBilling);
$support->givePermissionTo($seeClientBilling);
$support->givePermissionTo($updateClientBilling);
We’ve registered two roles and applied the appropriate permissions to each role. Now, users who have these roles will inherit those permissions as well.
Now, let’s see a few methods in our billing controller.
namespace App\Http\Controllers;
use App\Models\Client;
use Illuminate\Http\Request;
class ClientBillingController extends Controller
{
public function show(Client $client, Request $request)
{
abort_unless($request->user()->can('see-client-billing'), 403);
return view('client.billing', ['client' => $client]);
}
public function update(Client $client, Request $request)
{
abort_unless($request->user()->can('update-client-billing'), 403);
// code to update billing information
}
}
Now, if a user visits
ClientBillingController::show() with either role of
sales or support
they will have access to see the billing information. Only users with the role
support, which grants them the
update-client-billing permission, will be able to submit to the
update() method.
Why are Roles and Permissions the right authorization choice? You could accomplish the same sort of thing with Gates or, to some extent, policies. But, roles and permissions make it easier to understand and apply the permission approach in only one location. Let’s say in the future you want Sales to be able to update Client billing information as well: you’d only have to add the
update-client-billing permission to the
sales role. One quick change. You wouldn’t have to check various gates or track down policies. This type of action, which is not necessarily unique to a specific model but provides levels of access or authorization, makes roles and permissions the perfect tool.
TLDR; which authorization mechanism should I use?
Gates are used for specific functionality outside the standard CRUD mechanisms, and they’re great for broad, sweeping access to whole sections or modules. Policies work best with the CRUD paradigm, authorizing specific objects or Eloquent models. Roles and permissions work well when each group or department needs to accomplish specific actions. Functionally, there is a lot of overlap between these tools, so you might find yourself mixing and matching. Also, you may integrate one inside another (like permission checking combined with ownership verification in a policy).
|
https://resources.infosecinstitute.com/topic/how-to-use-authorization-in-laravel-gates-policies-roles-and-permissions/
|
CC-MAIN-2022-27
|
refinedweb
| 1,621
| 55.95
|
Azure Serverless Real-Time Application and Its Challenges
Learn how you can implement an order processing scenario with Azure serverless components and some of the challenges you might face.
Here are the business requirements to be achieved in the serverless orchestration, which solves a specific part of the Order Processing application to manage orders for books:
- Filter orders for "Book" and execute
- Segregate orders for hard copy and soft copy
- Retrieve the book price from the Database
- Generate Invoice
- Notify Customer on order confirmation
A prerequisite to creating a Service Bus Queue or Topic is a Service Bus namespace. Create a new namespace with a name that is unique across Azure.
To create the Namespace in the Azure portal:
- Click on the + Create a resource, select Integration and then Service Bus.
- Enter a name for the namespace which should be unique across Azure. Select Pricing tier, Subscription, Resource Group, and Location. Click Create.
- Now that the Service Bus Namespace is created, inside the Namespace, click + Queue button to create a Queue that handles the order message.
- Enter the details required to create the queue and click Create.
- Orderprocessingqueue is now created. Similarly, a queue with the name product-queue needs to be created to process the order details of products other than the book.
Create the Functions — ordervalidator and
Out-of-the-box, the orchestration covers these 4 stages:
- Order details processing:
- Segregation of orders based on their type:
- Book Type Validation: the ordervalidator Function returns "true" if the order is for a book or "false" if not. The Condition will segregate and send the order message to the respective entities based on the return value of the ordervalidator Function. Below is the orchestration up to the condition.
When the Condition fails, the order message will be sent to productqueue. When it passes, the order message will be sent to the topic orderprocessingtopic, which has the subscriptions HardCopy and SoftCopy. Here is the orchestration when the condition is passed.
Invoice and Notification:
- For Invoice and Notification, there are a lot of Logic App connectors, like Stripe and FreshBooks, that can be used for these stages.
Conclusion
In this blog, we discussed and arrived at a solution for the questions below:
- How to implement the order processing scenario using Azure serverless components?
- The challenges that the user will encounter in managing and monitoring this application in the Azure portal?
Published at DZone with permission of Surya Venkat. See the original article here.
Opinions expressed by DZone contributors are their own.
|
https://dzone.com/articles/azure-serverless-real-time-application-and-its-cha
|
CC-MAIN-2020-29
|
refinedweb
| 466
| 52.39
|
I am a beginner in the C language and I am practicing to develop my coding skills. I am trying to build a
.c
program with an if statement on double values:
#include <stdio.h>
int main()
{
double 0.5 ;
double *w;
for (i=0;i<= int 10; i++) w[i] = runif(0,1);
if (w[i] < double 0.5 )
{
int 4 + int 5
} else if w[i] < double 0.2){
int 10 + int 5
}else{
w[i]
}
return 0;
}
expected identifier or '{'
use of undeclared identifier 'w'
I think that none of us can deduce from your code what you want to do exactly, however, here is a version that compiles and is correct. Now you adapt it to let it do what you actually want to do.
#include <stdio.h>

int main()
{
    double w[11];
    int i, j;
    for (i = 0; i <= 10; i++) {
        w[i] = runif(0, 1);
        if (w[i] < 0.5) {
            j = 4 + 5;
        } else if (w[i] < 0.2) {
            j = 10 + 5;
        } else {
            printf("w[%d]= %f", i, w[i]);
        }
    }
    return 0;
}
|
https://codedump.io/share/STmzawZxqRPY/1/if-statement-with-double-value-in-c-language
|
CC-MAIN-2019-13
|
refinedweb
| 173
| 83.15
|
robotparser – Internet spider access control
Note
The robotparser module has been renamed urllib.robotparser in Python 3.0. Existing code using robotparser can be updated using 2to3.
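For Python 3, the equivalent code under the new module name looks like this (a small sketch; parse() is fed the rules directly here, so no network access is needed):

```python
from urllib import robotparser  # Python 3 name for the robotparser module

rp = robotparser.RobotFileParser()
# parse() accepts the file contents as an iterable of lines,
# so we can test rules without fetching a robots.txt over the network
rp.parse([
    "User-agent: *",
    "Disallow: /admin/",
    "Disallow: /downloads/",
])
print(rp.can_fetch("PyMOTW", "/PyMOTW/"))   # allowed path
print(rp.can_fetch("PyMOTW", "/admin/"))    # disallowed path
```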
robots.txt
The robots.txt file format is a simple text-based access control system for computer programs that automatically access web resources (“spiders”, “crawlers”, etc.). The file is made up of records that specify the user agent identifier for the program followed by a list of URLs (or URL prefixes) the agent may not access.
This is the robots.txt file for:
User-agent: *
Disallow: /admin/
Disallow: /downloads/
Disallow: /media/
Disallow: /static/
Disallow: /codehosting/
It prevents access to some of the expensive parts of my site that would overload the server if a search engine tried to index them. For a more complete set of examples, refer to The Web Robots Page.
Simple Example
Using the data above, a simple crawler can test whether it is allowed to download a page using the RobotFileParser‘s can_fetch() method.
import robotparser
import urlparse

AGENT_NAME = 'PyMOTW'
URL_BASE = ''

parser = robotparser.RobotFileParser()
parser.set_url(urlparse.urljoin(URL_BASE, 'robots.txt'))
parser.read()

PATHS = [
    '/',
    '/PyMOTW/',
    '/admin/',
    '/downloads/PyMOTW-1.92.tar.gz',
    ]

for path in PATHS:
    print '%6s : %s' % (parser.can_fetch(AGENT_NAME, path), path)
    url = urlparse.urljoin(URL_BASE, path)
    print '%6s : %s' % (parser.can_fetch(AGENT_NAME, url), url)
    print
The URL argument to can_fetch() can be a path relative to the root of the site, or a full URL.
$ python robotparser_simple.py

  True : /
  True :

  True : /PyMOTW/
  True :

 False : /admin/
 False :

 False : /downloads/PyMOTW-1.92.tar.gz
 False :
Long-lived Spiders
An application that takes a long time to process the resources it downloads or that is throttled to pause between downloads may want to check for new robots.txt files periodically based on the age of the content it has downloaded already. The age is not managed automatically, but there are convenience methods to make tracking it easier.
import robotparser
import time
import urlparse

AGENT_NAME = 'PyMOTW'

parser = robotparser.RobotFileParser()
# Using the local copy
parser.set_url('robots.txt')
parser.read()
parser.modified()

PATHS = [
    '/',
    '/PyMOTW/',
    '/admin/',
    '/downloads/PyMOTW-1.92.tar.gz',
    ]

for n, path in enumerate(PATHS):
    print
    age = int(time.time() - parser.mtime())
    print 'age:', age,
    if age > 1:
        print 're-reading robots.txt'
        parser.read()
        parser.modified()
    else:
        print
    print '%6s : %s' % (parser.can_fetch(AGENT_NAME, path), path)
    # Simulate a delay in processing
    time.sleep(1)
This extreme example downloads a new robots.txt file if the one it has is more than 1 second old.
$ python robotparser_longlived.py

age: 0
  True : /

age: 1
  True : /PyMOTW/

age: 2 re-reading robots.txt
 False : /admin/

age: 1
 False : /downloads/PyMOTW-1.92.tar.gz
A “nicer” version of the long-lived application might request the modification time for the file before downloading the entire thing. On the other hand, robots.txt files are usually fairly small, so it isn’t that much more expensive to just grab the entire document again.
See also
- robotparser
- The standard library documentation for this module.
- The Web Robots Page
- Description of robots.txt format.
|
https://pymotw.com/2/robotparser/index.html
|
CC-MAIN-2018-51
|
refinedweb
| 509
| 59.4
|
OK, I sent this to Jesse, but realized I might also be better off (and
Jesse would be better off, *grin*) if I sent this to the list itself.
I had the old “webrt” debian package installed. In its configs, it’d
asked me for “what to call the RT system”, so I called it “ByramRT”,
because that seemed like a nice name, and then later promptly forgot all
about it. When I moved to the 2.0.x tarball, I followed the suggestions
which said “don’t pollute the namespace, use your hostname”. I then
happily imported my data…keyed off ByramRT.
Now I’ve got two names in the system (from the old system, and from the
new system), and it’s torqueing the entire thing as you can well
imagine. Any time someone replies to, say “rt.OURDOMAIN.com #NNN”, they
get a response from “ByramRT #XXX”, thanking them for starting a ticket
up, etc. etc.
Is there any clean way of solving this? Somehow I have the feeling this
is going to be one of those “dump the DB, trash it and start over” but
I’d really prefer NOT to do that.
D
|
https://forum.bestpractical.com/t/fsck/8065
|
CC-MAIN-2020-40
|
refinedweb
| 198
| 80.92
|
Summary: The smbfs in kernel 2.0.31-33 causes serious corruption of file
modification times of files on NT 4.0 (SP1/SP3) workstations & servers:

 1) Opening a file for reading via smbfs (and closing it) corrupts the
    mtime as seen locally on NT!
 2) Listing (stat'ing) a _file_ via smbfs returns a bogus mtime, but does
    not propagate it.
 3) Listing a _directory_ (with e.g. ls(1)) returns the correct NT mtimes.
    Of course, if Linux have previously corrupted the NT file, NT will
    dutifully return the bogus mtime in the new listing too.

Windows NT 3.51 (SP0/SP5) is not affected.  Don't know about Win95.

A preliminary patch at the end of this mail fixes the problem, provided
EXTRA_CFLAGS='-DCONFIG_SMB_NT4' is given on the `make' or `make modules'
command line.  Win95 testers wanted!

On Friday 16 Jan 1998, Eric Hoeltzel <eric@dogbert.sitewerks.com> said:

> I'm getting some weird dates with smbfs mounting some NT4S shares.
> Smbmount is version 2.0.2, kernel is 2.0.32. Any clues?
> [...]
> -rwxr-xr-x 1 danny users 21149 Thu Sep 11 04:52:16 1941 x
> -rwxr-xr-x 1 danny users    32 Tue Jan  9 10:27:12 1962 xx
>
> The last two files, x and xx, I created with a find > x. Immediately
> after creating them I checked the date and it was correct.
> A bit later they had changed for no apparent reason.
>
> Any ideas?

Certainly.  Shortly before you sent your mail, I captured a few small test
sessions with ls(1) and cat(1) on directories/files smbmounted on Linux
2.0.33 from NT4 SP3, and from NT3.51 SP5.  This resulted in the conclusions
summarized above.  That NT 4.0 SP1 is affected too, and that NT3.51 SP0 is
not, comes from earlier experience.

So 2.0.33's smb_proc_getattr_core() doesn't work with NT4.  What can we do?
I see several possibilities:

 1) Reviving the smb_proc_getattr_trans2 from Linux 2.0.30 that sends a
    TRANSACT2_QPATHINFO command at info level 1.  However, this command is
    known to be awfully slow with Win95.  The 2.0.30 smbfs was slow against
    NT 3.51 too, but perhaps for other reasons.

 2) Trying to backport the info level 259/260 code from 2.1.79, since Bill
    Hawes said it `works pretty well with NT4'.  Again Win95 seems to loose.

 3) Backporting Bill Hawes' smb_proc_getattr_ff() ("updated patch for
    2.1.79 smbfs", 15 Jan 1998).  It uses a info level 1 TRANSACT2_FINDFIRST
    command to get the file attributes, like 2.0.33's smb_proc_readdir_long
    does.  This solution was meant to give hope to Win95 lusers.

I chose solution 3), i.e. building on pieces partly from Bill Hawes' code,
partly from the 2.0.33 smb_proc_readdir_long() and smb_decode_long_dirent().
Apart from debugging code, all new code is currently wrapped in `#ifdef
CONFIG_SMB_NT4' ... `#endif' pairs, analogous to the existing
CONFIG_SMB_WIN95 option.  The patch is preliminary.  The final patch depends
on how well the preliminary one is received among testers and developers:

 * If all non-NT4 folks are happy with it, the #ifdef's can go away.  For
   example, a `find . -ls' on the D drive of my NT 3.51 SP5 machine
   completes in 52 seconds after the patch, compared with 42 before (7000
   files, 400 MB).  That factor 1.23 slowdown is insignificant to me.

 * If some think a choice is important for performance reasons, the
   #ifdef's stay, and then appropriate changes to fs/Config.in and
   Documentation/Configure.help must be added, with a suitable default.

 * If Win95 (or other) testers find that the patch works very poorly, then
   somebody else will come up with a completely different fix.

Finally, remember that this preliminary patch requires you to add a flag to
the make(1) command line (executed from /usr/src/linux/), e.g.

	make EXTRA_CFLAGS=-DCONFIG_SMB_NT4          # or
	make EXTRA_CFLAGS=-DCONFIG_SMB_NT4 modules  # (preferred)

As usual for 2.0.3x smbfs, debugging information is obtained by adding
`-DDEBUG_SMB=1' or `-DDEBUG_SMB=2' to the EXTRA_CFLAGS.  The latter is
extremely verbose.  (BTW, the patch preliminarily excludes the password from
being logged.  The final patch will probably exclude the last hunk, turning
password logging back on, even though I don't like it.)

====== Cut here ===========================================================
--- linux-2.0.33/fs/smbfs/proc.c.shipped	Mon Oct 27 12:30:47 1997
+++ linux-2.0.33/fs/smbfs/proc.c	Mon Jan 19 18:19:01 1998
@@ -1356,7 +1356,7 @@
 	smb_lock_server(server);
-	DDPRINTK("smb_proc_getattr: %s\n", name);
+	DDPRINTK("smb_proc_getattr_core: name=%s\n", name);
 retry:
 	buf = server->packet;
@@ -1378,11 +1378,100 @@
 	entry->f_ctime = entry->f_atime =
 	entry->f_mtime = local2utc(DVAL(buf, smb_vwv1));
+	DDPRINTK("smb_proc_getattr_core: mtime=%ld\n", entry->f_mtime);
+
 	entry->f_size = DVAL(buf, smb_vwv3);
 	smb_unlock_server(server);
 	return 0;
 }
+#ifdef CONFIG_SMB_NT4
+/*
+ * This version use the trans2 findfirst to get the attribute info.
+ */
+static int
+smb_proc_getattr_ff(struct inode *dir, const char *name, int len,
+		    struct smb_dirent *entry)
+{
+	unsigned char *resp_data = NULL;
+	unsigned char *resp_param = NULL;
+	int resp_data_len = 0;
+	int resp_param_len = 0;
+
+	char param[SMB_MAXPATHLEN + 1 + 12];
+	int mask_len;
+	unsigned char *mask = &(param[12]);
+
+	int result;
+	char *p;
+	struct smb_server *server = SMB_SERVER(dir);
+
+	mask_len = smb_encode_path(server, mask,
+				   SMB_INOP(dir), name, len) - mask;
+
+	mask[mask_len] = 0;
+
+	DDPRINTK("smb_proc_getattr_ff: mask=%s\n", mask);
+
+	smb_lock_server(server);
+
+ retry:
+
+	WSET(param, 0, aSYSTEM | aHIDDEN | aDIR);
+	WSET(param, 2, 1);	/* max count */
+	WSET(param, 4, 2 + 1);	/* close on end + close after this call */
+	WSET(param, 6, 1);	/* info level */
+	DSET(param, 8, 0);
+
+	result = smb_trans2_request(server, TRANSACT2_FINDFIRST,
+				    0, NULL, 12 + mask_len + 1, param,
+				    &resp_data_len, &resp_data,
+				    &resp_param_len, &resp_param);
+
+	if (result < 0)
+	{
+		if (smb_retry(server))
+		{
+			DPRINTK("smb_proc_getattr_ff: error=%d, retrying\n",
+				result);
+			goto retry;
+		}
+		goto out;
+	}
+	if (server->rcls != 0)
+	{
+		result = -smb_errno(server->rcls, server->err);
+		if (result != -ENOENT)
+			DPRINTK("smb_proc_getattr_ff: rcls=%d, err=%d\n",
+				server->rcls, server->err);
+		goto out;
+	}
+	/* Make sure we got enough data ... */
+	result = -EINVAL;	/* WVAL(resp_param, 2) is ff_searchcount */
+	if (resp_data_len < 22 || WVAL(resp_param, 2) != 1)
+	{
+		DPRINTK("smb_proc_getattr_ff: bad result, len=%d, count=%d\n",
+			resp_data_len, WVAL(resp_param, 2));
+		goto out;
+	}
+	/* Decode the response (info level 1, as in smb_decode_long_dirent) */
+	p = resp_data;
+	entry->f_ctime = date_dos2unix(WVAL(p, 2), WVAL(p, 0));
+	entry->f_atime = date_dos2unix(WVAL(p, 6), WVAL(p, 4));
+	entry->f_mtime = date_dos2unix(WVAL(p, 10), WVAL(p, 8));
+	entry->f_size = DVAL(p, 12);
+	entry->attr = WVAL(p, 20);
+
+	DDPRINTK("smb_proc_getattr_ff: attr=%x\n", entry->attr);
+
+	result = 0;
+
+out:
+	smb_unlock_server(server);
+	return result;
+}
+#endif /* CONFIG_SMB_NT4 */
+
 int
 smb_proc_getattr(struct inode *dir, const char *name, int len,
 		 struct smb_dirent *entry)
@@ -1391,7 +1480,14 @@
 	int result;
 
 	smb_init_dirent(server, entry);
-	result = smb_proc_getattr_core(dir, name, len, entry);
+
+#ifdef CONFIG_SMB_NT4
+	if (server->protocol >= PROTOCOL_LANMAN2)
+		result = smb_proc_getattr_ff(dir, name, len, entry);
+	else
+#endif
+		result = smb_proc_getattr_core(dir, name, len, entry);
+
 	smb_finish_dirent(server, entry);
 
 	entry->len = len;
@@ -1612,8 +1708,8 @@
 		word passlen = strlen(server->m.password);
 		word userlen = strlen(server->m.username);
 
-		DPRINTK("smb_proc_connect: password = %s\n",
-			server->m.password);
+/*		DPRINTK("smb_proc_connect: password = %s\n", */
+/*			server->m.password); */
 		DPRINTK("smb_proc_connect: usernam = %s\n",
 			server->m.username);
 		DPRINTK("smb_proc_connect: blkmode = %d\n",
====== Cut here ===========================================================

Happy testing

-- 
Ulrik Dickow, Systems Programmer          Snail Mail: Kampsax Technology
E-mail: ukd@kampsax.dk                                P.O. Box 1142
Phone:  +45 36 39 08 00                               DK-2650 Hvidovre
Fax:    +45 36 77 03 01                               Denmark
|
https://lkml.org/lkml/1998/1/19/88
|
CC-MAIN-2016-44
|
refinedweb
| 1,191
| 50.12
|
as best movies. That is not a recommendation. When you browse through several categories or click several catchy posters and you get lines like, “People who bought/watched this also bought/watched that”, that is an example of a recommendation. It assumes that if you share a similar taste with someone, you are going to like what they liked.
Preparing the Data Set
MovieLens is a database which was prepared by the GroupLens Research Project at the University of Minnesota. The data set we are going to use for building the recommendation engine contains over a hundred thousand movie ratings, and each user has rated at least 20 movies. Rating sets of other sizes (1 million and larger) are also available.
Ratings
Movies
About the Algorithm:
To make a preference prediction for any user, collaborative filtering uses the preferences of other users with similar interests and predicts movies of interest that are unknown to that user. Spark MLlib uses Alternating Least Squares (ALS) to make recommendations. Here is a glimpse of the collaborative filtering method:
Here, user ratings on movies are represented as a matrix where each cell holds the rating a user gave a particular movie. The cells with “?” represent the movies which user U4 is not aware of or hasn’t seen. Based on the current preferences of user U4, each “?” cell can be filled with an approximate rating derived from users who have interests similar to user U4.
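To make the idea concrete, here is a toy, framework-free sketch (in Python for brevity; all numbers are made up) of how a factorization model fills in a “?” cell: each user and each movie gets a small latent-factor vector, and the predicted rating is their dot product. In a real system these vectors come out of training, e.g. via ALS:

```python
# Illustrative latent factors; "U4" is the user and "M1" a movie U4 has not rated.
user_factors = {
    "U4": [0.9, 0.2],
}
movie_factors = {
    "M1": [1.0, 0.5],
}

def predict(user, movie):
    # predicted rating = dot product of the user and movie factor vectors
    uf, mf = user_factors[user], movie_factors[movie]
    return sum(a * b for a, b in zip(uf, mf))

print(predict("U4", "M1"))  # 0.9*1.0 + 0.2*0.5 = 1.0
```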
Building Recommendation Model
Spark MLlib uses Alternating Least Squares (ALS) to build the recommendation model.
import org.apache.spark.mllib.recommendation.ALS
import org.apache.spark.mllib.recommendation.Rating

val data = sc.textFile("your_dir/ratings.dat")
val ratings = data.map(_.split("::") match {
  case Array(user, item, rate, timestamp) =>
    Rating(user.toInt, item.toInt, rate.toDouble)
})
val movies = movieRecommendationHelper.getMovieRDD.map(_.split("::"))
  .map { case Array(movieId, movieName, genre) => (movieId.toInt, movieName) }
val myRatingsRDD = movieRecommendationHelper.topTenMovies // gets 10 popular movies; see the linked code for details
val training = ratings.filter {
  case Rating(userId, movieId, rating) => (userId * movieId) % 10 <= 3
}.persist
val test = ratings.filter {
  case Rating(userId, movieId, rating) => (userId * movieId) % 10 > 3
}.persist
Training the Model
The rating data is split into two parts. The variable training contains 47% of the data for training; the rest is kept for evaluating the model. We join the ratings of user U4 (the user for whom we are going to predict) for the movies he has seen so far, so that the model can learn user U4’s taste in movies.
val rank = 8
val iteration = 10
val lambda = 0.01
val model = ALS.train(training.union(myRatingsRDD), rank, iteration, lambda)
val moviesIHaveSeen = myRatingsRDD.map(x => x.product).collect().toList
val moviesIHaveNotSeen = movies.filter {
  case (movieId, name) => !moviesIHaveSeen.contains(movieId)
}.map(_._1)
Evaluating the Model
Now let us evaluate the model on test data and get the prediction error. The prediction error is calculated as Root Mean Square Error (RMSE)
val predictedRates = model.predict(test.map {
    case Rating(user, item, rating) => (user, item)
  }).map {
    case Rating(user, product, rate) => ((user, product), rate)
  }.persist()
val ratesAndPreds = test.map {
    case Rating(user, product, rate) => ((user, product), rate)
  }.join(predictedRates)
val MSE = ratesAndPreds.map {
    case ((user, product), (r1, r2)) => Math.pow((r1 - r2), 2)
  }.mean()
The RMSE value was: 0.95.
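One caveat worth noting: the snippet above computes the mean squared error (the variable is even named MSE), while the quoted 0.95 is described as the root mean square error. The RMSE is simply the square root of the MSE. A minimal sketch with made-up (actual, predicted) rating pairs:

```python
import math

# (actual, predicted) rating pairs; the values here are made up for illustration
rates_and_preds = [(4.0, 3.5), (2.0, 2.5), (5.0, 4.0)]

# mean squared error, as in the Scala snippet above
mse = sum((r1 - r2) ** 2 for r1, r2 in rates_and_preds) / len(rates_and_preds)

# root mean square error is its square root
rmse = math.sqrt(mse)
print(round(rmse, 3))  # 0.707
```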
Making Recommendation for User U4
Here a new user id (0) is associated with each movie user U4 has not seen so far, and the MatrixFactorizationModel is used to predict ratings for the unknown movies. The predict method returns an RDD of Rating(user, item, rating), which is then sorted by rating to recommend the best movies out of the entire list.
val recommendedMoviesId = model.predict(moviesIHaveNotSeen.map { product =>
    (0, product)
  }).map {
    case Rating(user, movie, rating) => (movie, rating)
  }.sortBy(x => x._2, ascending = false).take(20).map(x => x._1)
The above code predicts ratings for (user, item) pairs where the user id is 0 (i.e. U4) and the items are all movies he hasn’t watched. The result is then sorted by the rating and the top 20 movie ids are returned.
You can view entire code example at: GitHub
References
[1] Collaborative Filtering Apache Spark Documentation.
1 thought on “Build your personalized movie recommender with Scala and Spark”
Thanks for the article! Much helpful.
Can you also do some other MLLib algorithms as recommender systems have already been done many times.
|
https://blog.knoldus.com/2016/07/01/build-your-personalized-movie-recommender-with-scala-and-spark/
|
CC-MAIN-2018-30
|
refinedweb
| 719
| 52.66
|